Various copyedits to reduce future tense, wordiness, and use of 'please' (#5788)

* Reword lots of instances of 'will'

* Reword lots of instances of won't

* Reword lots of instances of we'll

* Eradicate you'll

* Eradicate 'be able to' type of phrases

* Eradicate 'unable to' type of phrases

* Eradicate 'has / have to' type of phrases

* Eradicate 'note that' type of phrases

* Eradicate 'in order to' type of phrases

* Redirect to official Chef and Puppet docs

* Eradicate gratuitous 'please'

* Reduce use of e.g.

* Reduce use of i.e.

* Reduce use of N.B.

* Get rid of 'sexagesimal' and correct some errors
Misty Stanley-Jones, 2018-01-25 17:37:23 -08:00, committed by GitHub
parent 1df7737c73
commit a4f5e30249
583 changed files with 3729 additions and 4111 deletions


@ -124,5 +124,5 @@ know.
If you have questions about how to write for Docker's documentation, have a look
at the [style guide](https://docs.docker.com/opensource/doc-style/). The style
guide provides guidance about grammar, syntax, formatting, styling, language, or
tone. If something isn't clear in the guide, please submit an issue to let us
tone. If something isn't clear in the guide, submit an issue to let us
know or submit a pull request to help us improve it.


@ -12,7 +12,7 @@ We really want your feedback, and we've made it easy. You can edit, rate, or
file an issue at the bottom of every page on
[https://docs.docker.com/](https://docs.docker.com/).
**Please only file issues about the documentation in this repository.** One way
**Only file issues about the documentation in this repository.** One way
to think about this is that you should file a bug here if your issue is that you
don't see something that should be in the docs, or you see something incorrect
or confusing in the docs.
@ -21,7 +21,7 @@ or confusing in the docs.
ask in [https://forums.docker.com](https://forums.docker.com) instead.
- If you have an idea for a new feature or behavior change in a specific aspect
of Docker, or have found a bug in part of Docker, please file that issue in
of Docker, or have found a bug in part of Docker, file that issue in
the project's code repository.
## Contributing
@ -158,7 +158,7 @@ You have three options:
bundle install
```
>**Note**: You may have to install some packages manually.
>**Note**: You may need to install some packages manually.
f. Change the directory to `docker.github.io`.


@ -13,7 +13,7 @@ x-custom:
name: "custom"
```
The contents of those fields will be ignored by Compose, but they can be
The contents of those fields are ignored by Compose, but they can be
inserted in your resource definitions using [YAML anchors](http://www.yaml.org/spec/1.2/spec.html#id2765878).
For example, if you want several of your services to use the same logging
configuration:
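The example itself falls outside this hunk; a minimal sketch of the pattern (the version, field, and service names here are illustrative, not from the original):

```yaml
version: "3.4"

# Extension field: ignored by Compose itself, reusable through a YAML anchor
x-logging: &default-logging
  driver: json-file
  options:
    max-size: "10m"

services:
  web:
    image: nginx
    logging: *default-logging   # the alias pulls in the shared config
  worker:
    image: alpine
    logging: *default-logging
```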


@ -16,8 +16,8 @@ string. In the example above, if `POSTGRES_VERSION` is not set, the value for
the `image` option is `postgres:`.
You can set default values for environment variables using a
[`.env` file](../env-file.md), which Compose will automatically look for. Values
set in the shell environment will override those set in the `.env` file.
[`.env` file](../env-file.md), which Compose automatically looks for. Values
set in the shell environment override those set in the `.env` file.
> **Important**: The `.env file` feature only works when you use the
> `docker-compose up` command and does not work with `docker stack deploy`.
@ -27,9 +27,9 @@ Both `$VARIABLE` and `${VARIABLE}` syntax are supported. Additionally when using
the [2.1 file format](compose-versioning.md#version-21), it is possible to
provide inline default values using typical shell syntax:
- `${VARIABLE:-default}` will evaluate to `default` if `VARIABLE` is unset or
- `${VARIABLE:-default}` evaluates to `default` if `VARIABLE` is unset or
empty in the environment.
- `${VARIABLE-default}` will evaluate to `default` only if `VARIABLE` is unset
- `${VARIABLE-default}` evaluates to `default` only if `VARIABLE` is unset
in the environment.
Other extended shell-style features, such as `${VARIABLE/foo/bar}`, are not
@ -45,6 +45,6 @@ Compose.
command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE"
If you forget and use a single dollar sign (`$`), Compose interprets the value
as an environment variable and will warn you:
as an environment variable and warns you:
The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty string.
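Putting the substitution rules above together, a minimal sketch (the version, image, and default value are illustrative):

```yaml
version: "2.1"
services:
  db:
    # Falls back to 9.3 when POSTGRES_VERSION is unset or empty
    image: "postgres:${POSTGRES_VERSION:-9.3}"
    # $$ yields a literal $, so Compose performs no interpolation here
    command: "echo $$VAR_NOT_INTERPOLATED_BY_COMPOSE"
```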


@ -8,7 +8,7 @@
<div id="mac-add-keys" class="tab-pane fade in active">
<br>
{% capture mac-content-add %}
1. Start the `ssh-agent` in the background using the command `eval "$(ssh-agent -s)"`. You will get the agent process ID in return.
1. Start the `ssh-agent` in the background using the command `eval "$(ssh-agent -s)"`. You get the agent process ID in return.
```none
eval "$(ssh-agent -s)"


@ -53,7 +53,7 @@
$ ls -al ~/.ssh
```
This will list files in your `.ssh` directory.
This lists files in your `.ssh` directory.
2. Check to see if you already have SSH keys you can use.
@ -91,7 +91,7 @@
$ ls -al ~/.ssh
```
This will list files in your `.ssh` directory.
This lists files in your `.ssh` directory.
2. Check to see if you already have SSH keys you can use.


@ -1,23 +1,18 @@
{% capture green-check %}![yes](/engine/installation/images/green-check.svg){: style="height: 14px; display: inline-block"}{% endcapture %}
{% capture superscript-link %}[1](#edge-footnote){: style="vertical-align: super; font-size: smaller;" }{% endcapture %}
{: style="width: 75%" }
| Month | Docker CE Edge | Docker CE Stable |
|:----------|:----------------------------------------|:------------------|
| January | {{ green-check }} | |
| February | {{ green-check }} | |
| March | {{ green-check }}{{ superscript-link }} | {{ green-check }} |
| April | {{ green-check }} | |
| May | {{ green-check }} | |
| June | {{ green-check }}{{ superscript-link }} | {{ green-check }} |
| July | {{ green-check }} | |
| August | {{ green-check }} | |
| September | {{ green-check }}{{ superscript-link }} | {{ green-check }} |
| October | {{ green-check }} | |
| November | {{ green-check }} | |
| December | {{ green-check }}{{ superscript-link }} | {{ green-check }} |
| Month | Docker CE Edge | Docker CE Stable |
|:----------|:------------------|:------------------|
| January | {{ green-check }} | |
| February | {{ green-check }} | |
| March | {{ green-check }} | {{ green-check }} |
| April | {{ green-check }} | |
| May | {{ green-check }} | |
| June | {{ green-check }} | {{ green-check }} |
| July | {{ green-check }} | |
| August | {{ green-check }} | |
| September | {{ green-check }} | {{ green-check }} |
| October | {{ green-check }} | |
| November | {{ green-check }} | |
| December | {{ green-check }} | {{ green-check }} |
`1`: On Linux distributions, these releases only appear in the `stable`
channels, not the `edge` channels. For that reason, on Linux distributions,
you need to enable both channels.
{: id="edge-footnote" }


@ -59,7 +59,7 @@ You can install Docker EE in different ways, depending on your needs:
2. Temporarily store the Docker EE repository URL you noted down in the
[prerequisites](#prerequisites) in an environment variable.
This will not persist when the current session ends.
This does not persist when the current session ends.
```bash
$ export DOCKERURL='<DOCKER-EE-URL>'
@ -139,8 +139,8 @@ You can install Docker EE in different ways, depending on your needs:
```
If this is the first time you are installing a package from a recently added
repository, you will be prompted to accept the GPG key, and
the key's fingerprint will be shown. Verify that the fingerprint matches
repository, you are prompted to accept the GPG key, and
the key's fingerprint is shown. Verify that the fingerprint matches
`{{ gpg-fingerprint }}` and if so, accept the key.
2. On production systems, you should install a specific version of Docker EE
@ -155,7 +155,7 @@ You can install Docker EE in different ways, depending on your needs:
```
The contents of the list depend upon which repositories you have enabled,
and will be specific to your version of {{ linux-dist-long }}
and are specific to your version of {{ linux-dist-long }}
(indicated by the `.el7` suffix on the version, in this example). Choose a
specific version to install. The second column is the version string. You
can use the entire version string, but **you need to include at least to the
@ -223,7 +223,7 @@ To upgrade Docker EE:
If you cannot use the official Docker repository to install Docker EE, you can
download the `.{{ package-format | downcase }}` file for your release and
install it manually. You will need to download a new file each time you want to
install it manually. You need to download a new file each time you want to
upgrade Docker EE.
{% if linux-dist == "rhel" %}


@ -11,7 +11,7 @@ non-interactively. The source code for the scripts is in the
environments**, and you should understand the potential risks before you use
them:
- The scripts require `root` or `sudo` privileges in order to run. Therefore,
- The scripts require `root` or `sudo` privileges to run. Therefore,
you should carefully examine and audit the scripts before running them.
- The scripts attempt to detect your Linux distribution and version and
configure your package management system for you. In addition, the scripts do
@ -22,7 +22,7 @@ them:
manager without asking for confirmation. This may install a large number of
packages, depending on the current configuration of your host machine.
- The script does not provide options to specify which version of Docker to install,
and will install the latest version that is released in the "edge" channel.
and installs the latest version that is released in the "edge" channel.
- Do not use the convenience script if Docker has already been installed on the
host machine using another mechanism.
@ -48,9 +48,9 @@ adding your user to the "docker" group with something like:
sudo usermod -aG docker your-user
Remember that you will have to log out and back in for this to take effect!
Remember to log out and back in for this to take effect!
WARNING: Adding a user to the "docker" group will grant the ability to run
WARNING: Adding a user to the "docker" group grants the ability to run
containers which can be used to obtain root privileges on the
docker host.
Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
@ -59,8 +59,8 @@ WARNING: Adding a user to the "docker" group will grant the ability to run
Docker CE is installed. It starts automatically on `DEB`-based distributions. On
`RPM`-based distributions, you need to start it manually using the appropriate
`systemctl` or `service` command. As the message indicates, non-root users are
not able to run Docker commands by default.
`systemctl` or `service` command. As the message indicates, non-root users can't
run Docker commands by default.
#### Upgrade Docker after using the convenience script


@ -3,7 +3,7 @@ project was created and is being actively developed to ensure that Docker users
can enjoy a fantastic out-of-the-box experience on {{cloudprovider}}. It is now
generally available and can be used by everyone.
As an informed user, you might be curious to know what this project has to offer
As an informed user, you might be curious to know what this project offers
you for running your development, staging, or production workloads.
## Native to Docker
@ -14,12 +14,12 @@ operational complexity and adding unneeded additional APIs to the Docker stack.
Docker for {{cloudprovider}} allows you to interact with Docker directly
(including native Docker orchestration), instead of distracting you with the
need to navigate extra layers on top of Docker. You can focus instead on the
thing that matters most: running your workloads. This will help you and your
thing that matters most: running your workloads. This helps you and your
team to deliver more value to the business faster, to speak one common
"language", and to have fewer details to keep in your head at once.
The skills that you and your team have already learned, and will continue to
learn, using Docker on the desktop or elsewhere will automatically carry over to
The skills that you and your team have already learned, and continue to
learn, using Docker on the desktop or elsewhere automatically carry over to
using Docker on {{cloudprovider}}. The added consistency across clouds also
helps to ensure that a migration or multi-cloud strategy is easier to accomplish
in the future if desired.
@ -65,12 +65,12 @@ processes. In Docker for {{cloudprovider}}, your cluster is resilient to a
variety of such issues by default.
Log rotation native to the host is configured for you automatically, so chatty
logs won't use up all of your disk space. Likewise, the "system prune" option
logs don't use up all of your disk space. Likewise, the "system prune" option
allows you to ensure unused Docker resources such as old images are cleaned up
automatically. The lifecycle of nodes is managed using auto-scaling groups or
similar constructs, so that if a node enters an unhealthy state for unforeseen
reasons, the node will be taken out of load balancer rotation and/or replaced
automatically and all of its container tasks will be rescheduled.
reasons, the node is taken out of load balancer rotation and/or replaced
automatically and all of its container tasks are rescheduled.
These self-cleaning and self-healing properties are enabled by default and don't
need configuration, so you can breathe easier as the risk of downtime is
@ -91,7 +91,7 @@ communicating the current state of your infrastructure and the issues you are
seeing to the upstream. In Docker for {{cloudprovider}}, you receive new tools
to communicate any issues you experience quickly and securely to Docker
employees. The Docker for {{cloudprovider}} shell includes a `docker-diagnose`
script which, at your request, will transmit detailed diagnostic information to
script which, at your request, transmits detailed diagnostic information to
Docker support staff to reduce the traditional
"please-post-the-output-of-this-command" back and forth frequently encountered
in bug reports.


@ -3,7 +3,7 @@ dockercloud/api-docs
[![Deploy to Docker Cloud](https://files.cloud.docker.com/images/deploy-to-dockercloud.svg)](https://cloud.docker.com/stack/deploy/)
If you find a typo or mismatch between the API and this documentation, please send us a pull request!
If you find a typo or mismatch between the API and this documentation, send us a pull request!
## Usage


@ -17,12 +17,12 @@ Error Code | Meaning
400 | Bad Request -- There's a problem in the content of your request. Retrying the same request will fail.
401 | Unauthorized -- Your API key is wrong or your account has been deactivated.
402 | Payment Required -- You need to provide billing information to perform this request.
403 | Forbidden -- Quota limit exceeded. Please contact support to request a quota increase.
403 | Forbidden -- Quota limit exceeded. Contact support to request a quota increase.
404 | Not Found -- The requested object cannot be found.
405 | Method Not Allowed -- The endpoint requested does not implement the method sent.
409 | Conflict -- The object cannot be created or updated because another object exists with the same unique fields
415 | Unsupported Media Type -- Make sure you are using `Accept` and `Content-Type` headers as `application/json` and that the data you are `POST`-ing or `PATCH`-ing is in valid JSON format.
429 | Too Many Requests -- You are being throttled because of too many requests in a short period of time.
500 | Internal Server Error -- There was a server error while processing your request. Try again later, or contact support.
503 | Service Unavailable -- We're temporarily offline for maintenance. Please try again later.
504 | Gateway Timeout -- Our API servers are at full capacity. Please try again later.
503 | Service Unavailable -- We're temporarily offline for maintenance. Try again later.
504 | Gateway Timeout -- Our API servers are at full capacity. Try again later.


@ -331,7 +331,7 @@ protocol | The protocol of the port, either `tcp` or `udp`
inner_port | The published port number inside the container
outer_port | The published port number in the node public network interface
port_name | Name of the service associated to this port
uri_protocol | The protocol to be used in the endpoint for this port (i.e. `http`)
uri_protocol | The protocol to be used in the endpoint for this port, such as `http`
endpoint_uri | The URI of the endpoint for this port
published | Whether the port has been published in the host public network interface or not. Non-published ports can only be accessed via links.


@ -26,7 +26,7 @@ Attribute | Description
--------- | -----------
resource_uri | A unique API endpoint that represents the registry
name | Human-readable name of the registry
host | FQDN of the registry, i.e. `registry-1.docker.io`
host | FQDN of the registry, such as `registry-1.docker.io`
is_docker_registry | Whether this registry is run by Docker
is_ssl | Whether this registry has SSL activated or not
port | The port number the registry listens on


@ -22,7 +22,7 @@ This is a [namespaced endpoint](#namespaced-endpoints).
Attribute | Description
--------- | -----------
resource_uri | A unique API endpoint that represents the repository
name | Name of the repository, i.e. `my.registry.com/myrepo`
name | Name of the repository, such as `my.registry.com/myrepo`
in_use | If the image is being used by any of your services
registry | Resource URI of the registry where this image is hosted
@ -123,7 +123,7 @@ Available in Docker Cloud's **REST API**
Parameter | Description
--------- | -----------
name | Name of the repository, i.e. 'my.registry.com/myrepo'
name | Name of the repository, such as 'my.registry.com/myrepo'
username | Username to authenticate with the third party registry
password | Password to authenticate with the third party registry
@ -258,7 +258,7 @@ repository.Remove()
docker-cloud repository rm registry.local/user1/image1
```
Removes the external repository from Docker Cloud. It won't remove the repository from the third party registry where it's stored.
Removes the external repository from Docker Cloud. It doesn't remove the repository from the third party registry where it's stored.
### Endpoint Type


@ -299,7 +299,7 @@ Strategy | Description
-------- | -----------
EMPTIEST_NODE | Deploys containers to the node with the lowest total number of running containers (default).
HIGH_AVAILABILITY | Deploys containers to the node with the lowest number of running containers of the same service.
EVERY_NODE | It will deploy one container on every node. The service won't be able to scale manually. New containers will be deployed to new nodes automatically.
EVERY_NODE | Deploys one container on every node. The service can't scale manually. New containers are deployed to new nodes automatically.
### Network Modes
@ -408,15 +408,15 @@ Available in Docker Cloud's **REST API**
Parameter | Description
--------- | -----------
image | (required) The image used to deploy this service in docker format, i.e. `tutum/hello-world`
name | (optional) A human-readable name for the service, i.e. `my-hello-world-app` (default: `image` without namespace)
image | (required) The image used to deploy this service in docker format, such as `tutum/hello-world`
name | (optional) A human-readable name for the service, such as `my-hello-world-app` (default: `image` without namespace)
target_num_containers | (optional) The number of containers to run for this service initially (default: 1)
run_command | (optional) The command used to start the containers of this service, overriding the value specified in the image, i.e. `/run.sh` (default: `null`)
entrypoint | (optional) The command prefix used to start the containers of this service, overriding the value specified in the image, i.e. `/usr/sbin/sshd` (default: `null`)
container_ports | (optional) An array of objects with port information to be published in the containers for this service, which will be added to the image port information, i.e. `[{"protocol": "tcp", "inner_port": 80, "outer_port": 80}]` (default: `[]`) (See table `Service Port attributes` below)
container_envvars | (optional) An array of objects with environment variables to be added in the service containers on launch (overriding any image-defined environment variables), i.e. `[{"key": "DB_PASSWORD", "value": "mypass"}]` (default: `[]`) (See table `Service Environment Variable attributes` below)
linked_to_service | (optional) An array of service resource URIs to link this service to, including the link name, i.e. `[{"to_service": "/api/app/v1/service/80ff1635-2d56-478d-a97f-9b59c720e513/", "name": "db"}]` (default: `[]`) (See table `Related services attributes` below)
bindings | (optional) An array of bindings this service has to mount, i.e. `[{"volumes_from": "/api/app/v1/service/80ff1635-2d56-478d-a97f-9b59c720e513/", "rewritable": true}]` (default: `[]`) (See table `Related bindings attributes` below)
bindings | (optional) An array of bindings this service mounts, i.e. `[{"volumes_from": "/api/app/v1/service/80ff1635-2d56-478d-a97f-9b59c720e513/", "rewritable": true}]` (default: `[]`) (See table `Related bindings attributes` below)
autorestart | (optional) Whether the containers for this service should be restarted if they stop, i.e. `ALWAYS` (default: `OFF`, possible values: `OFF`, `ON_FAILURE`, `ALWAYS`) (see [Crash recovery](/docker-cloud/apps/autorestart/) for more information)
autodestroy | (optional) Whether the containers should be terminated if they stop, i.e. `OFF` (default: `OFF`, possible values: `OFF`, `ON_SUCCESS`, `ALWAYS`) (see [Autodestroy](/docker-cloud/apps/auto-destroy/) for more information)
sequential_deployment | (optional) Whether the containers should be launched and scaled in sequence, i.e. `true` (default: `false`) (see [Service scaling](/docker-cloud/apps/service-scaling/) for more information)


@ -157,7 +157,9 @@ Content-Type: application/json
docker-cloud stack create --name hello-world -f docker-compose.yml
```
Creates a new stack without starting it. Note that the JSON syntax is abstracted by both, the Docker Cloud CLI and our UI, in order to use [Stack YAML files](/docker-cloud/apps/stack-yaml-reference/).
Creates a new stack without starting it. The JSON syntax is abstracted to use
[Stack YAML files](/docker-cloud/apps/stack-yaml-reference/) in both
the Docker Cloud CLI and our UI.
### Endpoint Type
@ -171,7 +173,7 @@ Available in Docker Cloud's **REST API**
Parameter | Description
--------- | -----------
name | (required) A human-readable name for the stack, i.e. `my-hello-world-stack`
name | (required) A human-readable name for the stack, such as `my-hello-world-stack`
nickname | (optional) A user-friendly name for the stack (`name` by default)
services | (optional) List of services belonging to the stack. Each service accepts the same parameters as a [Create new service](#create-a-new-service) operation (default: `[]`) plus the ability to refer to "links" and "volumes-from" by the name of another service in the stack (see example).


@ -35,7 +35,13 @@ Docker Cloud currently offers a **HTTP REST API** and a **Websocket Stream API**
# Authentication
In order to be able to make requests to the Docker Cloud API, you should first obtain an ApiKey for your account. For this, log into Docker Cloud, click on the menu on the upper right corner of the screen, select **Account info** and then select **API keys**.
To make requests to the Docker Cloud API, you need an ApiKey for your account.
To get one:
1. Log into Docker Cloud.
2. Click the menu in the upper right corner of the screen.
3. Select **Account info**.
4. Select **API keys**.
## REST API


@ -13,10 +13,10 @@ You just need to have [Docker Engine](https://docs.docker.com/engine/installatio
and [Docker Compose](https://docs.docker.com/compose/install/) installed on your
platform of choice: Linux, Mac or Windows.
For this sample, we will create a sample .NET Core Web Application using the
`aspnetcore-build` Docker image. After that, we will create a `Dockerfile`,
For this sample, we create a sample .NET Core Web Application using the
`aspnetcore-build` Docker image. After that, we create a `Dockerfile`,
configure this app to use our SQL Server database, and then create a
`docker-compose.yml` that will define the behavior of all of these components.
`docker-compose.yml` that defines the behavior of all of these components.
> **Note**: This sample is made for Docker Engine on Linux. For Windows
> Containers, visit
@ -24,10 +24,10 @@ configure this app to use our SQL Server database, and then create a
1. Create a new directory for your application.
This directory will be the context of your docker-compose project. For
This directory is the context of your docker-compose project. For
[Docker for Windows](https://docs.docker.com/docker-for-windows/#/shared-drives) and
[Docker for Mac](https://docs.docker.com/docker-for-mac/#/file-sharing), you
have to set up file sharing for the volume that you need to map.
need to set up file sharing for the volume that you need to map.
1. Within your directory, use the `aspnetcore-build` Docker image to generate a
sample web application within the container under the `/app` directory and
@ -53,17 +53,17 @@ configure this app to use our SQL Server database, and then create a
CMD /bin/bash ./entrypoint.sh
```
This file defines how to build the web app image. It will use the
This file defines how to build the web app image. It uses the
[microsoft/aspnetcore-build](https://hub.docker.com/r/microsoft/aspnetcore-build/),
maps the volume with the generated code, restores the dependencies, builds the
project and expose port 80. After that, it will call an `entrypoint` script
that we will create in the next step.
project and exposes port 80. After that, it calls an `entrypoint` script
that we create in the next step.
1. The `Dockerfile` makes use of an entrypoint to your webapp Docker
image. Create this script in a file called `entrypoint.sh` and paste the
contents below.
> **Note**: Make sure to use UNIX line delimiters. The script won't work if
> **Note**: Make sure to use UNIX line delimiters. The script doesn't work if
> you use Windows-based delimiters (Carriage return and line feed).
```bash
@ -81,13 +81,13 @@ configure this app to use our SQL Server database, and then create a
exec $run_cmd
```
This script will restore the database after it starts up, and then will run
This script restores the database after it starts up, and then runs
the application. This allows some time for the SQL Server database image to
start up.
1. Create a `docker-compose.yml` file. Write the following in the file, and
make sure to replace the password in the `SA_PASSWORD` environment variable
under `db` below. This file will define the way the images will interact as
under `db` below. This file defines the way the images interact as
independent services.
> **Note**: The SQL Server container requires a secure password to startup:
@ -145,8 +145,8 @@ configure this app to use our SQL Server database, and then create a
}
[...]
```
1. Go to `app.csproj`. You will find a line like:
1. Go to `app.csproj`. You see a line like:
```
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="1.1.2" />
@ -158,7 +158,7 @@ configure this app to use our SQL Server database, and then create a
```
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="1.1.2" />
```
The Sqlite dependency was at version 1.1.2 at the time of this writing. Use the same
version for the SQL Server dependency.
@ -183,7 +183,7 @@ configure this app to use our SQL Server database, and then create a
$ docker-compose up
```
Go ahead and try out the website! This sample will use the SQL Server
Go ahead and try out the website! This sample uses the SQL Server
database image in the back-end for authentication.
Ready! You now have an ASP.NET Core application running against SQL Server in
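For orientation, a pared-down sketch of the kind of `docker-compose.yml` the walkthrough assembles; the SQL Server image name, EULA variable, and password are assumptions based on the image's usual requirements, so verify them against the current image documentation:

```yaml
version: '3'
services:
  web:
    build: .          # builds from the Dockerfile created above
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: microsoft/mssql-server-linux
    environment:
      SA_PASSWORD: "Your_password123"   # replace with a strong password
      ACCEPT_EULA: "Y"
```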


@ -145,7 +145,7 @@ A service has the following fields:
Image (required) <code>string</code>
</dt>
<dd>
The image that the service will run. Docker images should be referenced
The image that the service runs. Docker images should be referenced
with full content hash to fully specify the deployment artifact for the
service. Example:
<code>postgres@sha256:e0a230a9f5b4e1b8b03bb3e8cf7322b0e42b7838c5c87f4545edb48f5eb8f077</code>


@ -36,8 +36,7 @@ fi
You can source your `~/.bash_profile` or launch a new terminal to use
completion.
If you're using MacPorts instead of brew, you'll need to slightly modify your steps to the
following:
If you're using MacPorts instead of brew, use the following steps instead:
Run `sudo port install bash-completion` to install bash completion.
Add the following lines to `~/.bash_profile`:
@ -53,14 +52,14 @@ completion.
### Zsh
Place the completion script in your `/path/to/zsh/completion`, using e.g. `~/.zsh/completion/`:
Place the completion script in your `/path/to/zsh/completion` (typically `~/.zsh/completion/`):
```shell
$ mkdir -p ~/.zsh/completion
$ curl -L https://raw.githubusercontent.com/docker/compose/{{site.compose_version}}/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose
```
Include the directory in your `$fpath`, e.g. by adding in `~/.zshrc`:
Include the directory in your `$fpath` by adding this line to `~/.zshrc`:
```shell
fpath=(~/.zsh/completion $fpath)
@ -80,12 +79,12 @@ exec $SHELL -l
## Available completions
Depending on what you typed on the command line so far, it will complete:
Depending on what you typed on the command line so far, it completes:
- available docker-compose commands
- options that are available for a particular command
- service names that make sense in a given context (e.g. services with running or stopped instances or services based on images vs. services based on Dockerfiles). For `docker-compose scale`, completed service names will automatically have "=" appended.
- arguments for selected options, e.g. `docker-compose kill -s` will complete some signals like SIGHUP and SIGUSR1.
- service names that make sense in a given context, such as services with running or stopped instances or services based on images vs. services based on Dockerfiles. For `docker-compose scale`, completed service names automatically have "=" appended.
- arguments for selected options. For example, `docker-compose kill -s` completes some signals like SIGHUP and SIGUSR1.
Enjoy working with Compose faster and with fewer typos!


@ -29,12 +29,12 @@ The default path for a Compose file is `./docker-compose.yml`.
>**Tip**: You can use either a `.yml` or `.yaml` extension for this file. They both work.
A service definition contains configuration which will be applied to each
A service definition contains configuration which is applied to each
container started for that service, much like passing command-line parameters to
`docker run`.
As with `docker run`, options specified in the Dockerfile (e.g., `CMD`,
`EXPOSE`, `VOLUME`, `ENV`) are respected by default - you don't need to
As with `docker run`, options specified in the Dockerfile, such as `CMD`,
`EXPOSE`, `VOLUME`, `ENV`, are respected by default - you don't need to
specify them again in `docker-compose.yml`.
This section contains a list of all configuration options supported by a service
@ -63,7 +63,7 @@ Attempting to do so results in an error.
Alternate Dockerfile.
Compose will use an alternate file to build with. A build path must also be
Compose uses an alternate file to build with. A build path must also be
specified.
build: .
@ -163,10 +163,10 @@ The entrypoint can also be a list, in a manner similar to
- memory_limit=-1
- vendor/bin/phpunit
> **Note**: Setting `entrypoint` will both override any default entrypoint set
> **Note**: Setting `entrypoint` both overrides any default entrypoint set
> on the service's image with the `ENTRYPOINT` Dockerfile instruction, *and*
> clear out any default command on the image - meaning that if there's a `CMD`
> instruction in the Dockerfile, it will be ignored.
> clears out any default command on the image - meaning that if there's a `CMD`
> instruction in the Dockerfile, it is ignored.
### env_file
@ -186,18 +186,19 @@ these values.
- /opt/secrets.env
Compose expects each line in an env file to be in `VAR=VAL` format. Lines
beginning with `#` (i.e. comments) are ignored, as are blank lines.
beginning with `#` are processed as comments and are ignored. Blank lines are
also ignored.
# Set Rails/Rack environment
RACK_ENV=development
> **Note**: If your service specifies a [build](#build) option, variables
> defined in environment files will _not_ be automatically visible during the
> defined in environment files are _not_ automatically visible during the
> build.
The value of `VAL` is used as is and not modified at all. For example, if the
value is surrounded by quotes (as is often the case with shell variables), the
quotes will be included in the value passed to Compose.
quotes are included in the value passed to Compose.
Keep in mind that _the order of files in the list is significant in determining
the value assigned to a variable that shows up more than once_. The files in the
@ -228,7 +229,7 @@ and
VAR=hello
```
$VAR will be `hello`.
$VAR is `hello`.
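A sketch of that ordering in version 1 layout (file names and values are illustrative):

```yaml
web:
  image: alpine
  env_file:
    - ./a.env   # sets VAR=1
    - ./b.env   # sets VAR=hello; the later file wins, so $VAR is hello
```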
### environment
@ -250,7 +251,7 @@ machine Compose is running on, which can be helpful for secret or host-specific
- SESSION_SECRET
> **Note**: If your service specifies a [build](#build) option, variables
> defined in `environment` will _not_ be automatically visible during the
> defined in `environment` are _not_ automatically visible during the
> build.
### expose
@ -311,7 +312,7 @@ Add hostname mappings. Use the same values as the docker client `--add-host` par
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the ip address and hostname will be created in `/etc/hosts` inside containers for this service, e.g:
An entry with the IP address and hostname is created in `/etc/hosts` inside containers for this service. For example:
162.242.195.82 somehost
50.31.209.229 otherhost
@ -364,7 +365,7 @@ a link alias (`SERVICE:ALIAS`), or just the service name.
- db:database
- redis
Containers for the linked service will be reachable at a hostname identical to
Containers for the linked service are reachable at a hostname identical to
the alias, or the service name if no alias was specified.
Links also express dependency between services in the same way as
@ -411,19 +412,19 @@ id.
pid: "host"
Sets the PID mode to the host PID mode. This turns on sharing between
container and the host operating system the PID address space. Containers
launched with this flag will be able to access and manipulate other
Sets the PID mode to the host PID mode. This turns on sharing of the PID
address space between the container and the host operating system. Containers
launched with this flag can access and manipulate other containers in the
bare-metal machine's namespace, and vice versa.
### ports
Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container
port (a random host port will be chosen).
port (an ephemeral host port is chosen).
> **Note**: When mapping ports in the `HOST:CONTAINER` format, you may experience
> erroneous results when using a container port lower than 60, because YAML will
> parse numbers in the format `xx:yy` as sexagesimal (base 60). For this reason,
> erroneous results when using a container port lower than 60, because YAML
> parses numbers in the format `xx:yy` as a base-60 value. For this reason,
> we recommend always explicitly specifying your port mappings as strings.
ports:
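For instance, a sketch in version 1 layout with every mapping quoted as the note recommends (ports are illustrative):

```yaml
web:
  image: nginx
  ports:
    - "3000"        # container port only; an ephemeral host port is chosen
    - "8000:8000"   # HOST:CONTAINER
    - "49100:22"    # quoting keeps 49100:22 from being read as base 60
```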
@ -447,7 +448,7 @@ Override the default labeling scheme for each container.
### stop_signal
Sets an alternative signal to stop the container. By default `stop` uses
SIGTERM. Setting an alternative signal using `stop_signal` will cause
SIGTERM. Setting an alternative signal using `stop_signal` causes
`stop` to send that signal instead.
stop_signal: SIGUSR1
@ -470,10 +471,10 @@ Mount paths or named volumes, optionally specifying a path on the host machine
(`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`).
For [version 2 files](compose-versioning#version-2), named volumes need to be specified with the
[top-level `volumes` key](compose-file-v2.md#volume-configuration-reference).
When using [version 1](compose-versioning#version-1), the Docker Engine will create the named
When using [version 1](compose-versioning#version-1), the Docker Engine creates the named
volume automatically if it doesn't exist.
You can mount a relative path on the host, which will expand relative to
You can mount a relative path on the host, which expands relative to
the directory of the Compose configuration file being used. Relative paths
should always begin with `.` or `..`.
@ -501,11 +502,11 @@ There are several things to note, depending on which
[Compose file version](compose-versioning#versioning) you're using:
- For [version 1 files](compose-versioning#version-1), both named volumes and
container volumes will use the specified driver.
container volumes use the specified driver.
- No path expansion will be done if you have also specified a `volume_driver`.
- No path expansion is done if you have also specified a `volume_driver`.
For example, if you specify a mapping of `./foo:/data`, the `./foo` part
will be passed straight to the volume driver without being expanded.
is passed straight to the volume driver without being expanded.
See [Docker Volumes](/engine/userguide/dockervolumes.md) and
[Volume Plugins](/engine/extend/plugins_volume.md) for more information.
@ -514,7 +515,7 @@ See [Docker Volumes](/engine/userguide/dockervolumes.md) and
Mount all of the volumes from another service or container, optionally
specifying read-only access (``ro``) or read-write (``rw``). If no access level
is specified, then read-write will be used.
is specified, then read-write is used.
volumes_from:
- service_name


@ -30,13 +30,13 @@ The default path for a Compose file is `./docker-compose.yml`.
>**Tip**: You can use either a `.yml` or `.yaml` extension for this file. They both work.
A [container](/engine/reference/glossary.md#container) definition contains configuration which will be applied to each
A [container](/engine/reference/glossary.md#container) definition contains configuration which is applied to each
container started for that service, much like passing command-line parameters to
`docker run`. Likewise, network and volume definitions are analogous to
`docker network create` and `docker volume create`.
As with `docker run`, options specified in the Dockerfile (e.g., `CMD`,
`EXPOSE`, `VOLUME`, `ENV`) are respected by default - you don't need to
As with `docker run`, options specified in the Dockerfile, such as `CMD`,
`EXPOSE`, `VOLUME`, `ENV`, are respected by default - you don't need to
specify them again in `docker-compose.yml`.
You can use environment variables in configuration values with a Bash-like
@ -126,7 +126,7 @@ with the `webapp` and optional `tag` specified in `image`:
build: ./dir
image: webapp:tag
This will result in an image named `webapp` and tagged `tag`, built from `./dir`.
This results in an image named `webapp` and tagged `tag`, built from `./dir`.
#### context
@ -139,7 +139,7 @@ When the value supplied is a relative path, it is interpreted as relative to the
location of the Compose file. This directory is also the build context that is
sent to the Docker daemon.
Compose will build and tag it with a generated name, and use that image thereafter.
Compose builds and tags it with a generated name, and uses that image thereafter.
build:
context: ./dir
@ -148,7 +148,7 @@ Compose will build and tag it with a generated name, and use that image thereaft
Alternate Dockerfile.
Compose will use an alternate file to build with. A build path must also be
Compose uses an alternate file to build with. A build path must also be
specified.
build:
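The example is truncated by the hunk; it presumably pairs `context` with `dockerfile`, along these lines:

```yaml
build:
  context: ./dir
  dockerfile: Dockerfile-alternate
```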
@ -203,7 +203,7 @@ Add hostname mappings at build-time. Use the same values as the docker client `-
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the ip address and hostname will be created in `/etc/hosts` inside containers for this build, e.g:
An entry with the IP address and hostname is created in `/etc/hosts` inside containers for this build. For example:
162.242.195.82 somehost
50.31.209.229 otherhost
@ -237,7 +237,7 @@ those used by other software.
> Added in [version 2.2](compose-versioning.md#version-22) file format
Set the network containers will connect to for the `RUN` instructions during
Set the network containers connect to for the `RUN` instructions during
build.
build:
@ -332,11 +332,11 @@ client create option.
Express dependency between services, which has two effects:
- `docker-compose up` will start services in dependency order. In the following
example, `db` and `redis` will be started before `web`.
- `docker-compose up` starts services in dependency order. In the following
example, `db` and `redis` are started before `web`.
- `docker-compose up SERVICE` will automatically include `SERVICE`'s
dependencies. In the following example, `docker-compose up web` will also
- `docker-compose up SERVICE` automatically includes `SERVICE`'s
dependencies. In the following example, `docker-compose up web` also
creates and starts `db` and `redis`.
Simple example:
@ -353,7 +353,7 @@ Simple example:
db:
image: postgres
> **Note**: `depends_on` will not wait for `db` and `redis` to be "ready" before
> **Note**: `depends_on` does not wait for `db` and `redis` to be "ready" before
> starting `web` - only until they have been started. If you need to wait
> for a service to be ready, see [Controlling startup order](/compose/startup-order.md)
> for more on this problem and strategies for solving it.
@ -361,8 +361,8 @@ Simple example:
> [Added in version 2.1 file format](compose-versioning.md#version-21).
A healthcheck indicates that you want a dependency to wait
for another container to be "healthy" (i.e. its healthcheck advertises a
successful state) before starting.
for another container to be "healthy" (as indicated by a successful state from
the healthcheck) before starting.
Example:
@ -382,7 +382,7 @@ Example:
healthcheck:
test: "exit 0"
In the above example, Compose will wait for the `redis` service to be started
In the above example, Compose waits for the `redis` service to be started
(legacy behavior) and the `db` service to be healthy before starting `web`.
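The example itself is elided by the hunk; a hedged reconstruction of the condition form being described, assuming the 2.1 format:

```yaml
version: '2.1'
services:
  web:
    build: .
    depends_on:
      redis:
        condition: service_started   # legacy behavior: started, not healthy
      db:
        condition: service_healthy   # wait for the healthcheck to pass
  redis:
    image: redis
  db:
    image: postgres
    healthcheck:
      test: "exit 0"
```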
See the [healthcheck section](#healthcheck) for complementary
@ -440,10 +440,10 @@ The entrypoint can also be a list, in a manner similar to
- memory_limit=-1
- vendor/bin/phpunit
> **Note**: Setting `entrypoint` will both override any default entrypoint set
> **Note**: Setting `entrypoint` both overrides any default entrypoint set
> on the service's image with the `ENTRYPOINT` Dockerfile instruction, *and*
> clear out any default command on the image - meaning that if there's a `CMD`
> instruction in the Dockerfile, it will be ignored.
> clears out any default command on the image - meaning that if there's a `CMD`
> instruction in the Dockerfile, it is ignored.
### env_file
@ -464,19 +464,20 @@ empty or undefined.
- /opt/secrets.env
Compose expects each line in an env file to be in `VAR=VAL` format. Lines
beginning with `#` (i.e. comments) are ignored, as are blank lines.
beginning with `#` are processed as comments and are ignored. Blank lines are
also ignored.
# Set Rails/Rack environment
RACK_ENV=development
> **Note**: If your service specifies a [build](#build) option, variables
> defined in environment files will _not_ be automatically visible during the
> defined in environment files are _not_ automatically visible during the
> build. Use the [args](#args) sub-option of `build` to define build-time
> environment variables.
The value of `VAL` is used as is and not modified at all. For example, if the
value is surrounded by quotes (as is often the case with shell variables), the
quotes will be included in the value passed to Compose.
quotes are included in the value passed to Compose.
Keep in mind that _the order of files in the list is significant in determining
the value assigned to a variable that shows up more than once_. The files in the
@ -507,7 +508,7 @@ and
VAR=hello
```
$VAR will be `hello`.
$VAR is `hello`.
### environment
@ -529,7 +530,7 @@ machine Compose is running on, which can be helpful for secret or host-specific
- SESSION_SECRET
> **Note**: If your service specifies a [build](#build) option, variables
> defined in `environment` will _not_ be automatically visible during the
> defined in `environment` are _not_ automatically visible during the
> build. Use the [args](#args) sub-option of `build` to define build-time
> environment variables.
@ -595,7 +596,7 @@ Add hostname mappings. Use the same values as the docker client `--add-host` par
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the ip address and hostname will be created in `/etc/hosts` inside containers for this service, e.g:
An entry with the IP address and hostname is created in `/etc/hosts` inside containers for this service. For example:
162.242.195.82 somehost
50.31.209.229 otherhost
@ -603,7 +604,7 @@ An entry with the ip address and hostname will be created in `/etc/hosts` inside
### group_add
Specify additional groups (by name or number) which the user inside the
container will be a member of. Groups must exist in both the container and the
container should be a member of. Groups must exist in both the container and the
host system to be added. An example of where this is useful is when multiple
containers (running as different users) need to all read or write the same
file on the host system. That file can be owned by a group shared by all the
@ -622,7 +623,7 @@ services:
- mail
```
Running `id` inside the created container will show that the user belongs to
Running `id` inside the created container shows that the user belongs to
the `mail` group, which would not have been the case if `group_add` were not
used.
@ -741,7 +742,7 @@ a link alias (`"SERVICE:ALIAS"`), or just the service name.
- "db:database"
- "redis"
Containers for the linked service will be reachable at a hostname identical to
Containers for the linked service are reachable at a hostname identical to
the alias, or the service name if no alias was specified.
Links also express dependency between services in the same way as
@ -772,7 +773,7 @@ The default value is json-file.
driver: "none"
> **Note**: Only the `json-file` and `journald` drivers make the logs available directly from
> `docker-compose up` and `docker-compose logs`. Using any other driver will not
> `docker-compose up` and `docker-compose logs`. Using any other driver does not
> print any logs.
Specify logging options for the logging driver with the ``options`` key, as with the ``--log-opt`` option for `docker run`.
@ -815,7 +816,7 @@ Aliases (alternative hostnames) for this service on the network. Other container
Since `aliases` is network-scoped, the same service can have different aliases on different networks.
> **Note**: A network-wide alias can be shared by multiple containers, and even by multiple services. If it is, then exactly which container the name will resolve to is not guaranteed.
> **Note**: A network-wide alias can be shared by multiple containers, and even by multiple services. If it is, then exactly which container the name resolves to is not guaranteed.
The general format is shown here.
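The format itself is cut off by the hunk; it presumably resembles this sketch (network and alias names are illustrative):

```yaml
version: '2'
services:
  web:
    image: nginx
    networks:
      front:
        aliases:
          - website
      back:
        aliases:
          - internal-web
networks:
  front:
  back:
```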
@ -922,12 +923,12 @@ Example usage:
pid: "service:foobar"
If set to one of the following forms: `container:<container_name>`,
`service:<service_name>`, the service will share the PID address space of the
`service:<service_name>`, the service shares the PID address space of the
designated container or service.
If set to "host", the service's PID mode will be the host PID mode. This turns
If set to "host", the service's PID mode is the host PID mode. This turns
on sharing of the PID address space between the container and the host
operating system. Containers launched with this flag can access and manipulate
other containers in the bare-metal machine's namespace, and vice versa.
> **Note**: the `service:` and `container:` forms require
@ -945,11 +946,11 @@ Tunes a container's PIDs limit. Set to `-1` for unlimited PIDs.
### ports
Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container
port (a random host port will be chosen).
port (an ephemeral host port is chosen).
> **Note**: When mapping ports in the `HOST:CONTAINER` format, you may experience
> erroneous results when using a container port lower than 60, because YAML will
> parse numbers in the format `xx:yy` as sexagesimal (base 60). For this reason,
> erroneous results when using a container port lower than 60, because YAML
> parses numbers in the format `xx:yy` as a base-60 value. For this reason,
> we recommend always explicitly specifying your port mappings as strings.
ports:
@ -968,7 +969,7 @@ port (a random host port will be chosen).
> [Added in version 2.2 file format](compose-versioning.md#version-22)
Specify the default number of containers to deploy for this service. Whenever
you run `docker-compose up`, Compose will create or remove containers to match
you run `docker-compose up`, Compose creates or removes containers to match
the specified number. This value can be overridden using the
[`--scale`](/compose/reference/up.md) flag.
@ -1001,7 +1002,7 @@ SIGKILL.
### stop_signal
Sets an alternative signal to stop the container. By default `stop` uses
SIGTERM. Setting an alternative signal using `stop_signal` will cause
SIGTERM. Setting an alternative signal using `stop_signal` causes
`stop` to send that signal instead.
stop_signal: SIGUSR1
@ -1057,7 +1058,7 @@ more information.
Mount host folders or named volumes. Named volumes need to be specified with the
[top-level `volumes` key](#volume-configuration-reference).
You can mount a relative path on the host, which will expand relative to
You can mount a relative path on the host, which expands relative to
the directory of the Compose configuration file being used. Relative paths
should always begin with `.` or `..`.
@ -1065,7 +1066,7 @@ should always begin with `.` or `..`.
The short syntax uses the generic `[SOURCE:]TARGET[:MODE]` format, where
`SOURCE` can be either a host path or volume name. `TARGET` is the container
path where the volume will be mounted. Standard modes are `ro` for read-only
path where the volume is mounted. Standard modes are `ro` for read-only
and `rw` for read-write (default).
volumes:
@ -1095,7 +1096,7 @@ expressed in the short form.
- `source`: the source of the mount, a path on the host for a bind mount, or the
name of a volume defined in the
[top-level `volumes` key](#volume-configuration-reference). Not applicable for a tmpfs mount.
- `target`: the path in the container where the volume will be mounted
- `target`: the path in the container where the volume is mounted
- `read_only`: flag to set the volume as read-only
- `bind`: configure additional bind options
- `propagation`: the propagation mode used for the bind
@ -1129,8 +1130,8 @@ volumes:
```
> **Note:** When creating bind mounts, using the long syntax requires the
> referenced folder to be created beforehand. Using the short syntax will
> create the folder on the fly if it doesn't exist.
> referenced folder to be created beforehand. Using the short syntax
> creates the folder on the fly if it doesn't exist.
> See the [bind mounts documentation](/engine/admin/volumes/bind-mounts.md/#differences-between--v-and---mount-behavior)
> for more information.
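A sketch of the long form using the fields listed above (paths are illustrative):

```yaml
services:
  web:
    image: nginx
    volumes:
      - type: bind
        source: ./static        # must exist beforehand with the long syntax
        target: /opt/app/static
        read_only: true
```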
@ -1142,7 +1143,7 @@ service.
volume_driver: mydriver
> **Note:** In [version 2 files](compose-versioning.md#version-2), this
> option will only apply to anonymous volumes (those specified in the image,
> option only applies to anonymous volumes (those specified in the image,
> or specified under `volumes` without an explicit named volume or host path).
> To configure the driver for a named volume, use the `driver` key under the
> entry in the [top-level `volumes` option](#volume-configuration-reference).
@ -1155,7 +1156,7 @@ See [Docker Volumes](/engine/userguide/dockervolumes.md) and
Mount all of the volumes from another service or container, optionally
specifying read-only access (``ro``) or read-write (``rw``). If no access level is specified,
then read-write will be used.
then read-write is used.
volumes_from:
- service_name
@ -1178,7 +1179,7 @@ then read-write will be used.
### restart
`no` is the default restart policy, and it will not restart a container under any circumstance. When `always` is specified, the container always restarts. The `on-failure` policy restarts a container if the exit code indicates an on-failure error.
`no` is the default restart policy, and it does not restart a container under any circumstance. When `always` is specified, the container always restarts. The `on-failure` policy restarts a container if the exit code indicates an on-failure error.
- restart: no
- restart: always
@ -1253,7 +1254,7 @@ that looks like this:
1gb
The supported units are `b`, `k`, `m` and `g`, and their alternative notation `kb`,
`mb` and `gb`. Please note that decimal values are not supported at this time.
`mb` and `gb`. Decimal values are not supported at this time.
## Volume configuration reference
@ -1283,15 +1284,15 @@ up:
volumes:
data-volume:
An entry under the top-level `volumes` key can be empty, in which case it will
use the default driver configured by the Engine (in most cases, this is the
An entry under the top-level `volumes` key can be empty, in which case it
uses the default driver configured by the Engine (in most cases, this is the
`local` driver). Optionally, you can configure it with the following keys:
### driver
Specify which volume driver should be used for this volume. Defaults to whatever
driver the Docker Engine has been configured to use, which in most cases is
`local`. If the driver is not available, the Engine will return an error when
`local`. If the driver is not available, the Engine returns an error when
`docker-compose up` tries to create the volume.
driver: foobar
@ -1309,14 +1310,14 @@ documentation for more information. Optional.
### external
If set to `true`, specifies that this volume has been created outside of
Compose. `docker-compose up` will not attempt to create it, and will raise
Compose. `docker-compose up` does not attempt to create it, and raises
an error if it doesn't exist.
`external` cannot be used in conjunction with other volume configuration keys
(`driver`, `driver_opts`).
In the example below, instead of attempting to create a volume called
`[projectname]_data`, Compose will look for an existing volume simply
`[projectname]_data`, Compose looks for an existing volume simply
called `data` and mounts it into the `db` service's containers.
version: '2'
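The example is truncated here; it presumably continues along these lines:

```yaml
version: '2'
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
    external: true
```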
@ -1394,10 +1395,10 @@ explanation of Compose's use of Docker networking features, see the
Specify which driver should be used for this network.
The default driver depends on how the Docker Engine you're using is configured,
but in most instances it will be `bridge` on a single host and `overlay` on a
but in most instances it is `bridge` on a single host and `overlay` on a
Swarm.
The Docker Engine will return an error if the driver is not available.
The Docker Engine returns an error if the driver is not available.
driver: overlay
@ -1478,15 +1479,15 @@ conflicting with those used by other software.
### external
If set to `true`, specifies that this network has been created outside of
Compose. `docker-compose up` will not attempt to create it, and will raise
Compose. `docker-compose up` does not attempt to create it, and raises
an error if it doesn't exist.
`external` cannot be used in conjunction with other network configuration keys
(`driver`, `driver_opts`, `group_add`, `ipam`, `internal`).
In the example below, `proxy` is the gateway to the outside world. Instead of
attempting to create a network called `[projectname]_outside`, Compose will
look for an existing network simply called `outside` and connect the `proxy`
attempting to create a network called `[projectname]_outside`, Compose
looks for an existing network simply called `outside` and connects the `proxy`
service's containers to it.
version: '2'


@ -28,7 +28,7 @@ There are several versions of the Compose file format 1, 2, 2.x, and 3.x
>
> We recommend keeping up-to-date with newer releases as much as possible.
However, if you are using an older version of Docker and want to determine which
Compose release is compatible, please refer to the [Compose release
Compose release is compatible, refer to the [Compose release
notes](https://github.com/docker/compose/releases/). Each set of release notes
gives details on which versions of Docker Engine are supported, along
with compatible Compose file format versions. (See also, the discussion in
@ -87,7 +87,7 @@ Version 1 files cannot declare named
Compose does not take advantage of [networking](/compose/networking.md) when you
use version 1: every container is placed on the default `bridge` network and is
reachable from every other container at its IP address. You will need to use
reachable from every other container at its IP address. You need to use
[links](compose-file-v1.md#links) to enable discovery between containers.
Example:
@ -238,7 +238,7 @@ several more.
- Removed: `volume_driver`, `volumes_from`, `cpu_shares`, `cpu_quota`,
`cpuset`, `mem_limit`, `memswap_limit`, `extends`, `group_add`. See
the [upgrading](#upgrading) guide for how to migrate away from these.
(For more information on `extends`, please see [Extending services](/compose/extends.md#extending-services).)
(For more information on `extends`, see [Extending services](/compose/extends.md#extending-services).)
- Added: [deploy](/compose/compose-file/index.md#deploy)
@ -306,11 +306,11 @@ several options have been removed:
- `cpu_shares`, `cpu_quota`, `cpuset`, `mem_limit`, `memswap_limit`: These
have been replaced by the [resources](/compose/compose-file/index.md#resources) key under
`deploy`. Note that `deploy` configuration only takes effect when using
`deploy`. `deploy` configuration only takes effect when using
`docker stack deploy`, and is ignored by `docker-compose`.
- `extends`: This option has been removed for `version: "3.x"`
Compose files. (For more information, please see [Extending services](/compose/extends.md#extending-services).)
Compose files. (For more information, see [Extending services](/compose/extends.md#extending-services).)
- `group_add`: This option has been removed for `version: "3.x"` Compose files.
- `pids_limit`: This option has not been introduced in `version: "3.x"` Compose files.
- `link_local_ips` in `networks`: This option has not been introduced in


@ -158,13 +158,13 @@ The default path for a Compose file is `./docker-compose.yml`.
>**Tip**: You can use either a `.yml` or `.yaml` extension for this file.
They both work.
A service definition contains configuration which will be applied to each
A service definition contains configuration that is applied to each
container started for that service, much like passing command-line parameters to
`docker container create`. Likewise, network and volume definitions are analogous to
`docker network create` and `docker volume create`.
As with `docker container create`, options specified in the Dockerfile (e.g., `CMD`,
`EXPOSE`, `VOLUME`, `ENV`) are respected by default - you don't need to
As with `docker container create`, options specified in the Dockerfile, such as `CMD`,
`EXPOSE`, `VOLUME`, `ENV`, are respected by default - you don't need to
specify them again in `docker-compose.yml`.
You can use environment variables in configuration values with a Bash-like
@ -208,7 +208,7 @@ with the `webapp` and optional `tag` specified in `image`:
build: ./dir
image: webapp:tag
This will result in an image named `webapp` and tagged `tag`, built from `./dir`.
This results in an image named `webapp` and tagged `tag`, built from `./dir`.
> **Note**: This option is ignored when
> [deploying a stack in swarm mode](/engine/reference/commandline/stack_deploy.md)
@ -222,7 +222,7 @@ When the value supplied is a relative path, it is interpreted as relative to the
location of the Compose file. This directory is also the build context that is
sent to the Docker daemon.
Compose will build and tag it with a generated name, and use that image
Compose builds and tags it with a generated name, and uses that image
thereafter.
build:
@ -232,7 +232,7 @@ thereafter.
Alternate Dockerfile.
Compose will use an alternate file to build with. A build path must also be
Compose uses an alternate file to build with. A build path must also be
specified.
build:
@ -281,7 +281,7 @@ at build time is the value in the environment where Compose is running.
> **Note:** This option is new in v3.2
A list of images that the engine will use for cache resolution.
A list of images that the engine uses for cache resolution.
build:
context: .
@ -365,7 +365,7 @@ configuration. Two different syntax variants are supported.
> **Note**: The config must already exist or be
> [defined in the top-level `configs` configuration](#configs-configuration-reference)
> of this stack file, or stack deployment will fail.
> of this stack file, or stack deployment fails.
For more information on configs, see [configs](/engine/swarm/configs.md).
@ -410,12 +410,12 @@ The long syntax provides more granularity in how the config is created within
the service's task containers.
- `source`: The name of the config as it exists in Docker.
- `target`: The path and name of the file that will be mounted in the service's
- `target`: The path and name of the file to be mounted in the service's
task containers. Defaults to `/<source>` if not specified.
- `uid` and `gid`: The numeric UID or GID which will own the mounted config file
- `uid` and `gid`: The numeric UID or GID that owns the mounted config file
within in the service's task containers. Both default to `0` on Linux if not
specified. Not supported on Windows.
- `mode`: The permissions for the file that will be mounted within the service's
- `mode`: The permissions for the file that is mounted within the service's
task containers, in octal notation. For instance, `0444`
represents world-readable. The default is `0444`. Configs cannot be writable
because they are mounted in a temporary filesystem, so if you set the writable
@ -533,8 +533,8 @@ Specify a service discovery method for external clients connecting to a swarm.
> **[Version 3.3](compose-versioning.md#version-3) only.**
* `endpoint_mode: vip` - Docker assigns the service a virtual IP (VIP),
which acts as the “front end” for clients to reach the service on a
* `endpoint_mode: vip` - Docker assigns the service a virtual IP (VIP)
that acts as the “front end” for clients to reach the service on a
network. Docker routes requests between the client and available worker
nodes for the service, without client knowledge of how many nodes
are participating in the service or their IP addresses or ports.
@ -593,7 +593,7 @@ mode topics.
#### labels
Specify labels for the service. These labels will *only* be set on the service,
Specify labels for the service. These labels are *only* set on the service,
and *not* on any containers for the service.
version: "3"
@ -703,7 +703,7 @@ services or containers in a swarm.
on non swarm deployments, use
[Compose file format version 2 CPU, memory, and other resource
options](compose-file-v2.md#cpu-and-other-resources).
If you have further questions, please refer to the discussion on the GitHub
If you have further questions, refer to the discussion on the GitHub
issue [docker/compose/4513](https://github.com/docker/compose/issues/4513){: target="_blank" class="_"}.
{: .important}
@ -755,7 +755,7 @@ updates.
(default: `pause`).
- `monitor`: Duration after each task update to monitor for failure `(ns|us|ms|s|m|h)` (default 0s).
- `max_failure_ratio`: Failure rate to tolerate during an update.
- `order`: Order of operations during updates. One of `stop-first` (old task is stopped before starting new one), or `start-first` (new task is started first, and the running tasks will briefly overlap) (default `stop-first`) **Note**: Only supported for v3.4 and higher.
- `order`: Order of operations during updates. One of `stop-first` (old task is stopped before starting new one), or `start-first` (new task is started first, and the running tasks briefly overlap) (default `stop-first`) **Note**: Only supported for v3.4 and higher.
> **Note**: `order` is only supported for v3.4 and higher of the compose
file format.
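A minimal sketch (the service name, image, and timing values are assumed):

```yaml
version: '3.4'
services:
  web:
    image: web  # assumed image name
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
        order: stop-first
```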
@ -793,10 +793,10 @@ The following sub-options (supported for `docker compose up` and `docker compose
- [sysctls](#sysctls)
- [userns_mode](#userns_mode)
>**Tip:** See also, the section on [how to configure volumes
>**Tip:** See the section on [how to configure volumes
for services, swarms, and docker-stack.yml
files](#volumes-for-services-swarms-and-stack-files). Volumes _are_ supported
but in order to work with swarms and services, they must be configured properly,
files](#volumes-for-services-swarms-and-stack-files). Volumes _are_ supported
but to work with swarms and services, they must be configured
as named volumes or associated with services that are constrained to nodes with
access to the requisite volumes.
@ -814,14 +814,15 @@ client create option.
### depends_on
Express dependency between services, which has two effects:
Express dependency between services. Service dependencies cause the following
behaviors:
- `docker-compose up` will start services in dependency order. In the following
example, `db` and `redis` will be started before `web`.
- `docker-compose up` starts services in dependency order. In the following
example, `db` and `redis` are started before `web`.
- `docker-compose up SERVICE` will automatically include `SERVICE`'s
dependencies. In the following example, `docker-compose up web` will also
create and start `db` and `redis`.
- `docker-compose up SERVICE` automatically includes `SERVICE`'s
dependencies. In the following example, `docker-compose up web` also
creates and starts `db` and `redis`.
Simple example:
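A minimal sketch of such a configuration (using the `web`, `db`, and `redis` service names from the surrounding text):

```yaml
version: '3'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
```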
@ -839,7 +840,7 @@ Simple example:
> There are several things to be aware of when using `depends_on`:
>
> - `depends_on` will not wait for `db` and `redis` to be "ready" before
> - `depends_on` does not wait for `db` and `redis` to be "ready" before
> starting `web` - only until they have been started. If you need to wait
> for a service to be ready, see [Controlling startup order](/compose/startup-order.md)
> for more on this problem and strategies for solving it.
@ -901,10 +902,10 @@ The entrypoint can also be a list, in a manner similar to
- memory_limit=-1
- vendor/bin/phpunit
> **Note**: Setting `entrypoint` will both override any default entrypoint set
> **Note**: Setting `entrypoint` both overrides any default entrypoint set
> on the service's image with the `ENTRYPOINT` Dockerfile instruction, *and*
> clear out any default command on the image - meaning that if there's a `CMD`
> instruction in the Dockerfile, it will be ignored.
> clears out any default command on the image - meaning that if there's a `CMD`
> instruction in the Dockerfile, it is ignored.
### env_file
@ -925,19 +926,20 @@ empty or undefined.
- /opt/secrets.env
Compose expects each line in an env file to be in `VAR=VAL` format. Lines
beginning with `#` (i.e. comments) are ignored, as are blank lines.
beginning with `#` are treated as comments and are ignored. Blank lines are
also ignored.
# Set Rails/Rack environment
RACK_ENV=development
> **Note**: If your service specifies a [build](#build) option, variables
> defined in environment files will _not_ be automatically visible during the
> defined in environment files are _not_ automatically visible during the
> build. Use the [args](#args) sub-option of `build` to define build-time
> environment variables.
The value of `VAL` is used as is and not modified at all. For example, if the
value is surrounded by quotes (as is often the case of shell variables), the
quotes will be included in the value passed to Compose.
quotes are included in the value passed to Compose.
Keep in mind that _the order of files in the list is significant in determining
the value assigned to a variable that shows up more than once_. The files in the
@ -968,7 +970,7 @@ and
VAR=hello
```
$VAR will be `hello`.
$VAR is `hello`.
### environment
@ -990,7 +992,7 @@ machine Compose is running on, which can be helpful for secret or host-specific
- SESSION_SECRET
> **Note**: If your service specifies a [build](#build) option, variables
> defined in `environment` will _not_ be automatically visible during the
> defined in `environment` are _not_ automatically visible during the
> build. Use the [args](#args) sub-option of `build` to define build-time
> environment variables.
@ -1018,7 +1020,7 @@ specifying both the container name and the link alias (`CONTAINER:ALIAS`).
> **Notes:**
>
> If you're using the [version 2 or above file format](compose-versioning.md#version-2), the externally-created containers
must be connected to at least one of the same networks as the service which is
must be connected to at least one of the same networks as the service that is
linking to them. [Links](compose-file-v2#links) are a
legacy option. We recommend using [networks](#networks) instead.
>
@ -1033,7 +1035,7 @@ Add hostname mappings. Use the same values as the docker client `--add-host` par
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
An entry with the ip address and hostname will be created in `/etc/hosts` inside containers for this service, e.g:
An entry with the IP address and hostname is created in `/etc/hosts` inside containers for this service. For example:
162.242.195.82 somehost
50.31.209.229 otherhost
@ -1136,7 +1138,7 @@ a link alias (`SERVICE:ALIAS`), or just the service name.
- db:database
- redis
Containers for the linked service will be reachable at a hostname identical to
Containers for the linked service are reachable at a hostname identical to
the alias, or the service name if no alias was specified.
Links are not required to enable services to communicate - by default,
@ -1149,7 +1151,7 @@ Links also express dependency between services in the same way as
> **Notes**
>
> * If you define both links and [networks](#networks), services with
> links between them must share at least one network in common in order to
> links between them must share at least one network in common to
> communicate.
>
> * This option is ignored when
@ -1177,7 +1179,7 @@ The default value is json-file.
> **Note**: Only the `json-file` and `journald` drivers make the logs
available directly from `docker-compose up` and `docker-compose logs`.
Using any other driver will not print any logs.
Using any other driver does not print any logs.
Specify logging options for the logging driver with the ``options`` key, as with the ``--log-opt`` option for `docker run`.
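For instance, a sketch using the `syslog` driver (the address is assumed for illustration):

```yaml
logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"
```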
@ -1254,7 +1256,7 @@ Aliases (alternative hostnames) for this service on the network. Other container
Since `aliases` is network-scoped, the same service can have different aliases on different networks.
> **Note**: A network-wide alias can be shared by multiple containers, and even by multiple services. If it is, then exactly which container the name will resolve to is not guaranteed.
> **Note**: A network-wide alias can be shared by multiple containers, and even by multiple services. If it is, then exactly which container the name resolves to is not guaranteed.
The general format is shown here.
@ -1346,7 +1348,7 @@ networks:
Sets the PID mode to the host PID mode. This turns on sharing of the PID
address space between the container and the host operating system. Containers
launched with this flag will be able to access and manipulate other
launched with this flag can access and manipulate other
containers in the bare-metal machine's namespace and vice versa.
### ports
@ -1356,11 +1358,11 @@ Expose ports.
#### Short syntax
Either specify both ports (`HOST:CONTAINER`), or just the container
port (a random host port will be chosen).
port (an ephemeral host port is chosen).
> **Note**: When mapping ports in the `HOST:CONTAINER` format, you may experience
> erroneous results when using a container port lower than 60, because YAML will
> parse numbers in the format `xx:yy` as sexagesimal (base 60). For this reason,
> erroneous results when using a container port lower than 60, because YAML
> parses numbers in the format `xx:yy` as a base-60 value. For this reason,
> we recommend always explicitly specifying your port mappings as strings.
ports:
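  # illustrative mappings (assumed); quoted strings avoid the base-60 parsing issue
  - "3000"
  - "8000:8000"
  - "49100:22"
  - "127.0.0.1:8001:8001"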
@ -1382,7 +1384,7 @@ expressed in the short form.
- `published`: the publicly exposed port
- `protocol`: the port protocol (`tcp` or `udp`)
- `mode`: `host` for publishing a host port on each node, or `ingress` for a swarm
mode port which will be load balanced.
mode port to be load balanced.
```none
ports:
@ -1402,7 +1404,7 @@ configuration. Two different syntax variants are supported.
> **Note**: The secret must already exist or be
> [defined in the top-level `secrets` configuration](#secrets-configuration-reference)
> of this stack file, or stack deployment will fail.
> of this stack file, or stack deployment fails.
For more information on secrets, see [secrets](/engine/swarm/secrets.md).
@ -1444,15 +1446,15 @@ The long syntax provides more granularity in how the secret is created within
the service's task containers.
- `source`: The name of the secret as it exists in Docker.
- `target`: The name of the file that will be mounted in `/run/secrets/` in the
- `target`: The name of the file to be mounted in `/run/secrets/` in the
service's task containers. Defaults to `source` if not specified.
- `uid` and `gid`: The numeric UID or GID which will own the file within
- `uid` and `gid`: The numeric UID or GID that owns the file within
`/run/secrets/` in the service's task containers. Both default to `0` if not
specified.
- `mode`: The permissions for the file that will be mounted in `/run/secrets/`
- `mode`: The permissions for the file to be mounted in `/run/secrets/`
in the service's task containers, in octal notation. For instance, `0444`
represents world-readable. The default in Docker 1.13.1 is `0000`, but will
be `0444` in the future. Secrets cannot be writable because they are mounted
represents world-readable. The default in Docker 1.13.1 is `0000`, but is
`0444` in newer versions. Secrets cannot be writable because they are mounted
in a temporary filesystem, so if you set the writable bit, it is ignored. The
executable bit can be set. If you aren't familiar with UNIX file permission
modes, you may find this
@ -1515,7 +1517,7 @@ SIGKILL.
### stop_signal
Sets an alternative signal to stop the container. By default `stop` uses
SIGTERM. Setting an alternative signal using `stop_signal` will cause
SIGTERM. Setting an alternative signal using `stop_signal` causes
`stop` to send that signal instead.
stop_signal: SIGUSR1
@ -1624,7 +1626,7 @@ volumes:
Optionally specify a path on the host machine
(`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`).
You can mount a relative path on the host, which will expand relative to
You can mount a relative path on the host, which expands relative to
the directory of the Compose configuration file being used. Relative paths
should always begin with `.` or `..`.
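For example (paths assumed for illustration):

```yaml
volumes:
  # resolved relative to the directory of the Compose file
  - ./cache:/tmp/cache
  - ~/configs:/etc/configs/:ro
```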
@ -1654,7 +1656,7 @@ expressed in the short form.
- `source`: the source of the mount, a path on the host for a bind mount, or the
name of a volume defined in the
[top-level `volumes` key](#volume-configuration-reference). Not applicable for a tmpfs mount.
- `target`: the path in the container where the volume will be mounted
- `target`: the path in the container where the volume is mounted
- `read_only`: flag to set the volume as read-only
- `bind`: configure additional bind options
- `propagation`: the propagation mode used for the bind
@ -1694,7 +1696,7 @@ volumes:
When working with services, swarms, and `docker-stack.yml` files, keep in mind
that the tasks (containers) backing a service can be deployed on any node in a
swarm, which may be a different node each time the service is updated.
swarm, and this may be a different node each time the service is updated.
In the absence of having named volumes with specified sources, Docker creates an
anonymous volume for each task backing a service. Anonymous volumes do not
@ -1708,7 +1710,7 @@ volume present.
As an example, the `docker-stack.yml` file for the
[votingapp sample in Docker
Labs](https://github.com/docker/labs/blob/master/beginner/chapters/votingapp.md) defines a service called `db` that runs a `postgres` database. It is
configured as a named volume in order to persist the data on the swarm,
configured as a named volume to persist the data on the swarm,
_and_ is constrained to run only on `manager` nodes. Here is the relevant snippet from that file:
```none
@ -1764,7 +1766,7 @@ volume mounts (shared filesystems)](/docker-for-mac/osxfs-caching.md).
### restart
`no` is the default restart policy, and it will not restart a container under
`no` is the default restart policy, and it does not restart a container under
any circumstance. When `always` is specified, the container always restarts. The
`on-failure` policy restarts a container if the exit code indicates an
on-failure error.
@ -1773,7 +1775,7 @@ on-failure error.
restart: always
restart: on-failure
restart: unless-stopped
> **Note**: This option is ignored when
> [deploying a stack in swarm mode](/engine/reference/commandline/stack_deploy.md)
> with a (version 3) Compose file. Use [restart_policy](#restart_policy) instead.
@ -1828,7 +1830,7 @@ that looks like this:
1gb
The supported units are `b`, `k`, `m` and `g`, and their alternative notation `kb`,
`mb` and `gb`. Please note that decimal values are not supported at this time.
`mb` and `gb`. Decimal values are not supported at this time.
## Volume configuration reference
@ -1862,15 +1864,15 @@ up:
volumes:
data-volume:
An entry under the top-level `volumes` key can be empty, in which case it will
use the default driver configured by the Engine (in most cases, this is the
An entry under the top-level `volumes` key can be empty, in which case it
uses the default driver configured by the Engine (in most cases, this is the
`local` driver). Optionally, you can configure it with the following keys:
### driver
Specify which volume driver should be used for this volume. Defaults to whatever
driver the Docker Engine has been configured to use, which in most cases is
`local`. If the driver is not available, the Engine will return an error when
`local`. If the driver is not available, the Engine returns an error when
`docker-compose up` tries to create the volume.
driver: foobar
@ -1888,14 +1890,14 @@ documentation for more information. Optional.
### external
If set to `true`, specifies that this volume has been created outside of
Compose. `docker-compose up` will not attempt to create it, and will raise
Compose. `docker-compose up` does not attempt to create it, and raises
an error if it doesn't exist.
`external` cannot be used in conjunction with other volume configuration keys
(`driver`, `driver_opts`).
In the example below, instead of attempting to create a volume called
`[projectname]_data`, Compose will look for an existing volume simply
`[projectname]_data`, Compose looks for an existing volume simply
called `data` and mounts it into the `db` service's containers.
version: '2'
@ -1923,7 +1925,7 @@ refer to it within the Compose file:
> External volumes are always created with docker stack deploy
>
External volumes that do not exist _will be created_ if you use [docker stack
External volumes that do not exist _are created_ if you use [docker stack
deploy](#deploy) to launch the app in [swarm mode](/engine/swarm/index.md)
(instead of [docker compose up](/compose/reference/up.md)). In swarm mode, a
volume is automatically created when it is defined by a service. As service
@ -1956,7 +1958,7 @@ conflicting with those used by other software.
> [Added in version 3.4 file format](compose-versioning.md#version-34)
Set a custom name for this volume. The name field can be used to reference
networks which contain special characters. The name is used as is
volumes that contain special characters. The name is used as is
and will **not** be scoped with the stack name.
version: '3.4'
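# illustrative continuation (assumed)
volumes:
  data:
    name: my-app-data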
@ -1989,10 +1991,10 @@ Networks](https://github.com/docker/labs/blob/master/networking/README.md)
Specify which driver should be used for this network.
The default driver depends on how the Docker Engine you're using is configured,
but in most instances it will be `bridge` on a single host and `overlay` on a
but in most instances it is `bridge` on a single host and `overlay` on a
Swarm.
The Docker Engine will return an error if the driver is not available.
The Docker Engine returns an error if the driver is not available.
driver: overlay
@ -2072,7 +2074,7 @@ documentation for more information. Optional.
Only used when the `driver` is set to `overlay`. If set to `true`, then
standalone containers can attach to this network, in addition to services. If a
standalone container attaches to an overlay network, it can communicate with
services and standalone containers which are also attached to the overlay
services and standalone containers that are also attached to the overlay
network from other Docker daemons.
```yaml
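# illustrative continuation (assumed): a network that standalone containers can join
networks:
  mynet1:
    driver: overlay
    attachable: true
```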
@ -2139,15 +2141,15 @@ conflicting with those used by other software.
### external
If set to `true`, specifies that this network has been created outside of
Compose. `docker-compose up` will not attempt to create it, and will raise
Compose. `docker-compose up` does not attempt to create it, and raises
an error if it doesn't exist.
`external` cannot be used in conjunction with other network configuration keys
(`driver`, `driver_opts`, `ipam`, `internal`).
In the example below, `proxy` is the gateway to the outside world. Instead of
attempting to create a network called `[projectname]_outside`, Compose will
look for an existing network simply called `outside` and connect the `proxy`
attempting to create a network called `[projectname]_outside`, Compose
looks for an existing network simply called `outside` and connects the `proxy`
service's containers to it.
version: '2'
@ -2203,20 +2205,20 @@ It can also be used in conjuction with the `external` property:
## configs configuration reference
The top-level `configs` declaration defines or references
[configs](/engine/swarm/configs.md) which can be granted to the services in this
[configs](/engine/swarm/configs.md) that can be granted to the services in this
stack. The source of the config is either `file` or `external`.
- `file`: The config is created with the contents of the file at the specified
path.
- `external`: If set to true, specifies that this config has already been
created. Docker will not attempt to create it, and if it does not exist, a
created. Docker does not attempt to create it, and if it does not exist, a
`config not found` error occurs.
- `name`: The name of the config object in Docker. This field can be used to
reference configs which contain special characters. The name is used as is
reference configs that contain special characters. The name is used as is
and will **not** be scoped with the stack name. Introduced in version 3.5
file format.
In this example, `my_first_config` will be created (as
In this example, `my_first_config` is created (as
`<stack_name>_my_first_config`) when the stack is deployed,
and `my_second_config` already exists in Docker.
@ -2229,7 +2231,7 @@ configs:
```
Another variant for external configs is when the name of the config in Docker
is different from the name that will exist within the service. The following
is different from the name that exists within the service. The following
example modifies the previous one to use the external config called
`redis_config`.
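A minimal sketch of this variant (the file path is assumed; the names come from the surrounding text):

```yaml
configs:
  my_first_config:
    file: ./config_data
  my_second_config:
    external:
      name: redis_config
```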
@ -2250,20 +2252,20 @@ stack.
## secrets configuration reference
The top-level `secrets` declaration defines or references
[secrets](/engine/swarm/secrets.md) which can be granted to the services in this
[secrets](/engine/swarm/secrets.md) that can be granted to the services in this
stack. The source of the secret is either `file` or `external`.
- `file`: The secret is created with the contents of the file at the specified
path.
- `external`: If set to true, specifies that this secret has already been
created. Docker will not attempt to create it, and if it does not exist, a
created. Docker does not attempt to create it, and if it does not exist, a
`secret not found` error occurs.
- `name`: The name of the secret object in Docker. This field can be used to
reference secrets which contain special characters. The name is used as is
reference secrets that contain special characters. The name is used as is
and will **not** be scoped with the stack name. Introduced in version 3.5
file format.
In this example, `my_first_secret` will be created (as
In this example, `my_first_secret` is created (as
`<stack_name>_my_first_secret`) when the stack is deployed,
and `my_second_secret` already exists in Docker.
@ -2276,7 +2278,7 @@ secrets:
```
Another variant for external secrets is when the name of the secret in Docker
is different from the name that will exist within the service. The following
is different from the name that exists within the service. The following
example modifies the previous one to use the external secret called
`redis_secret`.
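A minimal sketch of this variant (the file path is assumed; the names come from the surrounding text):

```yaml
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external:
      name: redis_secret
```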


@ -4,8 +4,8 @@ keywords: documentation, docs, docker, compose, orchestration, containers
title: "Quickstart: Compose and Django"
---
This quick-start guide demonstrates how to use Docker Compose to set up and run a simple Django/PostgreSQL app. Before starting, you'll need to have
[Compose installed](install.md).
This quick-start guide demonstrates how to use Docker Compose to set up and run a simple Django/PostgreSQL app. Before starting,
[install Compose](install.md).
### Define the project components
@ -197,7 +197,7 @@ In this section, you set up the database connection for Django.
>
> ALLOWED_HOSTS = ['*']
>
> Please note this value is **not** safe for production usage. Refer to the
> This value is **not** safe for production usage. Refer to the
[Django documentation](https://docs.djangoproject.com/en/1.11/ref/settings/#allowed-hosts) for more information.
5. List running containers.


@ -13,13 +13,14 @@ named `.env` placed in the folder where the `docker-compose` command is executed
These syntax rules apply to the `.env` file:
* Compose expects each line in an `env` file to be in `VAR=VAL` format.
* Lines beginning with `#` (i.e. comments) are ignored.
* Lines beginning with `#` are processed as comments and ignored.
* Blank lines are ignored.
* There is no special handling of quotation marks (i.e. **they will be part of the VAL**, you have been warned ;) ).
* There is no special handling of quotation marks. This means that
**they are part of the VAL**.
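A short sketch of a valid `.env` file (variable names assumed); in the last line, the quote characters become part of the value:

```none
# comments and blank lines are ignored

RACK_ENV=development
GREETING="hello"
```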
## Compose file and CLI variables
The environment variables you define here will be used for [variable
The environment variables you define here are used for [variable
substitution](compose-file/index.md#variable-substitution) in your Compose file,
and can also be used to define the following [CLI
variables](reference/envvars.md):
@ -36,7 +37,7 @@ variables](reference/envvars.md):
> **Notes**
>
> * Values present in the environment at runtime will always override
> * Values present in the environment at runtime always override
those defined inside the `.env` file. Similarly, values passed via command-line
arguments take precedence as well.
>


@ -34,7 +34,7 @@ You can pass environment variables from your shell straight through to a service
environment:
- DEBUG
The value of the `DEBUG` variable in the container will be taken from the value for the same variable in the shell in which Compose is run.
The value of the `DEBUG` variable in the container is taken from the value for the same variable in the shell in which Compose is run.
## The “env_file” configuration option
@ -56,7 +56,7 @@ You can also pass a variable through from the shell by not giving it a value:
docker-compose run -e DEBUG web python console.py
The value of the `DEBUG` variable in the container will be taken from the value for the same variable in the shell in which Compose is run.
The value of the `DEBUG` variable in the container is taken from the value for the same variable in the shell in which Compose is run.
## The “.env” file
@ -88,12 +88,12 @@ Values in the shell take precedence over those specified in the `.env` file. If
services:
web:
image: 'webapp:v2.0'
When values are provided both with a shell `environment` variable and with an `env_file` configuration file, values of environment variables will be taken **from environment key first and then from environment file, then from a `Dockerfile` `ENV`entry**:
When values are provided both with a shell `environment` variable and with an `env_file` configuration file, values of environment variables are taken **from environment key first and then from environment file, then from a `Dockerfile` `ENV` entry**:
$ cat ./Docker/api/api.env
NODE_ENV=test
$ cat docker-compose.yml
version: '3'
services:
@ -104,15 +104,15 @@ When values are provided both with a shell `environment` variable and with an `e
environment:
- NODE_ENV=production
You can test this with for e.g. a _NodeJS_ container in the CLI:
You can test this with a command like the following, which starts a _NodeJS_ container in the CLI:
$ docker-compose exec api node
> process.env.NODE_ENV
'production'
Having any `ARG` or `ENV` setting in a `Dockerfile` will evaluate only if there is _no_ Docker _Compose_ entry for `environment` or `env_file`.
Having any `ARG` or `ENV` setting in a `Dockerfile` evaluates only if there is _no_ Docker _Compose_ entry for `environment` or `env_file`.
_Spcecifics for NodeJS containers:_ If you have a `package.json` entry for `script:start` like `NODE_ENV=test node server.js`, then this will overrule _any_ setting in your `docker-compose.yml` file.
_Specifics for NodeJS containers:_ If you have a `package.json` entry for `script:start` like `NODE_ENV=test node server.js`, then this overrules _any_ setting in your `docker-compose.yml` file.
## Configuring Compose using environment variables
@ -120,4 +120,5 @@ Several environment variables are available for you to configure the Docker Comp
## Environment variables created by links
When using the ['links' option](compose-file.md#links) in a [v1 Compose file](compose-file.md#version-1), environment variables will be created for each link. They are documented in the [Link environment variables reference](link-env-deprecated.md). Please note, however, that these variables are deprecated - you should just use the link alias as a hostname instead.
When using the ['links' option](compose-file.md#links) in a [v1 Compose file](compose-file.md#version-1), environment variables are created for each link. They are documented in
the [Link environment variables reference](link-env-deprecated.md). However, these variables are deprecated. Use the link alias as a hostname instead.


@ -205,7 +205,7 @@ looks like this:
volumes:
- "/data"
In this case, you'll get exactly the same result as if you wrote
In this case, you get exactly the same result as if you wrote
`docker-compose.yml` with the same `build`, `ports` and `volumes` configuration
values defined directly under `web`.
@ -302,7 +302,7 @@ replaces the old value.
> was defined in the original service.
>
> For example, if the original service defines `image: webapp` and the
> local service defines `build: .` then the resulting service will have
> local service defines `build: .` then the resulting service has
> `build: .` and no `image` option.
>
> This is because `build` and `image` cannot be used together in a version 1


@ -62,7 +62,7 @@ environment variable](./reference/envvars.md#compose-project-name).
Typically, you want `docker-compose up`. Use `up` to start or restart all the
services defined in a `docker-compose.yml`. In the default "attached"
mode, you'll see all the logs from all the containers. In "detached" mode (`-d`),
mode, you see all the logs from all the containers. In "detached" mode (`-d`),
Compose exits after starting the containers, but the containers continue to run
in the background.
@ -99,7 +99,7 @@ You should use a `volume` if you want to make changes to your code and see them
reflected immediately, for example when you're developing code and your server
supports hot code reloading or live-reload.
There may be cases where you'll want to use both. You can have the image
There may be cases where you want to use both. You can have the image
include the code using a `COPY`, and use a `volume` in your Compose file to
include the code from the host during development. The volume overrides
the directory contents of the image.


@ -66,7 +66,7 @@ Define the application dependencies.
> loop lets us attempt our request multiple times if the redis service is
> not available. This is useful at startup while the application comes
> online, but also makes our application more resilient if the Redis
> service has to be restarted anytime during the app's lifetime. In a
> service needs to be restarted anytime during the app's lifetime. In a
> cluster, this also helps handling momentary connection drops between
> nodes.
@ -304,9 +304,9 @@ services. For example, to see what environment variables are available to the
$ docker-compose run web env
See `docker-compose --help` to see other available commands. You can also install [command completion](completion.md) for the bash and zsh shell, which will also show you available commands.
See `docker-compose --help` to see other available commands. You can also install [command completion](completion.md) for the bash and zsh shell, which also shows you available commands.
If you started Compose with `docker-compose up -d`, you'll probably want to stop
If you started Compose with `docker-compose up -d`, stop
your services once you've finished with them:
$ docker-compose stop


@ -20,5 +20,5 @@ Compose is a tool for defining and running multi-container Docker applications.
- [Environment file](env-file.md)
To see a detailed list of changes for past and current releases of Docker
Compose, please refer to the
Compose, refer to the
[CHANGELOG](https://github.com/docker/compose/blob/master/CHANGELOG.md).


@ -197,12 +197,12 @@ but may be less stable.
## Upgrading
If you're upgrading from Compose 1.2 or earlier, you'll need to remove or
If you're upgrading from Compose 1.2 or earlier, remove or
migrate your existing containers after upgrading Compose. This is because, as of
version 1.3, Compose uses Docker labels to keep track of containers, and so they
need to be recreated with labels added.
version 1.3, Compose uses Docker labels to keep track of containers, and your
containers need to be recreated to add the labels.
If Compose detects containers that were created without labels, it will refuse
If Compose detects containers that were created without labels, it refuses
to run so that you don't end up with two sets of them. If you want to keep using
your existing containers (for example, because they have data volumes you want
to preserve), you can use Compose 1.5.x to migrate them with the following
@ -213,7 +213,7 @@ docker-compose migrate-to-labels
```
Alternatively, if you're not worried about keeping them, you can remove them.
Compose will just create new ones.
Compose just creates new ones.
```bash
docker rm -f -v myapp_web_1 myapp_db_1 ...


@ -9,7 +9,7 @@ notoc: true
> **Note**: Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](compose-file.md#links) for details.
>
> Environment variables will only be populated if you're using the [legacy version 1 Compose file format](compose-file.md#versioning).
> Environment variables are only populated if you're using the [legacy version 1 Compose file format](compose-file.md#versioning).
Compose uses [Docker links](/engine/userguide/networking/default_network/dockerlinks.md)
to expose services' containers to one another. Each linked container injects a set of
@ -18,22 +18,22 @@ environment variables, each of which begins with the uppercase name of the conta
To see what environment variables are available to a service, run `docker-compose run SERVICE env`.
<b><i>name</i>\_PORT</b><br>
Full URL, e.g. `DB_PORT=tcp://172.17.0.5:5432`
Full URL, such as `DB_PORT=tcp://172.17.0.5:5432`
<b><i>name</i>\_PORT\_<i>num</i>\_<i>protocol</i></b><br>
Full URL, e.g. `DB_PORT_5432_TCP=tcp://172.17.0.5:5432`
Full URL, such as `DB_PORT_5432_TCP=tcp://172.17.0.5:5432`
<b><i>name</i>\_PORT\_<i>num</i>\_<i>protocol</i>\_ADDR</b><br>
Container's IP address, e.g. `DB_PORT_5432_TCP_ADDR=172.17.0.5`
Container's IP address, such as `DB_PORT_5432_TCP_ADDR=172.17.0.5`
<b><i>name</i>\_PORT\_<i>num</i>\_<i>protocol</i>\_PORT</b><br>
Exposed port number, e.g. `DB_PORT_5432_TCP_PORT=5432`
Exposed port number, such as `DB_PORT_5432_TCP_PORT=5432`
<b><i>name</i>\_PORT\_<i>num</i>\_<i>protocol</i>\_PROTO</b><br>
Protocol (tcp or udp), e.g. `DB_PORT_5432_TCP_PROTO=tcp`
Protocol (tcp or udp), such as `DB_PORT_5432_TCP_PROTO=tcp`
<b><i>name</i>\_NAME</b><br>
Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1`
Fully qualified container name, such as `DB_1_NAME=/myapp_web_1/myapp_db_1`
## Related information


@ -56,9 +56,9 @@ look like `postgres://{DOCKER_IP}:8001`.
## Update containers
If you make a configuration change to a service and run `docker-compose up` to update it, the old container will be removed and the new one will join the network under a different IP address but the same name. Running containers will be able to look up that name and connect to the new address, but the old address will stop working.
If you make a configuration change to a service and run `docker-compose up` to update it, the old container is removed and the new one joins the network under a different IP address but the same name. Running containers can look up that name and connect to the new address, but the old address stops working.
If any containers have connections open to the old container, they will be closed. It is a container's responsibility to detect this condition, look up the name again and reconnect.
If any containers have connections open to the old container, they are closed. It is a container's responsibility to detect this condition, look up the name again and reconnect.
## Links
@ -66,7 +66,7 @@ Links allow you to define extra aliases by which a service is reachable from ano
version: "3"
services:
web:
build: .
links:
@ -78,11 +78,11 @@ See the [links reference](compose-file.md#links) for more information.
## Multi-host networking
> **Note**: The instructions in this section refer to [legacy Docker Swarm](/compose/swarm.md) operations, and will only work when targeting a legacy Swarm cluster. For instructions on deploying a compose project to the newer integrated swarm mode, consult the [Docker Stacks](/compose/bundles.md) documentation.
> **Note**: The instructions in this section refer to [legacy Docker Swarm](/compose/swarm.md) operations, and only work when targeting a legacy Swarm cluster. For instructions on deploying a compose project to the newer integrated swarm mode, consult the [Docker Stacks](/compose/bundles.md) documentation.
When [deploying a Compose application to a Swarm cluster](swarm.md), you can make use of the built-in `overlay` driver to enable multi-host communication between containers with no changes to your Compose file or application code.
Consult the [Getting started with multi-host networking](/engine/userguide/networking/get-started-overlay/) to see how to set up a Swarm cluster. The cluster will use the `overlay` driver by default, but you can specify it explicitly if you prefer - see below for how to do this.
Consult the [Getting started with multi-host networking](/engine/userguide/networking/get-started-overlay/) to see how to set up a Swarm cluster. The cluster uses the `overlay` driver by default, but you can specify it explicitly if you prefer - see below for how to do this.
## Specify custom networks
@ -94,7 +94,7 @@ Here's an example Compose file defining two custom networks. The `proxy` service
version: "3"
services:
proxy:
build: ./proxy
networks:
@ -133,7 +133,7 @@ Instead of (or as well as) specifying your own networks, you can also change the
version: "3"
services:
web:
build: .
ports:
@ -155,4 +155,4 @@ If you want your containers to join a pre-existing network, use the [`external`
external:
name: my-pre-existing-network
Instead of attempting to create a network called `[projectname]_default`, Compose will look for a network called `my-pre-existing-network` and connect your app's containers to it.
Instead of attempting to create a network called `[projectname]_default`, Compose looks for a network called `my-pre-existing-network` and connects your app's containers to it.


@ -24,8 +24,7 @@ anywhere.
2. Define the services that make up your app in `docker-compose.yml`
so they can be run together in an isolated environment.
3. Lastly, run
`docker-compose up` and Compose will start and run your entire app.
3. Run `docker-compose up` and Compose starts and runs your entire app.
A `docker-compose.yml` looks like this:
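A representative sketch (the service names, ports, and paths are assumed for illustration):

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: redis
```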
@ -79,7 +78,7 @@ The features of Compose that make it effective are:
Compose uses a project name to isolate environments from each other. You can make use of this project name in several different contexts:
* on a dev host, to create multiple copies of a single environment (e.g., you want to run a stable copy for each feature branch of a project)
* on a dev host, to create multiple copies of a single environment, such as when you want to run a stable copy for each feature branch of a project
* on a CI server, to keep builds from interfering with each other, you can set
the project name to a unique build number
* on a shared host or dev host, to prevent different projects, which may use the
@ -167,7 +166,7 @@ For details on using production-oriented features, see
## Release notes
To see a detailed list of changes for past and current releases of Docker
Compose, please refer to the
Compose, refer to the
[CHANGELOG](https://github.com/docker/compose/blob/master/CHANGELOG.md).
## Getting help
@ -176,11 +175,11 @@ Docker Compose is under active development. If you need help, would like to
contribute, or simply want to talk about the project with like-minded
individuals, we have a number of open channels for communication.
* To report bugs or file feature requests: please use the [issue tracker on Github](https://github.com/docker/compose/issues).
* To report bugs or file feature requests: use the [issue tracker on Github](https://github.com/docker/compose/issues).
* To talk about the project with people in real time: please join the
* To talk about the project with people in real time: join the
`#docker-compose` channel on freenode IRC.
* To contribute code or documentation changes: please submit a [pull request on Github](https://github.com/docker/compose/pulls).
* To contribute code or documentation changes: submit a [pull request on Github](https://github.com/docker/compose/pulls).
For more information and resources, please visit the [Getting Help project page](/opensource/get-help/).
For more information and resources, visit the [Getting Help project page](/opensource/get-help/).


@ -14,18 +14,18 @@ up your application, you can run Compose apps on a Swarm cluster.
### Modify your Compose file for production
You'll almost certainly want to make changes to your app configuration that are
more appropriate to a live environment. These changes may include:
You probably need to make changes to your app configuration to make it ready for
production. These changes may include:
- Removing any volume bindings for application code, so that code stays inside
the container and can't be changed from outside
- Binding to different ports on the host
- Setting environment variables differently (e.g., to decrease the verbosity of
- Setting environment variables differently, such as when you need to decrease the verbosity of
logging, or to enable email sending
- Specifying a restart policy (e.g., `restart: always`) to avoid downtime
- Adding extra services (e.g., a log aggregator)
- Specifying a restart policy like `restart: always` to avoid downtime
- Adding extra services such as a log aggregator
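As a sketch, an override file applying some of these changes might look like the following (service names and values assumed):

```yaml
version: "3"
services:
  web:
    ports:
      - "80:8000"
    environment:
      PRODUCTION: 'true'
    restart: always
```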
For this reason, you'll probably want to define an additional Compose file, say
For this reason, consider defining an additional Compose file, say
`production.yml`, which specifies production-appropriate
configuration. This configuration file only needs to include the changes you'd
like to make from the original Compose file. The additional Compose file
@ -41,14 +41,14 @@ complete example.
### Deploying changes
When you make changes to your app code, you'll need to rebuild your image and
When you make changes to your app code, remember to rebuild your image and
recreate your app's containers. To redeploy a service called
`web`, you would use:
`web`, use:
$ docker-compose build web
$ docker-compose up --no-deps -d web
This will first rebuild the image for `web` and then stop, destroy, and recreate
This first rebuilds the image for `web` and then stops, destroys, and recreates
*just* the `web` service. The `--no-deps` flag prevents Compose from also
recreating any services which `web` depends on.
@ -62,7 +62,7 @@ remote Docker hosts very easy, and is recommended even if you're not deploying
remotely.
Once you've set up your environment variables, all the normal `docker-compose`
commands will work with no further configuration.
commands work with no further configuration.
### Running Compose on a Swarm cluster


@ -4,15 +4,14 @@ keywords: documentation, docs, docker, compose, orchestration, containers
title: "Quickstart: Compose and Rails"
---
This Quickstart guide will show you how to use Docker Compose to set up and run
a Rails/PostgreSQL app. Before starting, you'll need to have [Compose
installed](install.md).
This Quickstart guide shows you how to use Docker Compose to set up and run
a Rails/PostgreSQL app. Before starting, [install Compose](install.md).
### Define the project
Start by setting up the four files you'll need to build the app. First, since
Start by setting up the four files needed to build the app. First, since
your app is going to run inside a Docker container containing all of its
dependencies, you'll need to define exactly what needs to be included in the
dependencies, define exactly what needs to be included in the
container. This is done using a file called `Dockerfile`. To begin with, the
Dockerfile consists of:
@ -25,7 +24,7 @@ Dockerfile consists of:
RUN bundle install
COPY . /myapp
That'll put your application code inside an image that will build a container
That'll put your application code inside an image that builds a container
with Ruby, Bundler and all your dependencies inside it. For more information on
how to write Dockerfiles, see the [Docker user
guide](/engine/tutorials/dockerimages.md#building-an-image-from-a-dockerfile)
@ -37,7 +36,7 @@ in a moment by `rails new`.
source 'https://rubygems.org'
gem 'rails', '5.0.0.1'
You'll need an empty `Gemfile.lock` in order to build our `Dockerfile`.
Create an empty `Gemfile.lock` to build our `Dockerfile`.
touch Gemfile.lock
@ -71,8 +70,8 @@ using [docker-compose run](/compose/reference/run/):
docker-compose run web rails new . --force --database=postgresql
First, Compose will build the image for the `web` service using the
`Dockerfile`. Then it will run `rails new` inside a new container, using that
First, Compose builds the image for the `web` service using the
`Dockerfile`. Then it runs `rails new` inside a new container, using that
image. Once it's done, you should have generated a fresh app.
List the files.
@ -240,7 +239,7 @@ To restart the application:
### Rebuild the application
If you make changes to the Gemfile or the Compose file to try out some different
configurations, you will need to rebuild. Some changes will require only
configurations, you need to rebuild. Some changes require only
`docker-compose up --build`, but a full rebuild requires a re-run of
`docker-compose run web bundle install` to sync changes in the `Gemfile.lock` to
the host, followed by `docker-compose up --build`.


@ -17,9 +17,9 @@ Options:
--build-arg key=val Set build-time variables for one service.
```
Services are built once and then tagged, by default as `project_service`, e.g.,
`composetest_db`. If the Compose file specifies an
[image](/compose/compose-file/index.md#image) name, the image will be
Services are built once and then tagged, by default as `project_service`. For
example, `composetest_db`. If the Compose file specifies an
[image](/compose/compose-file/index.md#image) name, the image is
tagged with that name, substituting any variables beforehand. See [variable
substitution](#variable-substitution).


@ -22,4 +22,4 @@ Images must have digests stored, which requires interaction with a
Docker registry. If digests aren't stored for all images, you can fetch
them with `docker-compose pull` or `docker-compose push`. To push images
automatically when bundling, pass `--push-images`. Only services with
a `build` option specified will have their images pushed.
a `build` option specified have their images pushed.


@ -88,7 +88,7 @@ Supported values: `true` or `1` to enable, `false` or `0` to disable.
## COMPOSE\_PATH\_SEPARATOR
If set, the value of the `COMPOSE_FILE` environment variable will be separated
If set, the value of the `COMPOSE_FILE` environment variable is separated
using this character as path separator.
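For example (file names assumed), using `:` as the separator:

```bash
export COMPOSE_PATH_SEPARATOR=:
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
docker-compose up -d
```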


@ -14,7 +14,7 @@ Options:
Stream container events for every container in the project.
With the `--json` flag, a json object will be printed one per line with the
With the `--json` flag, a json object is printed one per line with the
format:
```


@ -20,4 +20,4 @@ Options:
This is the equivalent of `docker exec`. With this subcommand you can run arbitrary
commands in your services. By default, commands allocate a TTY, so you can
do e.g. `docker-compose exec web sh` to get an interactive prompt.
use a command such as `docker-compose exec web sh` to get an interactive prompt.


@ -101,7 +101,7 @@ webapp:
```
If the `docker-compose.admin.yml` also specifies this same service, any matching
fields will override the previous file. New values, add to the `webapp` service
fields override the previous file. New values add to the `webapp` service
configuration.
```


@ -34,7 +34,7 @@ services:
- db
```
If you run `docker-compose pull ServiceName` in the same directory as the `docker-compose.yml` file that defines the service, Docker will pull the associated image. For example, to call the `postgres` image configured as the `db` service in our example, you would run `docker-compose pull db`.
If you run `docker-compose pull ServiceName` in the same directory as the `docker-compose.yml` file that defines the service, Docker pulls the associated image. For example, to pull the `postgres` image configured as the `db` service in our example, you would run `docker-compose pull db`.
```
$ docker-compose pull db


@ -14,6 +14,6 @@ Options:
Restarts all stopped and running services.
If you make changes to your `docker-compose.yml` configuration these changes will not be reflected after running this command.
If you make changes to your `docker-compose.yml` configuration, these changes are not reflected after running this command.
For example, changes to environment variables (which are added after a container is built, but before the container's command is executed) will not be updated after restarting.
For example, changes to environment variables (which are added after a container is built, but before the container's command is executed) are not updated after restarting.


@ -16,12 +16,12 @@ Options:
Removes stopped service containers.
By default, anonymous volumes attached to containers will not be removed. You
By default, anonymous volumes attached to containers are not removed. You
can override this with `-v`. To list all volumes, use `docker volume ls`.
Any data which is not in a volume will be lost.
Any data which is not in a volume is lost.
Running the command with no options will also remove one-off containers created
Running the command with no options also removes one-off containers created
by `docker-compose up` or `docker-compose run`:
```none


@ -49,7 +49,7 @@ If you start a service configured with links, the `run` command first checks to
docker-compose run db psql -h db -U docker
This will open an interactive PostgreSQL shell for the linked `db` container.
This opens an interactive PostgreSQL shell for the linked `db` container.
If you do not want the `run` command to start linked containers, use the `--no-deps` flag:

View File

@ -22,4 +22,4 @@ Numbers are specified as arguments in the form `service=num`. For example:
[Compose file version 3.x](/compose/compose-file/index.md), you can specify
[replicas](/compose/compose-file/index.md#replicas)
under the [deploy](/compose/compose-file/index.md#deploy) key as part of a
service configuration for [Swarm mode](/engine/swarm/). Note that the `deploy` key and its sub-options (including `replicas`) will only work with the `docker stack deploy` command, not `docker compose up` or `docker-compose run`.
service configuration for [Swarm mode](/engine/swarm/). The `deploy` key and its sub-options (including `replicas`) only work with the `docker stack deploy` command, not `docker-compose up` or `docker-compose run`.
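As a sketch (the service name, replica count, and stack name are illustrative):

```bash
# Legacy scaling with docker-compose
docker-compose scale web=3
# With a v3 file that sets deploy.replicas, deploy to a swarm instead
docker stack deploy --compose-file docker-compose.yml mystack
```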

View File

@ -10,7 +10,7 @@ You can control the order of service startup with the
containers in dependency order, where dependencies are determined by
`depends_on`, `links`, `volumes_from`, and `network_mode: "service:..."`.
However, Compose will not wait until a container is "ready" (whatever that means
However, Compose does not wait until a container is "ready" (whatever that means
for your particular application) - only until it's running. There's a good
reason for this.
@ -19,9 +19,9 @@ a subset of a much larger problem of distributed systems. In production, your
database could become unavailable or move hosts at any time. Your application
needs to be resilient to these types of failures.
To handle this, your application should attempt to re-establish a connection to
To handle this, design your application to attempt to re-establish a connection to
the database after a failure. If the application retries the connection,
it should eventually be able to connect to the database.
it can eventually connect to the database.
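As a sketch of that retry idea, assuming a Postgres database reachable at host
`db` and an image that ships the `pg_isready` client tool, an entrypoint script
might poll before starting the app:

```bash
#!/bin/sh
# Keep retrying until the database accepts connections, then start the app
until pg_isready -h db -p 5432; do
  echo "Database unavailable - retrying in 2s"
  sleep 2
done
exec "$@"
```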
The best solution is to perform this check in your application code, both at
startup and whenever a connection is lost for any reason. However, if you don't
@ -31,7 +31,7 @@ script:
- Use a tool such as [wait-for-it](https://github.com/vishnubob/wait-for-it),
[dockerize](https://github.com/jwilder/dockerize), or sh-compatible
[wait-for](https://github.com/Eficode/wait-for). These are small
wrapper scripts which you can include in your application's image and will
wrapper scripts which you can include in your application's image to
poll a given host and port until it's accepting TCP connections.
For example, to use `wait-for-it.sh` or `wait-for` to wrap your service's command:
@ -48,7 +48,7 @@ script:
db:
image: postgres
>**Tip**: There are limitations to this first solution; e.g., it doesn't verify when a specific service is really ready. If you add more arguments to the command, you'll need to use the `bash shift` command with a loop, as shown in the next example.
>**Tip**: There are limitations to this first solution. For example, it doesn't verify when a specific service is really ready. If you add more arguments to the command, use the `bash shift` command with a loop, as shown in the next example.
- Alternatively, write your own wrapper script to perform a more application-specific health
check. For example, you might want to wait until Postgres is definitely

View File

@ -13,8 +13,8 @@ you were using a single Docker host.
The actual extent of integration depends on which version of the [Compose file
format](compose-file.md#versioning) you are using:
1. If you're using version 1 along with `links`, your app will work, but Swarm
will schedule all containers on one host, because links between containers
1. If you're using version 1 along with `links`, your app works, but Swarm
schedules all containers on one host, because links between containers
do not work across hosts with the old networking system.
2. If you're using version 2, your app should work with no changes:
@ -35,12 +35,12 @@ set up a Swarm cluster with [Docker Machine](/machine/overview.md) and the overl
### Building images
Swarm can build an image from a Dockerfile just like a single-host Docker
instance can, but the resulting image will only live on a single node and won't
instance can, but the resulting image only lives on a single node and won't
be distributed to other nodes.
If you want to use Compose to scale the service in question to multiple nodes,
you'll have to build it yourself, push it to a registry (e.g. the Docker Hub)
and reference it from `docker-compose.yml`:
build the image, push it to a registry such as Docker Hub, and reference it
from `docker-compose.yml`:
$ docker build -t myusername/web .
$ docker push myusername/web
@ -56,7 +56,7 @@ and reference it from `docker-compose.yml`:
If a service has multiple dependencies of the type which force co-scheduling
(see [Automatic scheduling](swarm.md#automatic-scheduling) below), it's possible that
Swarm will schedule the dependencies on different nodes, making the dependent
Swarm schedules the dependencies on different nodes, making the dependent
service impossible to schedule. For example, here `foo` needs to be co-scheduled
with `bar` and `baz`:
@ -97,7 +97,7 @@ all three services end up on the same node:
### Host ports and recreating containers
If a service maps a port from the host, e.g. `80:8000`, then you may get an
If a service maps a port from the host, such as `80:8000`, then you may get an
error like this when running `docker-compose up` on it after the first time:
docker: Error response from daemon: unable to find a node that satisfies
@ -130,7 +130,7 @@ There are two viable workarounds for this problem:
web-logs:
driver: custom-volume-driver
- Remove the old container before creating the new one. You will lose any data
- Remove the old container before creating the new one. You lose any data
in the volume.
$ docker-compose stop web
@ -141,7 +141,7 @@ There are two viable workarounds for this problem:
### Automatic scheduling
Some configuration options will result in containers being automatically
Some configuration options result in containers being automatically
scheduled on the same Swarm node to ensure that they work correctly. These are:
- `network_mode: "service:..."` and `network_mode: "container:..."` (and

View File

@ -6,7 +6,7 @@ title: "Quickstart: Compose and WordPress"
You can use Docker Compose to easily run WordPress in an isolated environment
built with Docker containers. This quick-start guide demonstrates how to use
Compose to set up and run WordPress. Before starting, you'll need to have
Compose to set up and run WordPress. Before starting, make sure you have
[Compose installed](/compose/install.md).
### Define the project
@ -17,8 +17,8 @@ Compose to set up and run WordPress. Before starting, you'll need to have
This directory is the context for your application image. The
directory should only contain resources to build that image.
This project directory will contain a `docker-compose.yml` file which will
be complete in itself for a good starter wordpress project.
This project directory contains a `docker-compose.yml` file which
is complete in itself for a good starter WordPress project.
>**Tip**: You can use either a `.yml` or `.yaml` extension for
this file. They both work.
@ -29,7 +29,7 @@ Compose to set up and run WordPress. Before starting, you'll need to have
cd my_wordpress/
3. Create a `docker-compose.yml` file that will start your
3. Create a `docker-compose.yml` file that starts your
`WordPress` blog and a separate `MySQL` instance with a volume
mount for data persistence:
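The file body is elided here; a minimal sketch (image versions, port, and
credentials are illustrative only) could be created like this:

```bash
# Sketch of a starter Compose file; adjust versions and passwords for real use
cat > docker-compose.yml <<'EOF'
version: '3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: somewordpress

volumes:
  db_data:
EOF
```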
@ -112,7 +112,7 @@ At this point, WordPress should be running on port `8000` of your Docker Host,
and you can complete the "famous five-minute installation" as a WordPress
administrator.
> **Note**: The WordPress site will not be immediately available on port `8000`
> **Note**: The WordPress site is not immediately available on port `8000`
because the containers are still being initialized and may take a couple of
minutes before the first load.

View File

@ -116,7 +116,7 @@ to update its RHEL kernel.
$ sudo docker info
```
8. Only users with `sudo` access will be able to run `docker` commands.
8. Only users with `sudo` access can run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.
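For example (log out and back in for the change to take effect):

```bash
# Add the current user to the docker group for non-sudo access
sudo usermod -aG docker $USER
```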
@ -207,7 +207,7 @@ to update its RHEL kernel.
$ sudo docker info
```
6. Only users with `sudo` access will be able to run `docker` commands.
6. Only users with `sudo` access can run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.
@ -286,7 +286,7 @@ to update its RHEL kernel.
$ sudo docker info
```
6. Only users with `sudo` access will be able to run `docker` commands.
6. Only users with `sudo` access can run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.
@ -298,7 +298,7 @@ to update its RHEL kernel.
7. [Configure Btrfs for graph storage](/engine/userguide/storagedriver/btrfs-driver.md).
This is the only graph storage driver supported on SLES.
## Install using packages
If you need to install Docker on an air-gapped system with no access to the

View File

@ -162,7 +162,7 @@ Use these instructions to update APT-based systems.
## Upgrade from a legacy version
Use these instructions if you're upgrading your CS Docker Engine from a version
prior to 1.9. In this case you'll have to first uninstall CS Docker Engine, and
prior to 1.9. In this case, first uninstall CS Docker Engine, and
then install the latest version.
### CentOS 7.1 & RHEL 7.0/7.1

View File

@ -119,7 +119,7 @@ to update its RHEL kernel.
$ sudo docker info
```
8. Only users with `sudo` access will be able to run `docker` commands.
8. Only users with `sudo` access can run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.
@ -209,7 +209,7 @@ to update its RHEL kernel.
$ sudo docker info
```
6. Only users with `sudo` access will be able to run `docker` commands.
6. Only users with `sudo` access can run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.
@ -288,7 +288,7 @@ to update its RHEL kernel.
$ sudo docker info
```
6. Only users with `sudo` access will be able to run `docker` commands.
6. Only users with `sudo` access can run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.

View File

@ -157,7 +157,7 @@ Use these instructions to update APT-based systems.
## Upgrade from a legacy version
Use these instructions if you're upgrading your CS Docker Engine from a version
prior to 1.9. In this case you'll have to first uninstall CS Docker Engine, and
prior to 1.9. In this case, first uninstall CS Docker Engine, and
then install the latest version.
### CentOS 7.1 & RHEL 7.0/7.1

View File

@ -36,7 +36,7 @@ trusted images. After pushing images in the Trusted Registry, you can see
which image tags were signed by viewing the appropriate repositories through
Trusted Registry's web interface.
To configure your Docker client to be able to push signed images to Docker
To configure your Docker client to push signed images to Docker
Trusted Registry refer to the CLI Reference's [Environment Variables
Section](/engine/reference/commandline/cli.md#environment-variables) and
[Notary Section](/engine/reference/commandline/cli.md#notary).
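A sketch of the relevant variables (the DTR address and image name are
illustrative):

```bash
# Enable Docker Content Trust so pushes are signed
export DOCKER_CONTENT_TRUST=1
# Point the client at the registry's Notary server
export DOCKER_CONTENT_TRUST_SERVER=https://dtr.example.org:4443
docker push dtr.example.org/library/golang:1.7
```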

View File

@ -13,7 +13,7 @@ This cert must be accompanied by its private key, entered below.
* *SSL Private Key*: The hash from the private key associated with the provided
SSL Certificate (as a standard x509 key pair).
In order to run, the Trusted Registry requires encrypted communications through
The Trusted Registry requires encrypted communications through
HTTPS/SSL between (a) the Trusted Registry and your Docker Engine(s), and (b)
between your web browser and the Trusted Registry admin server. There are a few
options for setting this up:
@ -45,7 +45,7 @@ use it.
2. If your enterprise can't provide keys, you can use a public Certificate
Authority (CA) like "InstantSSL.com" or "RapidSSL.com" to generate a
certificate. If your certificates are generated using a globally trusted
Certificate Authority, you won't need to install them on all of your
Certificate Authority, you don't need to install them on all of your
client Docker daemons.
3. Use the self-signed registry certificate generated by Docker Trusted
@ -131,7 +131,7 @@ $ sudo /bin/systemctl restart docker.service
#### Docker Machine and Boot2Docker
You'll need to make some persistent changes using `bootsync.sh` in your
You need to make some persistent changes using `bootsync.sh` in your
Boot2Docker-based virtual machine (as documented in [local customization](https://github.com/boot2docker/boot2docker/blob/master/doc/FAQ.md#local-customisation-with-persistent-partition)). To do this:
1. `docker-machine ssh dev` to enter the VM
@ -167,7 +167,7 @@ If for some reason you can't install the certificate chain on a client Docker
host, or your certificates do not have a global CA, you can configure your
Docker daemon to run in "insecure" mode. This is done by adding an extra flag,
`--insecure-registry host-ip|domain-name`, to your client Docker daemon startup
flags. You'll need to restart the Docker daemon for the change to take effect.
flags. Restart the Docker daemon for the change to take effect.
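For example (the registry address is illustrative; on newer Engines the same
option can be set in `/etc/docker/daemon.json` instead):

```bash
# Start the daemon with the extra flag
dockerd --insecure-registry dtr.example.org
```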
This flag means that the communications between your Docker client and the
Trusted Registry server are still encrypted, but the client Docker daemon is not

View File

@ -23,7 +23,7 @@ different storage backend allows you to:
* Take advantage of other features that are critical to your organization
At first, you might have explored Docker Trusted Registry and Docker Engine by
[installing](../install/index.md) them on your system in order to familiarize
[installing](../install/index.md) them on your system to familiarize
yourself with them. However, for various reasons such as deployment purposes or
continuous integration, it makes sense to think about your long-term
organization's needs when selecting a storage backend. The Trusted Registry
@ -223,7 +223,7 @@ include:
2. Select Download to get the text based file. It contains a minimum amount
of information and you're going to need additional data based on your driver and
business requirements.
3. Go [here](/registry/configuration.md#list-of-configuration-options") to see the open source YAML file. Copy the sections you need and paste into your `storage.yml` file. Note that some settings may contradict others, so
3. Go [here](/registry/configuration.md#list-of-configuration-options) to see the open source YAML file. Copy the sections you need and paste into your `storage.yml` file. Some settings may contradict others, so
ensure your choices make sense.
4. Save the YAML file and return to the UI.
5. On the Storage screen, upload the file, review your changes, and click Save.

View File

@ -11,7 +11,7 @@ Docker Trusted Registry (DTR) is designed for high availability.
When you first install DTR, you create a cluster with a single DTR replica.
Replicas are single instances of DTR that can be joined together to form a
cluster.
When joining new replicas to the cluster, you'll be creating new DTR instances
When joining new replicas to the cluster, you create new DTR instances
that are running the same set of services. Any change to the state of an
instance is replicated across all other instances.

View File

@ -11,7 +11,7 @@ By default, you don't need to license your Docker Trusted Registry. When
installing DTR, it automatically starts using the same license file used on
your Docker Universal Control Plane cluster.
However, there are some situations when you have to manually license your
However, there are some situations when you need to manually license your
DTR installation:
* When upgrading to a new major version,

View File

@ -26,7 +26,7 @@ To upgrade to DTR 2.0, you first need to do a fresh installation of DTR 2.0.
This can be done on the same node where DTR 1.4.3 is already running or on a
new node.
If you decide to install the new DTR on the same node, you'll need
If you decide to install the new DTR on the same node, you need
to install it on a port other than 443, since DTR 1.4.3 is already using it.
Use these instructions to install DTR 2.0:

View File

@ -301,9 +301,12 @@ The following notable issues have been remediated:
**DHE 1.0 Upgrade Warning**
Customers who are currently using DHE 1.0 **must** follow the [upgrading instructions](https://forums.docker.com/t/upgrading-docker-hub-enterprise-to-docker-trusted-registry/1925) in our support Knowledge Base. These instructions will show you how to modify existing authentication data and storage volume
settings to move to Docker Trusted Registry. Note that automatic upgrading has
been disabled for DHE users because of these issues.
If you currently use DHE 1.0, you **must** follow the
[upgrading instructions](https://forums.docker.com/t/upgrading-docker-hub-enterprise-to-docker-trusted-registry/1925)
in our support Knowledge Base. These instructions show you how to modify
existing authentication data and storage volume settings to move to Docker
Trusted Registry. Automatic upgrading has been disabled for DHE users because of
these issues.
## Version 1.0.1
(11 May 2015)

View File

@ -9,7 +9,7 @@ image registry like Docker Trusted Registry.
If DTR is using the default configurations or was configured to use
self-signed certificates, you need to configure your Docker Engine to trust DTR.
Otherwise, when you try to login or push and pull images to DTR, you'll get an
Otherwise, when you try to log in or push and pull images to DTR, you get an
error:
```bash

View File

@ -12,9 +12,9 @@ title: Push an image to DTR
Pushing an image to Docker Trusted Registry is the same as pushing an image
to Docker Hub.
Since DTR is secure by default, you need to create the image repository before
being able to push the image to DTR.
you can push the image to DTR.
In this example, we'll create the 'golang' repository in DTR, and push the
In this example, we create the 'golang' repository in DTR, and push the
Golang 1.7 image to it.
## Create a repository

View File

@ -1,6 +1,5 @@
---
description: Learn how to set up organizations to enforce security in Docker Trusted
Registry.
description: Learn how to set up organizations to enforce security in Docker Trusted Registry.
keywords: docker, registry, security, permissions, organizations
redirect_from:
- /docker-trusted-registry/user-management/create-and-manage-orgs/
@ -25,7 +24,7 @@ organization.
![](../images/create-and-manage-orgs-2.png)
Repositories owned by this organization will contain the organization name, so
to pull an image from that repository, you'll use:
to pull an image from that repository, use:
```bash
$ docker pull <dtr-domain-name>/<organization>/<repository>:<tag>
@ -33,7 +32,7 @@ $ docker pull <dtr-domain-name>/<organization>/<repository>:<tag>
Click **Save** to create the organization, and then **click the organization**
to define which users are allowed to manage this
organization. These users will be able to edit the organization settings, edit
organization. These users can edit the organization settings, edit
all repositories owned by the organization, and define the user permissions for
this organization.

View File

@ -14,7 +14,7 @@ A team defines the permissions a set of users have for a set of repositories.
To create a new team, go to the **DTR web UI**, and navigate to the
**Organizations** page.
Then **click the organization** where you want to create the team. In this
example, we'll create the 'billing' team under the 'whale' organization.
example, we create the 'billing' team under the 'whale' organization.
![](../images/create-and-manage-teams-1.png)

View File

@ -20,7 +20,7 @@ While there is a default storage backend, `filesystem`, the Trusted Registry off
At first, you might have explored Docker Trusted Registry and Docker Engine by
[installing](../install/index.md)
them on your system in order to familiarize yourself with them.
them on your system to familiarize yourself with them.
However, for various reasons such as deployment purposes or continuous
integration, it makes sense to think about your long-term organization's needs
when selecting a storage backend. The Trusted Registry natively supports TLS and
@ -198,13 +198,19 @@ include:
**To configure**:
1. Navigate to the Trusted Registry UI > Settings > Storage.
2. Select Download to get the text based file. It contains a minimum amount
of information and you're going to need additional data based on your driver and
business requirements.
3. Go [here](/registry/configuration.md#list-of-configuration-options") to see the open source YAML file. Copy the sections you need and paste into your `storage.yml` file. Note that some settings may contradict others, so
ensure your choices make sense.
4. Save the YAML file and return to the UI.
1. Navigate to the Trusted Registry UI > Settings > Storage.
2. Select Download to get the text based file. It contains a minimum amount of
information and you're going to need additional data based on your driver
and business requirements.
3. Go [here](/registry/configuration.md#list-of-configuration-options) to see
the open source YAML file. Copy the sections you need and paste into your
`storage.yml` file. Some settings may contradict others, so ensure your
choices make sense.
4. Save the YAML file and return to the UI.
5. On the Storage screen, upload the file, review your changes, and click Save.
## See also

View File

@ -8,7 +8,7 @@ title: Use your own certificates
By default the DTR services are exposed using HTTPS, to ensure all
communications between clients and DTR are encrypted. Since DTR
replicas use self-signed certificates for this, when a client accesses
DTR, their browsers won't trust this certificate, so the browser displays a
DTR, their browsers don't trust this certificate, so the browser displays a
warning message.
You can configure DTR to use your own certificates, so that it is automatically
@ -37,7 +37,7 @@ Finally, click **Save** for the changes to take effect.
If you're using certificates issued by a globally trusted certificate authority,
any web browser or client tool should now trust DTR. If you're using an internal
certificate authority, you'll need to [configure your system to trust that
certificate authority, you need to [configure your system to trust that
certificate authority](../repos-and-images/index.md).
## Where to go next

View File

@ -9,7 +9,7 @@ Docker Trusted Registry (DTR) is designed for high availability.
When you first install DTR, you create a cluster with a single DTR replica.
Replicas are single instances of DTR that can be joined together to form a
cluster.
When joining new replicas to the cluster, you'll be creating new DTR instances
When joining new replicas to the cluster, you create new DTR instances
that are running the same set of services. Any change to the state of an
instance is replicated across all other instances.

View File

@ -8,7 +8,7 @@ By default, you don't need to license your Docker Trusted Registry. When
installing DTR, it automatically starts using the same license file used on
your Docker Universal Control Plane cluster.
However, there are some situations when you have to manually license your
However, there are some situations when you need to manually license your
DTR installation:
* When upgrading to a new major version,

View File

@ -10,7 +10,7 @@ image registry like Docker Trusted Registry.
If DTR is using the default configurations or was configured to use
self-signed certificates, you need to configure your Docker Engine to trust DTR.
Otherwise, when you try to login or push and pull images to DTR, you'll get an
Otherwise, when you try to log in or push and pull images to DTR, you get an
error:
```none

View File

@ -7,9 +7,9 @@ title: Push an image to DTR
Pushing an image to Docker Trusted Registry is the same as pushing an image
to Docker Hub.
Since DTR is secure by default, you need to create the image repository before
being able to push the image to DTR.
you can push the image to DTR.
In this example, we'll create the 'golang' repository in DTR, and push the
In this example, we create the 'golang' repository in DTR, and push the
Golang 1.7 image to it.
## Create a repository

View File

@ -23,7 +23,7 @@ organization.
![](../images/create-and-manage-orgs-2.png)
Repositories owned by this organization will contain the organization name, so
to pull an image from that repository, you'll use:
to pull an image from that repository, use:
```bash
$ docker pull <dtr-domain-name>/<organization>/<repository>:<tag>
@ -31,7 +31,7 @@ $ docker pull <dtr-domain-name>/<organization>/<repository>:<tag>
Click **Save** to create the organization, and then **click the organization**
to define which users are allowed to manage this
organization. These users will be able to edit the organization settings, edit
organization. These users can edit the organization settings, edit
all repositories owned by the organization, and define the user permissions for
this organization.

View File

@ -12,7 +12,7 @@ A team defines the permissions a set of users have for a set of repositories.
To create a new team, go to the **DTR web UI**, and navigate to the
**Organizations** page.
Then **click the organization** where you want to create the team. In this
example, we'll create the 'billing' team under the 'whale' organization.
example, we create the 'billing' team under the 'whale' organization.
![](../images/create-and-manage-teams-1.png)

View File

@ -50,7 +50,7 @@ the 'join' command.
|`--etcd-snapshot-count`|Set etcd's number of changes before creating a snapshot.|
|`--ucp-insecure-tls`|Disable TLS verification for UCP|
|`--ucp-ca`|Use a PEM-encoded TLS CA certificate for UCP|
|`--nfs-storage-url`|URL (with IP address or hostname) of the NFS mount if using NFS (e.g. nfs://<ip address>/<mount point>)|
|`--nfs-storage-url`|URL (with IP address or hostname) of the NFS mount if using NFS. For example, `nfs://<ip address>/<mount point>`|
|`--ucp-node`|Specify the host to install Docker Trusted Registry|
|`--replica-id`|Specify the replica ID. Must be unique per replica, leave blank for random|
|`--unsafe`|Enable this flag to skip safety checks when installing or joining|

View File

@ -53,5 +53,5 @@ effect. To have no down time, configure your DTR for high-availability.
|`--etcd-snapshot-count`|Set etcd's number of changes before creating a snapshot.|
|`--ucp-insecure-tls`|Disable TLS verification for UCP|
|`--ucp-ca`|Use a PEM-encoded TLS CA certificate for UCP|
|`--nfs-storage-url`|URL (with IP address or hostname) of the NFS mount if using NFS (e.g. nfs://<ip address>/<mount point>)|
|`--nfs-storage-url`|URL (with IP address or hostname) of the NFS mount if using NFS. For example, `nfs://<ip address>/<mount point>`|
|`--existing-replica-id`|ID of an existing replica in a cluster|

View File

@ -159,7 +159,7 @@ To restore DTR, you need to:
You need to restore DTR on the same UCP cluster where you've created the
backup. If you restore on a different UCP cluster, all DTR resources will be
owned by users that don't exist, so you'll not be able to manage the resources,
owned by users that don't exist, so you can't manage the resources,
even though they're stored in the DTR data store.
When restoring, you need to use the same version of the `docker/dtr` image

View File

@ -25,7 +25,7 @@ organization.
![](../images/create-and-manage-orgs-2.png)
Repositories owned by this organization will contain the organization name, so
to pull an image from that repository, you'll use:
to pull an image from that repository, use:
```bash
$ docker pull <dtr-domain-name>/<organization>/<repository>:<tag>
@ -33,7 +33,7 @@ $ docker pull <dtr-domain-name>/<organization>/<repository>:<tag>
Click **Save** to create the organization, and then **click the organization**
to define which users are allowed to manage this
organization. These users will be able to edit the organization settings, edit
organization. These users can edit the organization settings, edit
all repositories owned by the organization, and define the user permissions for
this organization.

View File

@ -12,7 +12,7 @@ caches together for faster pulls.
Too many levels of chaining might slow down pulls, so you should try different
configurations and benchmark them, to find out the right configuration.
In this example we'll show how to configure two caches. A dedicated cache for
This example shows how to configure two caches: a dedicated cache for
the Asia region that pulls images directly from DTR, and a cache for China, that
pulls images from the Asia cache.
@ -73,7 +73,7 @@ middleware:
- /certs/dtr-ca.pem
```
Both CAs are needed for the downstream cache.
Similarly, the China cache needs to be registered with DTR. See [deploy a simple cache](/datacenter/dtr/2.2/guides/admin/configure/deploy-caches/#deploy-a-simple-cache) for how to use the API.
Ultimately the downstream cache needs to be configured for the user in question.

View File

@ -223,7 +223,7 @@ tab, and change the **Content Cache** settings to use the **region-us** cache.
![](../../../images/cache-docker-images-4.png){: .with-border}
Now when you pull images, you'll be using the cache. To test this, try pulling
Now when you pull images, you use the cache. To test this, try pulling
an image from DTR. You can inspect the logs of the cache service, to validate
that the cache is being used, and troubleshoot problems.
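For example (the image name and the cache's container name are assumptions):

```bash
# Pull an image, then check the cache service logs for activity
docker pull dtr.example.org/library/golang:1.7
docker logs <cache-container-name>
```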

View File

@ -15,16 +15,16 @@ You can learn more about the supported configuration in the
## Get the TLS certificate and keys
Before deploying a DTR cache with TLS, you need to get a public key
certificate for the domain name where you'll deploy the cache. You'll also
certificate for the domain name used to deploy the cache. You also
need the public and private key files for that certificate.
Once you have then, transfer those files to the host where you'll deploy
Once you have them, transfer those files to the host used to deploy
the DTR cache.
## Create the cache configuration
Use SSH to log into the host where you'll deploy the DTR cache, and navigate to
Use SSH to log into the host used to deploy the DTR cache, and navigate to
the directory where you've stored the TLS certificate and keys.
Create the `config.yml` file with the following content:
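The file body is elided in this diff; a minimal sketch using standard registry
TLS options (paths and addresses are assumptions) might start like:

```bash
cat > config.yml <<'EOF'
# Sketch only: standard registry options; adjust paths for your host
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
EOF
```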

View File

@ -26,8 +26,8 @@ Then, as a best practice you should
just for the DTR
integration and apply an IAM policy that ensures the user has limited permissions.
This user only needs permissions to access the bucket that you'll use to store
images, and be able to read, write, and delete files.
This user only needs permissions to access the bucket that you use to store
images, and to read, write, and delete files.
Here's an example of a policy like that:
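A sketch of such a policy (the bucket name is illustrative):

```bash
cat > dtr-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-dtr-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-dtr-bucket/*"
    }
  ]
}
EOF
```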

View File

@ -38,7 +38,7 @@ Here you can configure GC to run **until it's done** or **with a timeout**.
The timeout ensures that your registry will be in read-only mode for a maximum
amount of time.
Select an option (either "Until done" or "For N minutes") and you'll have the
Select an option (either "Until done" or "For N minutes") and you have the
option to configure GC to run via a cron job, with several default crons
provided:
@ -88,7 +88,7 @@ If we delete `example.com/user/blog:latest` but *not*
`example.com/user/blog:1.11.0` we expect that `example.com/user/blog:1.11.0`
can still be pulled.
This means that we can't delete layers when tags or manifests are deleted.
Instead, we need to pause writing and take reference counts to see how many
times a file is used. Only if a file is never used is it safe to delete it.

View File

@ -8,7 +8,7 @@ By default, you don't need to license your Docker Trusted Registry. When
installing DTR, it automatically starts using the same license file used on
your Docker Universal Control Plane cluster.
However, there are some situations when you have to manually license your
However, there are some situations when you need to manually license your
DTR installation:
* When upgrading to a new major version,

View File

@ -51,9 +51,8 @@ with more details on any one of these services:
* Metadata persistence (rethinkdb)
* Content trust (notary)
Note that this endpoint is for checking the health of a *single* replica. To get
the health of every replica in a cluster, querying each replica individiually is
the preferred way to do it in real time.
This endpoint only checks the health of a *single* replica. To get
the health of every replica in a cluster, query each replica individually.
The `/api/v0/meta/cluster_status`
[endpoint](https://docs.docker.com/datacenter/dtr/2.2/reference/api/)

View File

@ -8,7 +8,7 @@ keywords: docker, dtr, tls
By default the DTR services are exposed using HTTPS, to ensure all
communications between clients and DTR are encrypted. Since DTR
replicas use self-signed certificates for this, when a client accesses
DTR, their browsers won't trust this certificate, so the browser displays a
DTR, their browsers don't trust this certificate, so the browser displays a
warning message.
You can configure DTR to use your own certificates, so that it is automatically
@ -37,7 +37,7 @@ Finally, click **Save** for the changes to take effect.
If you're using certificates issued by a globally trusted certificate authority,
any web browser or client tool should now trust DTR. If you're using an internal
certificate authority, you'll need to configure your system to trust that
certificate authority, you need to configure your system to trust that
certificate authority.
## Where to go next

View File

@ -12,7 +12,7 @@ defines the permissions a set of users have for a set of repositories.
To create a new team, go to the **DTR web UI**, and navigate to the
**Organizations** page.
Then **click the organization** where you want to create the team. In this
example, we'll create the 'billing' team under the 'whale' organization.
example, we create the 'billing' team under the 'whale' organization.
![](../../images/create-and-manage-teams-1.png)

View File

@ -68,7 +68,7 @@ Jobs can be in one of the following status:
## Job capacity
Each job runner has a limited capacity and won't claim jobs that require an
Each job runner has a limited capacity and doesn't claim jobs that require a
higher capacity. You can see the capacity of a job runner using the
`GET /api/v0/workers` endpoint:
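For example (credentials and address are illustrative):

```bash
# Query the job runner capacity on a DTR replica
curl -u admin:password -k "https://dtr.example.org/api/v0/workers"
```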
@ -123,8 +123,8 @@ are available:
}
```
Our worker will be able to pick up job id `0` and `2` since it has the capacity
for both, while id `1` will have to wait until the previous scan job is complete:
Our worker can pick up job ids `0` and `2` since it has the capacity
for both, while id `1` needs to wait until the previous scan job is complete:
```json
{

View File

@ -13,8 +13,8 @@ support upgrades according to the following rules:
* When upgrading between minor versions, you can't skip versions, but you can
upgrade from any patch versions of the previous minor version to any patch
version of the current minor version.
* When upgrading between major versions you also have to upgrade one major
version at a time, but you have to upgrade to the earliest available minor
* When upgrading between major versions you also need to upgrade one major
version at a time, but you need to upgrade to the earliest available minor
version. We also strongly recommend upgrading to the latest minor/patch
version for your major version first.

View File

@ -6,8 +6,8 @@ keywords: docker, registry, notary, trust
The Docker CLI client makes it easy to sign images, but to streamline that
process it generates a set of private and public keys that are not tied
to your UCP account. This means that you'll be able to push and sign images to
DTR, but UCP won't trust those images since it doesn't know anything about
to your UCP account. This means that you can push and sign images to
DTR, but UCP doesn't trust those images since it doesn't know anything about
the keys you're using.
So before signing and pushing images to DTR you should:
@ -111,7 +111,7 @@ Import the private key in your UCP bundle into the Notary CLI client:
notary key import <path-to-key.pem>
```
The private key is copied to `~/.docker/trust`, and you'll be prompted for a
The private key is copied to `~/.docker/trust`, and you are prompted for a
password to encrypt it.
You can validate what keys Notary knows about by running:
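Presumably the standard listing command:

```bash
# List the keys known to the local Notary client
notary key list
```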

View File

@ -9,7 +9,7 @@ image registry like Docker Trusted Registry.
If DTR is using the default configurations or was configured to use self-signed
certificates, you need to configure your Docker Engine to trust DTR. Otherwise,
when you try to log in, push to, or pull images from DTR, you'll get an error:
when you try to log in, push to, or pull images from DTR, you get an error:
```none
$ docker login dtr.example.org
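x509: certificate signed by unknown authority
```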

View File

@ -27,15 +27,14 @@ The webhook events you can subscribe to are:
- Security scanner update complete
In order to subscribe to an event you need to be at least an admin of the
particular repository (for repository events) or namespace
(for namespace events). A global administrator can subscribe to any event.
You need to be at least an admin of the repository (for repository events) or
namespace (for namespace events) to subscribe to an event. A global
administrator can subscribe to any event.
For example, a user must be an admin of repository "foo/bar" to subscribe to
its tag push events.
## Subscribing to events
In order to subscribe to events you must send an API query to
To subscribe to events you must send an API query to
`/api/v0/webhooks` with the following payload:
```
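// Illustrative payload; the values are examples only
{
  "type": "TAG_PUSH",
  "key": "foo/bar",
  "endpoint": "https://example.com/webhook-handler"
}
```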
@ -110,12 +109,12 @@ fake data. To send a test payload, fire a `POST` request to
```
Change `type` to the event type that you want to receive. DTR will then send
an example payload to the endpoint specified. Note that the example
an example payload to the endpoint specified. The example
payload sent is always the same.
## Content structure
Note that comments (`// here`) are added for documentation only; they are not
Comments (`// here`) are added for documentation only; they are not
present in POST payloads.
### Repository event content structure
@ -307,7 +306,6 @@ To delete a webhook subscription send a `DELETE` request to
`/api/v0/webhooks/{id}`, replacing `{id}` with the webhook subscription ID
which you would like to delete.
Note that in order to delete a subscription you must be either a system
administrator or an administrator for the resource which the payload subscribes
to. For example, as a normal user you can only delete subscriptions for
repositories which you are an admin of.
Only a system administrator or an administrator for the resource which the
payload subscribes to can delete a subscription. As a normal user, you can only
delete subscriptions for repositories which you administer.

View File

@ -5,9 +5,9 @@ keywords: docker, registry, repository
---
Since DTR is secure by default, you need to create the image repository before
being able to push the image to DTR.
you can push the image to DTR.
In this example, we'll create the 'golang' repository in DTR.
In this example, we create the 'golang' repository in DTR.
## Create a repository

View File

@ -41,7 +41,7 @@ to store the image. In this example the full name of our repository is
### Tag the image
In this example we'll pull the Golang image from Docker Hub and tag with
In this example we pull the Golang image from Docker Hub and tag with
the full DTR and repository name. A tag defines where the image was pulled
from, and where it will be pushed to.
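A sketch of the full flow (the DTR domain and namespace are illustrative):

```bash
# Pull from Docker Hub, re-tag with the full DTR repository name, and push
docker pull golang:1.7
docker tag golang:1.7 dtr.example.org/library/golang:1.7
docker push dtr.example.org/library/golang:1.7
```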

View File

@ -49,7 +49,7 @@ UCP requires that you delegate trust to two different roles:
* `targets/releases`
* `targets/<role>`, where `<role>` is the UCP team the user belongs to
In this example we'll delegate trust to `targets/releases` and `targets/qa`:
In this example we delegate trust to `targets/releases` and `targets/qa`:
```none
# Delegate trust, and add that public key with the role targets/releases
@ -63,9 +63,9 @@ notary delegation add --publish \
--all-paths <user-1-cert.pem> <user-2-cert.pem>
```
Now members from the QA team just have to [configure their Notary CLI client
Now members from the QA team just need to [configure their Notary CLI client
with UCP private keys](../../access-dtr/configure-your-notary-client.md)
to be able to [push and sign images](index.md) into the `dev/nginx` repository.
before [pushing and signing images](index.md) into the `dev/nginx` repository.
## Where to go next

View File

@ -29,7 +29,7 @@ to the Notary Server internal to DTR.
## Sign images that UCP can trust
With the command above you'll be able to sign your DTR images, but UCP won't
With the command above you can sign your DTR images, but UCP doesn't
trust them because it can't tie the private key you're using to sign the images
to your UCP account.
@ -41,8 +41,8 @@ To sign images in a way that UCP trusts them you need to:
In this example we're going to pull an NGINX image from Docker Store,
re-tag it as `dtr.example.org/dev/nginx:1`, push the image to DTR and sign it
in a way that is trusted by UCP. If you manage multiple repositories, you'll
have to do the same procedure for every one of them.
in a way that is trusted by UCP. If you manage multiple repositories, you
need to do the same procedure for every one of them.
### Configure your Notary client
@ -79,7 +79,7 @@ repository.
![DTR](../../../images/sign-an-image-3.png){: .with-border}
DTR shows that the image is signed, but UCP won't trust the image
DTR shows that the image is signed, but UCP doesn't trust the image
because it doesn't have any information about the private keys used to sign
the image.
@ -94,7 +94,7 @@ UCP requires that you delegate trust to two different roles:
* `targets/releases`
* `targets/<role>`, where `<role>` is the UCP team the user belongs to
In this example we'll delegate trust to `targets/releases` and `targets/admin`:
In this example we delegate trust to `targets/releases` and `targets/admin`:
```none
# Delegate trust, and add that public key with the role targets/releases
@ -108,7 +108,7 @@ notary delegation add --publish \
--all-paths <ucp-cert.pem>
```
To push the new signing metadata to the Notary server, you'll have to push
To push the new signing metadata to the Notary server, you need to push
the image again:
```none
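$ docker push dtr.example.org/dev/nginx:1
```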

View File

@ -163,7 +163,7 @@ To restore DTR, you need to:
You need to restore DTR on the same UCP cluster where you've created the
backup. If you restore on a different UCP cluster, all DTR resources will be
owned by users that don't exist, so you'll not be able to manage the resources,
owned by users that don't exist, so you can't manage the resources,
even though they're stored in the DTR data store.
When restoring, you need to use the same version of the `docker/dtr` image

View File

@ -12,7 +12,7 @@ caches together for faster pulls.
Too many levels of chaining might slow down pulls, so you should try different
configurations and benchmark them, to find out the right configuration.
In this example we'll show how to configure two caches. A dedicated cache for
This example shows how to configure two caches: a dedicated cache for
the Asia region that pulls images directly from DTR, and a cache for China, that
pulls images from the Asia cache.

View File

@ -209,7 +209,7 @@ tab, and change the **Content Cache** settings to use the **region-us** cache.
You can also automate this through the `/api/v0/accounts/{username}/settings`
API.
Now when you pull images, you'll be using the cache. To test this, try pulling
Now when you pull images, you use the cache. To test this, try pulling
an image from DTR. You can inspect the logs of the cache service, to validate
that the cache is being used, and troubleshoot problems.

Some files were not shown because too many files have changed in this diff.