Merge pull request #11620 from thaJeztah/old_engines

Remove references to obsolete engine versions
This commit is contained in:
Usha Mandya 2020-10-26 14:49:51 +00:00 committed by GitHub
commit cb4a1293a7
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
42 changed files with 252 additions and 333 deletions

View File

@@ -17,7 +17,7 @@
 ### Project version(s) affected
 <!-- If this problem only affects specific versions of a project (like Docker
-Engine 1.13), tell us here. The fix may need to take that into account. -->
+Engine 19.03), tell us here. The fix may need to take that into account. -->
 ### Suggestions for a fix

View File

@@ -9,7 +9,7 @@ your Compose file and their name start with the `x-` character sequence.
 > of service, volume, network, config and secret definitions.
 ```yaml
-version: '3.4'
+version: "{{ site.compose_file_v3 }}"
 x-custom:
   items:
     - a
@@ -35,7 +35,7 @@ logging:
 You may write your Compose file as follows:
 ```yaml
-version: '3.4'
+version: "{{ site.compose_file_v3 }}"
 x-logging:
   &default-logging
   options:
@@ -56,7 +56,7 @@ It is also possible to partially override values in extension fields using
 the [YAML merge type](https://yaml.org/type/merge.html). For example:
 ```yaml
-version: '3.4'
+version: "{{ site.compose_file_v3 }}"
 x-volumes:
   &default-volume
   driver: foobar-storage
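As a side note for reviewers, the `x-volumes` anchor being templated in this hunk is typically consumed with a YAML merge key. A minimal sketch, not part of the diff (the `foobar-storage` driver comes from the hunk above; the service and volume names are illustrative):

```yaml
version: "3.4"
x-volumes: &default-volume
  driver: foobar-storage

services:
  web:
    image: myapp/web:latest   # hypothetical image, for illustration only
    volumes:
      - data:/data
volumes:
  data:
    <<: *default-volume       # merge in the shared driver setting
    name: "data"
```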

View File

@@ -141,7 +141,7 @@ Docker has created the following demo app that you can deploy to swarm mode or
 to Kubernetes using the `docker stack deploy` command.
 ```yaml
-version: '3.3'
+version: "{{ site.compose_file_v3 }}"
 services:
   web:

View File

@@ -95,7 +95,7 @@ configure this app to use our SQL Server database, and then create a
 > base 10 digits and/or non-alphanumeric symbols.
 ```yaml
-version: "3"
+version: "{{ site.compose_file_v3 }}"
 services:
   web:
     build: .

View File

@@ -12,23 +12,27 @@ container for a service joins the default network and is both *reachable* by
 other containers on that network, and *discoverable* by them at a hostname
 identical to the container name.
-> **Note**: Your app's network is given a name based on the "project name",
+> **Note**
+>
+> Your app's network is given a name based on the "project name",
 > which is based on the name of the directory it lives in. You can override the
 > project name with either the [`--project-name` flag](reference/overview.md)
 > or the [`COMPOSE_PROJECT_NAME` environment variable](reference/envvars.md#compose_project_name).
 For example, suppose your app is in a directory called `myapp`, and your `docker-compose.yml` looks like this:
-version: "3"
-services:
-  web:
-    build: .
-    ports:
-      - "8000:8000"
-  db:
-    image: postgres
-    ports:
-      - "8001:5432"
+```yaml
+version: "{{ site.compose_file_v3 }}"
+services:
+  web:
+    build: .
+    ports:
+      - "8000:8000"
+  db:
+    image: postgres
+    ports:
+      - "8001:5432"
+```
 When you run `docker-compose up`, the following happens:

View File

@@ -68,25 +68,29 @@ one's Docker image (the database just runs on a pre-made PostgreSQL image, and
 the web app is built from the current directory), and the configuration needed
 to link them together and expose the web app's port.
-version: '3'
-services:
-  db:
-    image: postgres
-    volumes:
-      - ./tmp/db:/var/lib/postgresql/data
-    environment:
-      POSTGRES_PASSWORD: password
-  web:
-    build: .
-    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
-    volumes:
-      - .:/myapp
-    ports:
-      - "3000:3000"
-    depends_on:
-      - db
+```yaml
+version: "{{ site.compose_file_v3 }}"
+services:
+  db:
+    image: postgres
+    volumes:
+      - ./tmp/db:/var/lib/postgresql/data
+    environment:
+      POSTGRES_PASSWORD: password
+  web:
+    build: .
+    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
+    volumes:
+      - .:/myapp
+    ports:
+      - "3000:3000"
+    depends_on:
+      - db
+```
->**Tip**: You can use either a `.yml` or `.yaml` extension for this file.
+> **Tip**
+>
+> You can use either a `.yml` or `.yaml` extension for this file.
 ### Build the project

View File

@@ -7,10 +7,10 @@ redirect_from:
 ---
 By default, when the Docker daemon terminates, it shuts down running containers.
-Starting with Docker Engine 1.12, you can configure the daemon so that
-containers remain running if the daemon becomes unavailable. This functionality
-is called _live restore_. The live restore option helps reduce container
-downtime due to daemon crashes, planned outages, or upgrades.
+You can configure the daemon so that containers remain running if the daemon
+becomes unavailable. This functionality is called _live restore_. The live restore
+option helps reduce container downtime due to daemon crashes, planned outages,
+or upgrades.
 > **Note**
 >
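As a review-side note, the live restore behavior this hunk describes is enabled through the daemon configuration file; a minimal sketch, assuming the default `/etc/docker/daemon.json` location:

```json
{
  "live-restore": true
}
```

The option can also be passed as the `--live-restore` flag when starting `dockerd` manually.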

View File

@@ -167,9 +167,8 @@ by viewing `/proc/<PID>/status` on the host machine.
 By default, each container's access to the host machine's CPU cycles is unlimited.
 You can set various constraints to limit a given container's access to the host
 machine's CPU cycles. Most users use and configure the
-[default CFS scheduler](#configure-the-default-cfs-scheduler). In Docker 1.13
-and higher, you can also configure the
-[realtime scheduler](#configure-the-realtime-scheduler).
+[default CFS scheduler](#configure-the-default-cfs-scheduler). You can also
+configure the [realtime scheduler](#configure-the-realtime-scheduler).
 ### Configure the default CFS scheduler
@@ -180,22 +179,19 @@ the container's cgroup on the host machine.
 | Option                 | Description |
 |:-----------------------|:------------|
-| `--cpus=<value>`       | Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set `--cpus="1.5"`, the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting `--cpu-period="100000"` and `--cpu-quota="150000"`. Available in Docker 1.13 and higher. |
-| `--cpu-period=<value>` | Specify the CPU CFS scheduler period, which is used alongside `--cpu-quota`. Defaults to 100000 microseconds (100 milliseconds). Most users do not change this from the default. If you use Docker 1.13 or higher, use `--cpus` instead. |
-| `--cpu-quota=<value>`  | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is limited to before throttled. As such acting as the effective ceiling. If you use Docker 1.13 or higher, use `--cpus` instead. |
+| `--cpus=<value>`       | Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set `--cpus="1.5"`, the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting `--cpu-period="100000"` and `--cpu-quota="150000"`. |
+| `--cpu-period=<value>` | Specify the CPU CFS scheduler period, which is used alongside `--cpu-quota`. Defaults to 100000 microseconds (100 milliseconds). Most users do not change this from the default. For most use-cases, `--cpus` is a more convenient alternative. |
+| `--cpu-quota=<value>`  | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is limited to before being throttled, acting as the effective ceiling. For most use-cases, `--cpus` is a more convenient alternative. |
 | `--cpuset-cpus`        | Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be `0-3` (to use the first, second, third, and fourth CPU) or `1,3` (to use the second and fourth CPU). |
 | `--cpu-shares`         | Set this flag to a value greater or less than the default of 1024 to increase or reduce the container's weight, and give it access to a greater or lesser proportion of the host machine's CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. `--cpu-shares` does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access. |
 If you have 1 CPU, each of the following commands guarantees the container at
 most 50% of the CPU every second.
-**Docker 1.13 and higher**:
 ```bash
 docker run -it --cpus=".5" ubuntu /bin/bash
 ```
-**Docker 1.12 and lower**:
+Which is equivalent to manually specifying `--cpu-period` and `--cpu-quota`:
 ```bash
 $ docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
 ```
@@ -203,8 +199,8 @@ $ docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
 ### Configure the realtime scheduler
-In Docker 1.13 and higher, you can configure your container to use the
-realtime scheduler, for tasks which cannot use the CFS scheduler. You need to
+You can configure your container to use the realtime scheduler, for tasks which
+cannot use the CFS scheduler. You need to
 [make sure the host machine's kernel is configured correctly](#configure-the-host-machines-kernel)
 before you can [configure the Docker daemon](#configure-the-docker-daemon) or
 [configure individual containers](#configure-individual-containers).
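For reviewers, the `--cpus` ceiling discussed in these hunks has a compose-file equivalent under `deploy.resources.limits` (applied with `docker stack deploy` in swarm mode). A minimal sketch, with an illustrative service name:

```yaml
version: "3.8"
services:
  app:
    image: ubuntu
    command: ["sleep", "infinity"]
    deploy:
      resources:
        limits:
          cpus: "0.5"   # same ceiling as `docker run --cpus=".5"`
```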

View File

@@ -144,9 +144,8 @@ for more examples.
 ## Prune everything
 The `docker system prune` command is a shortcut that prunes images, containers,
-and networks. In Docker 17.06.0 and earlier, volumes are also pruned. In Docker
-17.06.1 and higher, you must specify the `--volumes` flag for
-`docker system prune` to prune volumes.
+and networks. Volumes are not pruned by default, and you must specify the
+`--volumes` flag for `docker system prune` to prune volumes.
```bash
$ docker system prune
@@ -159,8 +158,7 @@ WARNING! This will remove:
 Are you sure you want to continue? [y/N] y
 ```
-If you are on Docker 17.06.1 or higher and want to also prune volumes, add
-the `--volumes` flag:
+To also prune volumes, add the `--volumes` flag:
```bash
$ docker system prune --volumes

View File

@@ -6,9 +6,8 @@ redirect_from:
 - /engine/userguide/eng-image/multistage-build/
 ---
-Multi-stage builds are a new feature requiring Docker 17.05 or higher on the
-daemon and client. Multistage builds are useful to anyone who has struggled to
-optimize Dockerfiles while keeping them easy to read and maintain.
+Multistage builds are useful to anyone who has struggled to optimize Dockerfiles
+while keeping them easy to read and maintain.
> **Acknowledgment**:
> Special thanks to [Alex Ellis](https://twitter.com/alexellisuk) for granting

View File

@@ -481,5 +481,4 @@ After you have successfully authenticated, you can access your organizations and
 [Docker CLI Reference Guide](../engine/api/index.md){: target="_blank" rel="noopener" class="_"}.
-* Check out the blog post, [Whats New in Docker 17.06 Community Edition
-(CE)](https://blog.docker.com/2017/07/whats-new-docker-17-06-community-edition-ce/){:
-target="_blank" rel="noopener" class="_"}.
+* Check out the blog post, [Whats New in Docker 17.06 Community Edition
+(CE)](https://blog.docker.com/2017/07/whats-new-docker-17-06-community-edition-ce/){: target="_blank" rel="noopener" class="_"}.

View File

@@ -88,10 +88,10 @@ and **nightly**:
 Year-month releases are made from a release branch diverged from the master
 branch. The branch is created with format `<year>.<month>`, for example
-`18.09`. The year-month name indicates the earliest possible calendar
+`19.03`. The year-month name indicates the earliest possible calendar
 month to expect the release to be generally available. All further patch
-releases are performed from that branch. For example, once `v18.09.0` is
-released, all subsequent patch releases are built from the `18.09` branch.
+releases are performed from that branch. For example, once `v19.03.0` is
+released, all subsequent patch releases are built from the `19.03` branch.
 ### Test
@@ -113,10 +113,8 @@ format:
 where the time is the commit time in UTC and the final suffix is the prefix
 of the commit hash, for example `0.0.0-20180720214833-f61e0f7`.
-These builds allow for testing from the latest code on the master branch.
-> **Note:**
-> No qualifications or guarantees are made for the nightly builds.
+These builds allow for testing from the latest code on the master branch. No
+qualifications or guarantees are made for the nightly builds.
 ## Support

View File

@@ -10,10 +10,8 @@ administrator associates an AppArmor security profile with each program. Docker
 expects to find an AppArmor policy loaded and enforced.
 Docker automatically generates and loads a default profile for containers named
-`docker-default`. On Docker versions `1.13.0` and later, the Docker binary generates
-this profile in `tmpfs` and then loads it into the kernel. On Docker versions
-earlier than `1.13.0`, this profile is generated in `/etc/apparmor.d/docker`
-instead.
+`docker-default`. The Docker binary generates this profile in `tmpfs` and then
+loads it into the kernel.
 > **Note**: This profile is used on containers, _not_ on the Docker Daemon.

View File

@@ -23,12 +23,12 @@ A custom certificate is configured by creating a directory under
 `/etc/docker/certs.d` using the same name as the registry's hostname, such as
 `localhost`. All `*.crt` files are added to this directory as CA roots.
-> **Note**:
-> As of Docker 1.13, on Linux any root certificate authorities are merged
-> with the system defaults, including as the host's root CA set. On prior
-> versions of Docker, and on Docker Enterprise Edition for Windows Server,
-> the system default certificates are only used when no custom root certificates
-> are configured.
+> **Note**
+>
+> On Linux any root certificate authorities are merged with the system defaults,
+> including the host's root CA set. If you are running Docker on Windows Server,
+> or Docker Desktop for Windows with Windows containers, the system default
+> certificates are only used when no custom root certificates are configured.
 The presence of one or more `<filename>.key/cert` pairs indicates to Docker
 that there are custom certificates required for access to the desired

View File

@@ -86,12 +86,13 @@ The following image depicts the various signing keys and their relationships:
 ![Content Trust components](images/trust_components.png)
-> **WARNING**:
+> **WARNING**
+>
 > Loss of the root key is **very difficult** to recover from.
->Correcting this loss requires intervention from [Docker
->Support](https://support.docker.com) to reset the repository state. This loss
->also requires **manual intervention** from every consumer that used a signed
->tag from this repository prior to the loss.
+> Correcting this loss requires intervention from [Docker
+> Support](https://support.docker.com) to reset the repository state. This loss
+> also requires **manual intervention** from every consumer that used a signed
+> tag from this repository prior to the loss.
 {:.warning}
You should back up the root key somewhere safe. Given that it is only required
@@ -101,9 +102,6 @@ read how to [manage keys for DCT](trust_key_mng.md).
 ## Signing Images with Docker Content Trust
-> **Note:**
-> This applies to Docker Community Engine 17.12 and newer.
 Within the Docker CLI we can sign and push a container image with the
 `$ docker trust` command syntax. This is built on top of the Notary feature
 set, more information on Notary can be found [here](/notary/getting_started/).

View File

@@ -239,9 +239,8 @@ you demote or remove a manager.
 ## Back up the swarm
 Docker manager nodes store the swarm state and manager logs in the
-`/var/lib/docker/swarm/` directory. In 1.13 and higher, this data includes the
-keys used to encrypt the Raft logs. Without these keys, you cannot restore the
-swarm.
+`/var/lib/docker/swarm/` directory. This data includes the keys used to encrypt
+the Raft logs. Without these keys, you cannot restore the swarm.
 You can back up the swarm using any manager. Use the following procedure.
@@ -377,11 +376,10 @@ balance across the swarm. When new tasks start, or when a node with running
 tasks becomes unavailable, those tasks are given to less busy nodes. The goal
 is eventual balance, with minimal disruption to the end user.
-In Docker 1.13 and higher, you can use the `--force` or `-f` flag with the
-`docker service update` command to force the service to redistribute its tasks
-across the available worker nodes. This causes the service tasks to restart.
-Client applications may be disrupted. If you have configured it, your service
-uses a [rolling update](swarm-tutorial/rolling-update.md).
+You can use the `--force` or `-f` flag with the `docker service update` command
+to force the service to redistribute its tasks across the available worker nodes.
+This causes the service tasks to restart. Client applications may be disrupted.
+If you have configured it, your service uses a [rolling update](swarm-tutorial/rolling-update.md).
If you use an earlier version and you want to achieve an even balance of load
across workers and don't mind disrupting running tasks, you can force your swarm

View File

@@ -6,11 +6,10 @@ keywords: swarm, configuration, configs
 ## About configs
-Docker 17.06 introduces swarm service configs, which allow you to store
-non-sensitive information, such as configuration files, outside a service's
-image or running containers. This allows you to keep your images as generic
-as possible, without the need to bind-mount configuration files into the
-containers or use environment variables.
+Docker swarm service configs allow you to store non-sensitive information,
+such as configuration files, outside a service's image or running containers.
+This allows you to keep your images as generic as possible, without the need to
+bind-mount configuration files into the containers or use environment variables.
 Configs operate in a similar way to [secrets](secrets.md), except that they are
 not encrypted at rest and are mounted directly into the container's filesystem
@@ -27,9 +26,9 @@ Configs are supported on both Linux and Windows services.
 ### Windows support
-Docker 17.06 and higher include support for configs on Windows containers.
-Where there are differences in the implementations, they are called out in the
-examples below. Keep the following notable differences in mind:
+Docker includes support for configs on Windows containers, but there are differences
+in the implementations, which are called out in the examples below. Keep the
+following notable differences in mind:
 - Config files with custom targets are not directly bind-mounted into Windows
   containers, since Windows does not support non-directory file bind-mounts.
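As a review-side illustration of the config mechanism this file documents, a config with a custom target is typically declared like this in a compose file (a sketch; the `site_conf` name and paths are illustrative, not from the diff):

```yaml
version: "3.8"
services:
  web:
    image: nginx:latest
    configs:
      - source: site_conf
        target: /etc/nginx/conf.d/site.conf   # custom target path inside the container
configs:
  site_conf:
    file: ./site.conf   # contents come from a local file at deploy time
```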
@@ -230,15 +229,15 @@ real-world example, continue to
 ### Simple example: Use configs in a Windows service
 This is a very simple example which shows how to use configs with a Microsoft
-IIS service running on Docker 17.06 EE on Microsoft Windows Server 2016 or Docker
-for Windows 17.06 CE on Microsoft Windows 10. It stores the webpage in a config.
+IIS service running on Docker for Windows running Windows containers on
+Microsoft Windows 10. It is a naive example that stores the webpage in a config.
 This example assumes that you have PowerShell installed.
 1. Save the following into a new file `index.html`.
 ```html
-<html>
+<html lang="en">
 <head><title>Hello Docker</title></head>
 <body>
 <p>Hello Docker! You have deployed a HTML page.</p>
@@ -287,7 +286,7 @@ name as its argument. The template will be rendered when container is created.
 1. Save the following into a new file `index.html.tmpl`.
 ```html
-<html>
+<html lang="en">
 <head><title>Hello Docker</title></head>
 <body>
 <p>Hello {% raw %}{{ env "HELLO" }}{% endraw %}! I'm service {% raw %}{{ .Service.Name }}{% endraw %}.</p>
@@ -320,7 +319,7 @@ name as its argument. The template will be rendered when container is created.
 ```bash
 $ curl http://0.0.0.0:3000
-<html>
+<html lang="en">
 <head><title>Hello Docker</title></head>
 <body>
 <p>Hello Docker! I'm service hello-template.</p>

View File

@@ -79,18 +79,9 @@ happen in sequence:
    new node certificates. This ensures that nodes that still trust the old root
    CA can still validate a certificate signed by the new CA.
-2. In Docker 17.06 and higher, Docker also tells all nodes to immediately
-   renew their TLS certificates. This process may take several minutes,
-   depending on the number of nodes in the swarm.
-   > **Note**: If your swarm has nodes with different Docker versions, the
-   > following two things are true:
-   > - Only a manager that is running as the leader **and** running Docker 17.06
-   >   or higher tells nodes to renew their TLS certificates.
-   > - Only nodes running Docker 17.06 or higher obey this directive.
-   >
-   > For the most predictable behavior, ensure that all swarm nodes are running
-   > Docker 17.06 or higher.
+2. Docker also tells all nodes to immediately renew their TLS certificates.
+   This process may take several minutes, depending on the number of nodes in
+   the swarm.
 3. After every node in the swarm has a new TLS certificate signed by the new CA,
    Docker forgets about the old CA certificate and key material, and tells

View File

@@ -98,8 +98,8 @@ The output shows the `<CONTAINER-PORT>` (labeled `TargetPort`) from the container
 By default, when you publish a port, it is a TCP port. You can
 specifically publish a UDP port instead of or in addition to a TCP port. When
 you publish both TCP and UDP ports, if you omit the protocol specifier,
-the port is published as a TCP port. If you use the longer syntax (recommended
-for Docker 1.13 and higher), set the `protocol` key to either `tcp` or `udp`.
+the port is published as a TCP port. If you use the longer syntax (recommended),
+set the `protocol` key to either `tcp` or `udp`.
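For context, the long syntax referenced in this hunk looks like the following in a compose file (a sketch; the port numbers are illustrative):

```yaml
ports:
  - target: 53
    published: 53
    protocol: udp   # set explicitly; omitting it publishes the port as TCP
    mode: ingress
```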
#### TCP only

View File

@@ -18,18 +18,11 @@ goes down, the remaining manager nodes elect a new leader and resume
 orchestration and maintenance of the swarm state. By default, manager nodes
 also run tasks.
-Before you add nodes to a swarm you must install Docker Engine 1.12 or later on
-the host machine.
 The Docker Engine joins the swarm depending on the **join-token** you provide to
 the `docker swarm join` command. The node only uses the token at join time. If
 you subsequently rotate the token, it doesn't affect existing swarm nodes. Refer
 to [Run Docker Engine in swarm mode](swarm-mode.md#view-the-join-command-or-update-a-swarm-join-token).
 > **Note**: Docker engine allows a non-FIPS node to join a FIPS-enabled swarm cluster.
 > While a mixed FIPS environment makes upgrading or changing status easier, Docker recommends not running a mixed FIPS environment in production.
 ## Join as a worker node
 To retrieve the join command including the join token for worker nodes, run the

View File

@@ -189,11 +189,13 @@ respectively.
 If your swarm service relies on one or more
 [plugins](/engine/extend/plugin_api/), these plugins need to be available on
 every node where the service could potentially be deployed. You can manually
-install the plugin on each node or script the installation. In Docker 17.07 and
-higher, you can also deploy the plugin in a similar way as a global service
-using the Docker API, by specifying a `PluginSpec` instead of a `ContainerSpec`.
+install the plugin on each node or script the installation. You can also deploy
+the plugin in a similar way as a global service using the Docker API, by specifying
+a `PluginSpec` instead of a `ContainerSpec`.
-> **Note**: There is currently no way to deploy a plugin to a swarm using the
+> **Note**
+>
+> There is currently no way to deploy a plugin to a swarm using the
 > Docker CLI or Docker Compose. In addition, it is not possible to install
 > plugins from a private repository.

View File

@@ -34,8 +34,8 @@ The following three network concepts are important to swarm services:
   `ingress` network.
   The `ingress` network is created automatically when you initialize or join a
-  swarm. Most users do not need to customize its configuration, but Docker 17.05
-  and higher allows you to do so.
+  swarm. Most users do not need to customize its configuration, but Docker allows
+  you to do so.
 - The **docker_gwbridge** is a bridge network that connects the overlay
   networks (including the `ingress` network) to an individual Docker daemon's
@@ -288,8 +288,8 @@ round robin (DNSRR). You can configure this per service.
 ## Customize the ingress network
-Most users never need to configure the `ingress` network, but Docker 17.05 and
-higher allow you to do so. This can be useful if the automatically-chosen subnet
+Most users never need to configure the `ingress` network, but Docker allows you
+to do so. This can be useful if the automatically-chosen subnet
 conflicts with one that already exists on your network, or you need to customize
 other low-level network settings such as the MTU.
@@ -382,7 +382,7 @@ By default, all swarm traffic is sent over the same interface, including control
 and management traffic for maintaining the swarm itself and data traffic to and
 from the service containers.
-In Docker 17.06 and higher, it is possible to separate this traffic by passing
+You can separate this traffic by passing
 the `--data-path-addr` flag when initializing or joining the swarm. If there are
 multiple interfaces, `--advertise-addr` must be specified explicitly, and
 `--data-path-addr` defaults to `--advertise-addr` if not specified. Traffic about

View File

@@ -9,12 +9,11 @@ keywords: swarm, secrets, credentials, sensitive strings, sensitive data, security
 In terms of Docker Swarm services, a _secret_ is a blob of data, such as a
 password, SSH private key, SSL certificate, or another piece of data that should
 not be transmitted over a network or stored unencrypted in a Dockerfile or in
-your application's source code. In Docker 1.13 and higher, you can use Docker
-_secrets_ to centrally manage this data and securely transmit it to only those
-containers that need access to it. Secrets are encrypted during transit and at
-rest in a Docker swarm. A given secret is only accessible to those services
-which have been granted explicit access to it, and only while those service
-tasks are running.
+your application's source code. You can use Docker _secrets_ to centrally manage
+this data and securely transmit it to only those containers that need access to
+it. Secrets are encrypted during transit and at rest in a Docker swarm. A given
+secret is only accessible to those services which have been granted explicit
+access to it, and only while those service tasks are running.
 You can use secrets to manage any sensitive data which a container needs at
 runtime but you don't want to store in the image or in source control, such as:
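As an aside for reviewers, the secret mechanism this hunk describes is typically wired up like this in a compose file (a sketch; the `db_password` name and local file are illustrative, and `POSTGRES_PASSWORD_FILE` is the convention the official `postgres` image supports):

```yaml
version: "3.8"
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password   # read from the mounted secret
    secrets:
      - db_password
secrets:
  db_password:
    file: ./db_password.txt   # local file supplies the secret at deploy time
```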
@@ -39,14 +38,14 @@ containers only need to know the name of the secret to function in all
 three environments.
 You can also use secrets to manage non-sensitive data, such as configuration
-files. However, Docker 17.06 and higher support the use of [configs](configs.md)
+files. However, Docker supports the use of [configs](configs.md)
 for storing non-sensitive data. Configs are mounted into the container's
 filesystem directly, without the use of a RAM disk.
 ### Windows support
-Docker 17.06 and higher include support for secrets on Windows containers.
-Where there are differences in the implementations, they are called out in the
+Docker includes support for secrets on Windows containers. Where there are
+differences in the implementations, they are called out in the
 examples below. Keep the following notable differences in mind:
 - Microsoft Windows has no built-in driver for managing RAM disks, so within
- Microsoft Windows has no built-in driver for managing RAM disks, so within
@@ -81,20 +80,12 @@ encrypted. The entire Raft log is replicated across the other managers, ensuring
 the same high availability guarantees for secrets as for the rest of the swarm
 management data.
-> **Warning**: Raft data is encrypted in Docker 1.13 and higher. If any of your
-> Swarm managers run an earlier version, and one of those managers becomes the
-> manager of the swarm, the secrets are stored unencrypted in that node's
-> Raft logs. Before adding any secrets, update all of your manager nodes to
-> Docker 1.13 or higher to prevent secrets from being written to plain-text Raft
-> logs.
-{:.warning}
 When you grant a newly-created or running service access to a secret, the
 decrypted secret is mounted into the container in an in-memory filesystem. The
 location of the mount point within the container defaults to
 `/run/secrets/<secret_name>` in Linux containers, or
-`C:\ProgramData\Docker\secrets` in Windows containers. You can specify a custom
-location in Docker 17.06 and higher.
+`C:\ProgramData\Docker\secrets` in Windows containers. You can also specify a
+custom location.
 You can update a service to grant it access to additional secrets or revoke its
 access to a given secret at any time.
@@ -140,8 +131,7 @@ a similar way, see
 > **Note**: These examples use a single-Engine swarm and unscaled services for
 > simplicity. The examples use Linux containers, but Windows containers also
-> support secrets in Docker 17.06 and higher.
-> See [Windows support](#windows-support).
+> support secrets. See [Windows support](#windows-support).
### Defining and using secrets in compose files
@@ -272,16 +262,15 @@ real-world example, continue to
 ### Simple example: Use secrets in a Windows service
 This is a very simple example which shows how to use secrets with a Microsoft
-IIS service running on Docker 17.06 EE on Microsoft Windows Server 2016 or Docker
-Desktop for Mac 17.06 on Microsoft Windows 10. It is a naive example that stores the
-webpage in a secret.
+IIS service running on Docker for Windows running Windows containers on
+Microsoft Windows 10. It is a naive example that stores the webpage in a secret.
 This example assumes that you have PowerShell installed.
 1. Save the following into a new file `index.html`.
 ```html
-<html>
+<html lang="en">
 <head><title>Hello Docker</title></head>
 <body>
 <p>Hello Docker! You have deployed a HTML page.</p>
@@ -311,8 +300,8 @@ This example assumes that you have PowerShell installed.
 ```
 > **Note**: There is technically no reason to use secrets for this
-> example. With Docker 17.06 and higher, [configs](configs.md) are
-> a better fit. This example is for illustration only.
+> example; [configs](configs.md) are a better fit. This example is
+> for illustration only.
5. Access the IIS service at `http://localhost:8000/`. It should serve
the HTML content from the first step.
@ -480,48 +469,46 @@ generate the site key and certificate, name the files `site.key` and
> This example does not require a custom image. It puts the `site.conf`
> into place and runs the container all in one step.
In Docker 17.05 and earlier, secrets are always located within the
`/run/secrets/` directory. Docker 17.06 and higher allow you to specify a
custom location for a secret within the container. The two examples below
illustrate the difference. The older version of this command requires you to
create a symbolic link to the true location of the `site.conf` file so that
Nginx can read it, but the newer version does not require this. The older
example is preserved so that you can see the difference.
Secrets are located within the `/run/secrets/` directory in the container
by default, which may require extra steps in the container to make the
secret available in a different path. The example below creates a symbolic
link to the true location of the `site.conf` file so that Nginx can read it:
- **Docker 17.06 and higher**:
```bash
$ docker service create \
--name nginx \
--secret site.key \
--secret site.crt \
--secret site.conf \
--publish published=3000,target=443 \
nginx:latest \
sh -c "ln -s /run/secrets/site.conf /etc/nginx/conf.d/site.conf && exec nginx -g 'daemon off;'"
```
Instead of creating symlinks, secrets allow you to specify a custom location
using the `target` option. The example below illustrates how the `site.conf`
secret is made available at `/etc/nginx/conf.d/site.conf` inside the container
without the use of symbolic links:
```bash
$ docker service create \
--name nginx \
--secret site.key \
--secret site.crt \
--secret source=site.conf,target=/etc/nginx/conf.d/site.conf \
--publish published=3000,target=443 \
nginx:latest \
sh -c "exec nginx -g 'daemon off;'"
```
The `site.key` and `site.crt` secrets use the short-hand syntax, without a
custom `target` location set. The short syntax mounts the secrets in `/run/secrets/`
with the same name as the secret. Within the running containers, the following
three files now exist:
- `/run/secrets/site.key`
- `/run/secrets/site.crt`
- `/etc/nginx/conf.d/site.conf`
5. Verify that the Nginx service is running.
their values from a file (`WORDPRESS_DB_PASSWORD_FILE`). This strategy ensures
that backward compatibility is preserved, while allowing your container to read
the information from a Docker-managed secret instead of being passed directly.
> **Note**
>
> Docker secrets do not set environment variables directly. This was a
> conscious decision, because environment variables can unintentionally be leaked
> between containers (for instance, if you use `--link`).
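The `*_FILE` convention described above can be sketched in plain shell. Everything
here is illustrative: the temporary directory stands in for `/run/secrets/`, and the
variable names are hypothetical, not part of any real image.

```shell
# Sketch of the *_FILE convention an image's entrypoint script can follow.
# A stand-in directory takes the place of /run/secrets/.
SECRET_DIR=$(mktemp -d)
printf 'example-password' > "$SECRET_DIR/db_password"

# A real container would be started with something like
# DB_PASSWORD_FILE=/run/secrets/db_password in its environment.
DB_PASSWORD_FILE="$SECRET_DIR/db_password"

# Read the secret from the file at startup instead of taking the value
# itself from the environment.
DB_PASSWORD="$(cat "$DB_PASSWORD_FILE")"
echo "$DB_PASSWORD"
```

The secret value itself never has to be passed on the command line or stored in
the container's environment, which is the leak the note above warns about.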
## Use Secrets in Compose
```yaml
version: "{{ site.compose_file_v3 }}"
services:
db:
Each tag represents a digest, similar to a Git hash. Some tags, such as
`latest`, are updated often to point to a new digest. Others, such as
`ubuntu:16.04`, represent a released software version and are not expected to
update to point to a new digest often if at all. When you create a service, it
is constrained to create tasks using a specific digest of an image until you
update the service using `service update` with the `--image` flag.
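As an illustration of what this pinning means, a stack file can also reference an
image by an explicit digest rather than a mutable tag. The digest below is a
placeholder, not a real value:

```yaml
version: "{{ site.compose_file_v3 }}"
services:
  web:
    # Placeholder digest: `docker service create` resolves a tag such as
    # nginx:latest to a concrete sha256 digest and pins tasks to it.
    image: nginx@sha256:<digest>
```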
When you run `service update` with the `--image` flag, the swarm manager queries
Docker Hub or your private Docker registry for the digest the tag currently
outside the swarm in two ways:
choice for many types of services.
- [You can publish a service task's port directly on the swarm node](#publish-a-services-ports-directly-on-the-swarm-node)
where that service is running. This bypasses the routing mesh and provides the
maximum flexibility, including the ability for you to develop your own routing
framework. However, you are responsible for keeping track of where each task is
running and routing requests to the tasks, and load-balancing across the nodes.
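A sketch of this second approach in Compose long port syntax; the service name and
port numbers are illustrative:

```yaml
version: "{{ site.compose_file_v3 }}"
services:
  web:
    image: nginx:latest
    ports:
      # mode: host publishes the port directly on each node running a task,
      # bypassing the routing mesh.
      - target: 80
        published: 8080
        protocol: tcp
        mode: host
```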
Keep reading for more information and use cases for each of these methods.
### Customize a service's isolation mode
Docker allows you to specify a swarm service's isolation
mode. **This setting applies to Windows hosts only and is ignored for Linux
hosts.** The isolation mode can be one of the following:
$ docker service update \
my_web
```
You can configure a service to roll back automatically if a service update fails
to deploy. See [Automatically roll back if an update fails](#automatically-roll-back-if-an-update-fails).
Manual rollback is handled at the server side, which allows manually-initiated
rollbacks to respect the new rollback parameters. Note that `--rollback` cannot
be used in conjunction with other flags to `docker service update`.
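The rollback behavior can also be expressed declaratively in a stack file's
`deploy` section; the values below are arbitrary examples:

```yaml
version: "{{ site.compose_file_v3 }}"
services:
  web:
    image: nginx:latest
    deploy:
      update_config:
        # Roll back automatically if an update fails to converge.
        failure_action: rollback
      rollback_config:
        parallelism: 1
        delay: 5s
```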
### Automatically roll back if an update fails
To run through this tutorial, you need:
1. A Docker Engine running in [swarm mode](swarm-mode.md).
If you're not familiar with swarm mode, you might want to read
[Swarm mode key concepts](key-concepts.md)
and [How services work](how-swarm-mode-works/services.md).
> **Note**
>
> If you're trying things out on a local development environment,
> you can put your engine into swarm mode with `docker swarm init`.
>
> If you've already got a multi-node swarm running, keep in mind that all
> `docker stack` and `docker service` commands must be run from a manager
> node.
2. A current version of [Docker Compose](../../compose/install.md).
## Set up a Docker registry
5. Create a file called `docker-compose.yml` and paste this in:
```yaml
version: "{{ site.compose_file_v3 }}"
services:
web:
If you are brand new to Docker, see [About Docker Engine](../../index.md).
To run this tutorial, you need the following:
* [three Linux hosts which can communicate over a network, with Docker installed](#three-networked-host-machines)
* [the IP address of the manager machine](#the-ip-address-of-the-manager-machine)
* [open ports between the hosts](#open-protocols-and-ports-between-the-hosts)
as well, in which case you need only one host. Multi-node commands do not
work, but you can initialize a swarm, create services, and scale them.
#### Install Docker Engine on Linux machines
Alternatively, install the latest [Docker Desktop for Mac](../../../docker-for-m
computer. You can test both single-node and multi-node swarm from this computer,
but you need to use Docker Machine to test the multi-node scenarios.
* You can use Docker Desktop for Mac or Windows to test _single-node_ features
of swarm mode, including initializing a swarm with a single node, creating
services, and scaling services.
* Currently, you cannot use Docker Desktop for Mac or Docker Desktop for Windows
alone to test a _multi-node_ swarm, but many examples are applicable to a
single-node Swarm setup.
### The IP address of the manager machine
keywords: swarm, manager, lock, unlock, autolock, encryption
title: Lock your swarm to protect its encryption key
---
The Raft logs used by swarm managers are encrypted on disk by default. This at-rest
encryption protects your service's configuration and data from attackers who gain
access to the encrypted Raft logs. One of the reasons this feature was introduced
was in support of the [Docker secrets](secrets.md) feature.
When Docker restarts, both the TLS key used to encrypt communication among swarm
nodes, and the key used to encrypt and decrypt Raft logs on disk, are loaded
into each manager node's memory. Docker has the ability to protect the mutual TLS
encryption key and the key used to encrypt and decrypt Raft logs at rest, by
allowing you to take ownership of these keys and to require manual unlocking of
your managers. This feature is called _autolock_.
When Docker restarts, you must
[unlock the swarm](#unlock-a-swarm) first, using a
$ docker-machine scp -r /Users/<username>/webapp MACHINE-NAME:/home/ubuntu/webap
Then write a docker-compose file that bind mounts it in:
```yaml
version: "{{ site.compose_file_v3 }}"
services:
webapp:
image: alpine
exist by default, and provide core networking functionality:
[bridge networks](bridge.md).
- `host`: For standalone containers, remove network isolation between the
container and the Docker host, and use the host's networking directly. See
[use the host network](host.md).
- `overlay`: Overlay networks connect multiple Docker daemons together and
- [Communicate between a container and a swarm service](#communicate-between-a-container-and-a-swarm-service)
sets up communication between a standalone container and a swarm service,
using an attachable overlay network.
## Prerequisites
These require you to have at least a single-node swarm, which means that
you have started Docker and run `docker swarm init` on the host. You can run
the examples on a multi-node swarm as well.
## Use the default overlay network
In this example, you start an `alpine` service and examine the characteristics
### Prerequisites
This tutorial requires three physical or virtual Docker hosts which can all
communicate with one another. This tutorial assumes that the three hosts are
running on the same network with no firewall involved.
These hosts will be referred to as `manager`, `worker-1`, and `worker-2`. The
`manager` host will function as both a manager and a worker, which means it can
### Prerequisites
For this test, you need two different Docker hosts that can communicate with
each other. Each host must have the following ports open between the two Docker
hosts:
- TCP port 2377
- TCP and UDP port 7946
## Communicate between a container and a swarm service
### Walkthrough
In this example, you start two different `alpine` containers on the same Docker
host and do some tests to understand how they communicate with each other. You
need to have Docker installed and running.
Remember, the default `bridge` network is not recommended for production. To
learn about user-defined bridge networks, continue to the
[next tutorial](network-tutorial-standalone.md#use-user-defined-bridge-networks).
## Other networking tutorials
running in production.
Although [overlay networks](overlay.md) are generally used for swarm services,
you can also use an overlay network for standalone containers. That's covered as
part of the [tutorial on using overlay networks](network-tutorial-overlay.md#use-an-overlay-network-for-standalone-containers).
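In a stack file, an overlay network that standalone containers can later join is
declared with the `attachable` option; the network and service names here are
illustrative:

```yaml
version: "{{ site.compose_file_v3 }}"
networks:
  my-attachable-overlay:
    driver: overlay
    attachable: true
services:
  web:
    image: nginx:latest
    networks:
      - my-attachable-overlay
```

A standalone container could then join the network with
`docker run --network my-attachable-overlay alpine`.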
## Use the default bridge network
### Customize the default ingress network
Most users never need to configure the `ingress` network, but Docker allows you
to do so. This can be useful if the automatically-chosen subnet conflicts with
one that already exists on your network, or you need to customize other low-level
network settings such as the MTU.
Customizing the `ingress` network involves removing and recreating it. This is
usually done before you create any services in the swarm. If you have existing
pushed, but are always fetched from their authorized location. This is fine
for internet-connected hosts, but not in an air-gapped set-up.
You can configure the Docker daemon to allow pushing non-distributable layers
to private registries.
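A minimal sketch of the daemon configuration involved, typically placed in
`/etc/docker/daemon.json`; the registry address is a placeholder:

```json
{
  "allow-nondistributable-artifacts": ["myregistry.example.com:5000"]
}
```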
**This is only useful in air-gapped set-ups in the presence of
non-distributable images, or in extremely bandwidth-limited situations.**
You are responsible for ensuring that you are in compliance with the terms of
## Choose the -v or --mount flag
In general, `--mount` is more explicit and verbose. The biggest difference is that
the `-v` syntax combines all the options together in one field, while the `--mount`
syntax separates them. Here is a comparison of the syntax for each flag.
> **Tip**: New users should use the `--mount` syntax. Experienced users may
Bind mounts and volumes can both be mounted into containers using the `-v` or
`--volume` flag, but the syntax for each is slightly different. For `tmpfs`
mounts, you can use the `--tmpfs` flag. We recommend using the `--mount` flag
for both containers and services, for bind mounts, volumes, or `tmpfs` mounts,
as the syntax is more clear.
## Good use cases for volumes
#### Allow Docker to configure direct-lvm mode
Docker can manage the block device for you, simplifying configuration of `direct-lvm`
mode. **This is appropriate for fresh Docker setups only.** You can only use a
single block device. If you need to use multiple block devices,
[configure direct-lvm mode manually](#configure-direct-lvm-mode-manually) instead.
The following new configuration options are available:
| Option | Description | Required? | Default | Example |
|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------|:--------|:-----------------------------------|
## Prerequisites
OverlayFS is the recommended storage driver, and supported if you meet the following
prerequisites:
- Version 4.0 or higher of the Linux kernel, or RHEL or CentOS using
version 3.10.0-514 of the kernel or higher. If you use an older kernel, you need
to use the `overlay` driver, which is not recommended.
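For reference, the storage driver can be selected explicitly in
`/etc/docker/daemon.json`, although most installations do not need this because
`overlay2` is the default on supported kernels:

```json
{
  "storage-driver": "overlay2"
}
```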
Docker supports the following storage drivers:
* `overlay2` is the preferred storage driver, for all currently supported
Linux distributions, and requires no extra configuration.
* `aufs` was the preferred storage driver for Docker 18.06 and older, when
running on Ubuntu 14.04 on kernel 3.13 which had no support for `overlay2`.
* `devicemapper` is supported, but requires `direct-lvm` for production
environments, because `loopback-lvm`, while zero-configuration, has very
poor performance. `devicemapper` was the recommended storage driver for
After you have narrowed down which storage drivers you can choose from, your cho
characteristics of your workload and the level of stability you need. See [Other considerations](#other-considerations)
for help in making the final decision.
> ***NOTE***: Your choice may be limited by your operating system and distribution.
> For instance, `aufs` is only supported on Ubuntu and Debian, and may require extra packages
> to be installed, while `btrfs` is only supported on SLES, which is only supported with Docker
> Enterprise. See [Support storage drivers per Linux distribution](#supported-storage-drivers-per-linux-distribution)
configurations work on recent versions of the Linux distribution:
| Docker Engine - Community on CentOS | `overlay2` | `overlay`¹, `devicemapper`², `zfs`, `vfs` |
| Docker Engine - Community on Fedora | `overlay2` | `overlay`¹, `devicemapper`², `zfs`, `vfs` |
¹) The `overlay` storage driver is deprecated, and will be removed in a future
release. It is recommended that users of the `overlay` storage driver migrate to `overlay2`.
²) The `devicemapper` storage driver is deprecated, and will be removed in a future
release. It is recommended that users of the `devicemapper` storage driver migrate
to `overlay2`.
When possible, `overlay2` is the recommended storage driver. When installing
```
If you want to set a quota to control the maximum size the VFS storage
driver can use, set the `size` option on the `storage-opts` key.
```json
{
and push existing images to Docker Hub or a private repository, so that you
do not need to re-create them later.
> **Note**
>
> There is no need to use `MountFlags=slave` with Docker Engine 18.09 or
> later because `dockerd` and `containerd` are in different mount namespaces.
## Configure Docker with the `zfs` storage driver
## Choose the --tmpfs or --mount flag
In general, `--mount` is more explicit and verbose. The biggest difference is
that the `--tmpfs` flag does not support any configurable options.
- **`--tmpfs`**: Mounts a `tmpfs` mount without allowing you to specify any
configurable options, and can only be used with standalone containers.
## Choose the -v or --mount flag
In general, `--mount` is more explicit and verbose. The biggest difference is that
the `-v` syntax combines all the options together in one field, while the `--mount`
syntax separates them. Here is a comparison of the syntax for each flag.
> New users should try the `--mount` syntax, which is simpler than the `--volume` syntax.
>
> If you need to specify volume driver options, you must use `--mount`.
- **`-v` or `--volume`**: Consists of three fields, separated by colon characters
A single docker compose service with a volume looks like this:
```yaml
version: "{{ site.compose_file_v3 }}"
services:
frontend:
image: node:lts
A volume may be created directly outside of compose with `docker volume create` and
then referenced inside `docker-compose.yml` as follows:
```yaml
version: "{{ site.compose_file_v3 }}"
services:
frontend:
image: node:lts