more swarm freshness (#18937)

* more swarm freshness

* fix build

* fix build
This commit is contained in:
Allie Sadler 2023-12-18 10:11:02 +00:00 committed by GitHub
parent a1015ad58d
commit 937041314e
11 changed files with 197 additions and 148 deletions


@ -6,7 +6,7 @@ aliases:
- /engine/swarm/manager-administration-guide/
---
When you run a swarm of Docker Engines, **manager nodes** are the key components
When you run a swarm of Docker Engines, manager nodes are the key components
for managing the swarm and storing the swarm state. It is important to
understand some key features of manager nodes to properly deploy and
maintain the swarm.
@ -123,7 +123,7 @@ available to process requests and rebalance workloads.
By default, manager nodes also act as worker nodes. This means the scheduler
can assign tasks to a manager node. For small and non-critical swarms,
assigning tasks to managers is relatively low-risk as long as you schedule
services using **resource constraints** for *cpu* and *memory*.
services using resource constraints for cpu and memory.
However, because manager nodes use the Raft consensus algorithm to replicate data
in a consistent way, they are sensitive to resource starvation. You should
@ -254,7 +254,7 @@ You can back up the swarm using any manager. Use the following procedure.
results are less predictable when restoring. While the manager is down,
other nodes continue generating swarm data that is not part of this backup.
> Note
> **Note**
>
> Be sure to maintain the quorum of swarm managers. During the
> time that a manager is shut down, your swarm is more vulnerable to
@ -285,7 +285,7 @@ restore the data to a new swarm.
3. Restore the `/var/lib/docker/swarm` directory with the contents of the
backup.
> Note
> **Note**
>
> The new node uses the same encryption key for on-disk
> storage as the old one. It is not possible to change the on-disk storage
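The restore steps above can be sketched as follows. This is an illustrative outline only; it assumes a systemd-based host and that the backup is a tar archive of the stopped manager's `/var/lib/docker/swarm` directory:

```console
$ systemctl stop docker
$ rm -rf /var/lib/docker/swarm
$ tar -xzf swarm-backup.tar.gz -C /
$ systemctl start docker
$ docker swarm init --force-new-cluster
```

The `--force-new-cluster` flag re-initializes a single-manager swarm from the restored state, after which you can add managers and workers back.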


@ -18,7 +18,9 @@ any time, and services can share a config. You can even use configs in
conjunction with environment variables or labels, for maximum flexibility.
Config values can be generic strings or binary content (up to 500 kb in size).
> **Note**: Docker configs are only available to swarm services, not to
> **Note**
>
> Docker configs are only available to swarm services, not to
> standalone containers. To use this feature, consider adapting your container
> to run as a service with a scale of 1.
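A minimal sketch of that pattern, adapting a container to run as a service so it can use a config (the config and service names are illustrative):

```console
$ echo "This is a config" | docker config create my-config -
$ docker service create --name redis --config my-config redis:alpine
$ docker exec $(docker ps --filter name=redis -q) cat /my-config
```

By default, the config is mounted into the service's containers at `/<config-name>`.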
@ -121,7 +123,9 @@ Use these links to read about specific commands, or continue to the
This section includes graduated examples which illustrate how to use
Docker configs.
> **Note**: These examples use a single-Engine swarm and unscaled services for
> **Note**
>
> These examples use a single-engine swarm and unscaled services for
> simplicity. The examples use Linux containers, but Windows containers also
> support configs.


@ -5,10 +5,10 @@ title: Join nodes to a swarm
---
When you first create a swarm, you place a single Docker Engine into
swarm mode. To take full advantage of swarm mode you can add nodes to the swarm:
Swarm mode. To take full advantage of Swarm mode you can add nodes to the swarm:
* Adding worker nodes increases capacity. When you deploy a service to a swarm,
the Engine schedules tasks on available nodes whether they are worker nodes or
the engine schedules tasks on available nodes whether they are worker nodes or
manager nodes. When you add workers to your swarm, you increase the scale of
the swarm to handle tasks without affecting the manager Raft consensus.
* Manager nodes increase fault-tolerance. Manager nodes perform the
@ -18,7 +18,7 @@ goes down, the remaining manager nodes elect a new leader and resume
orchestration and maintenance of the swarm state. By default, manager nodes
also run tasks.
The Docker Engine joins the swarm depending on the **join-token** you provide to
Docker Engine joins the swarm depending on the **join-token** you provide to
the `docker swarm join` command. The node only uses the token at join time. If
you subsequently rotate the token, it doesn't affect existing swarm nodes. Refer
to [Run Docker Engine in swarm mode](swarm-mode.md#view-the-join-command-or-update-a-swarm-join-token).
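To view the current join command, or to invalidate a leaked token, you might run the following on a manager node (a sketch of the commands referenced above):

```console
$ docker swarm join-token worker            # print the join command for workers
$ docker swarm join-token --rotate worker   # invalidate the old worker token
```

Rotating a token doesn't affect nodes that already joined with it.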
@ -50,23 +50,23 @@ This node joined a swarm as a worker.
The `docker swarm join` command does the following:
* switches the Docker Engine on the current node into swarm mode.
* requests a TLS certificate from the manager.
* names the node with the machine hostname
* joins the current node to the swarm at the manager listen address based upon the swarm token.
* sets the current node to `Active` availability, meaning it can receive tasks
* Switches Docker Engine on the current node into Swarm mode.
* Requests a TLS certificate from the manager.
* Names the node with the machine hostname.
* Joins the current node to the swarm at the manager listen address based upon the swarm token.
* Sets the current node to `Active` availability, meaning it can receive tasks
from the scheduler.
* extends the `ingress` overlay network to the current node.
* Extends the `ingress` overlay network to the current node.
## Join as a manager node
When you run `docker swarm join` and pass the manager token, the Docker Engine
switches into swarm mode the same as for workers. Manager nodes also participate
When you run `docker swarm join` and pass the manager token, Docker Engine
switches into Swarm mode the same as for workers. Manager nodes also participate
in the Raft consensus. The new nodes should be `Reachable`, but the existing
manager remains the swarm `Leader`.
Docker recommends three or five manager nodes per cluster to implement high
availability. Because swarm mode manager nodes share data using Raft, there
availability. Because Swarm-mode manager nodes share data using Raft, there
must be an odd number of managers. The swarm can continue to function as long
as a quorum of more than half of the manager nodes is available.
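The quorum arithmetic above can be sketched as a quick calculation: a swarm of N managers tolerates the loss of at most (N-1)/2 of them before losing quorum.

```shell
# Raft fault tolerance: a swarm of N managers keeps a quorum (a majority)
# as long as at most (N-1)/2 managers are lost (integer division).
fault_tolerance() {
  echo $(( ($1 - 1) / 2 ))
}

fault_tolerance 3   # prints 1: a 3-manager swarm survives 1 failure
fault_tolerance 5   # prints 2
fault_tolerance 6   # prints 2: an even manager count adds no extra tolerance
```

This is why an odd number of managers is recommended: going from 5 to 6 managers adds Raft overhead without improving fault tolerance.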


@ -4,12 +4,12 @@ keywords: guide, swarm mode, node
title: Manage nodes in a swarm
---
As part of the swarm management lifecycle, you may need to view or update a node as follows:
As part of the swarm management lifecycle, you may need to:
* [list nodes in the swarm](#list-nodes)
* [inspect an individual node](#inspect-an-individual-node)
* [update a node](#update-a-node)
* [leave the swarm](#leave-the-swarm)
* [List nodes in the swarm](#list-nodes)
* [Inspect an individual node](#inspect-an-individual-node)
* [Update a node](#update-a-node)
* [Leave the swarm](#leave-the-swarm)
## List nodes
@ -85,21 +85,21 @@ Engine Version: 1.12.0-dev
## Update a node
You can modify node attributes as follows:
You can modify node attributes to:
* [change node availability](#change-node-availability)
* [add or remove label metadata](#add-or-remove-label-metadata)
* [change a node role](#promote-or-demote-a-node)
* [Change node availability](#change-node-availability)
* [Add or remove label metadata](#add-or-remove-label-metadata)
* [Change a node role](#promote-or-demote-a-node)
### Change node availability
Changing node availability lets you:
* drain a manager node so that it only performs swarm management tasks and is
* Drain a manager node so that it only performs swarm management tasks and is
unavailable for task assignment.
* drain a node so you can take it down for maintenance.
* pause a node so it can't receive new tasks.
* restore unavailable or paused nodes availability status.
* Drain a node so you can take it down for maintenance.
* Pause a node so it can't receive new tasks.
* Restore the availability status of unavailable or paused nodes.
For example, to change a manager node to `Drain` availability:
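That might look like the following (the node name `node-1` is hypothetical):

```console
$ docker node update --availability drain node-1
```

The scheduler then reschedules the node's existing tasks elsewhere and assigns it no new ones.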
@ -157,7 +157,9 @@ You can promote a worker node to the manager role. This is useful when a
manager node becomes unavailable or if you want to take a manager offline for
maintenance. Similarly, you can demote a manager node to the worker role.
> **Note**: Regardless of your reason to promote or demote
> **Note**
>
> Regardless of your reason to promote or demote
> a node, you must always maintain a quorum of manager nodes in the
> swarm. For more information refer to the [Swarm administration guide](admin_guide.md).
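The promote and demote operations described above are single commands, sketched here with hypothetical node names:

```console
$ docker node promote worker1    # worker1 joins the Raft consensus as a manager
$ docker node demote manager2    # manager2 returns to the worker role
```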
@ -216,7 +218,7 @@ $ docker swarm leave
Node left the swarm.
```
When a node leaves the swarm, the Docker Engine stops running in swarm
When a node leaves the swarm, Docker Engine stops running in Swarm
mode. The orchestrator no longer schedules tasks to the node.
If the node is a manager node, you receive a warning about maintaining the
@ -227,7 +229,7 @@ disaster recovery measures.
For information about maintaining a quorum and disaster recovery, refer to the
[Swarm administration guide](admin_guide.md).
After a node leaves the swarm, you can run the `docker node rm` command on a
After a node leaves the swarm, you can run `docker node rm` on a
manager node to remove the node from the node list.
For instance:
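A sketch with a hypothetical node name:

```console
$ docker node rm node-2
```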


@ -11,25 +11,25 @@ This topic discusses how to manage the application data for your swarm services.
## Swarm and types of traffic
A Docker swarm generates two different kinds of traffic:
- **Control and management plane traffic**: This includes swarm management
- Control and management plane traffic: This includes swarm management
messages, such as requests to join or leave the swarm. This traffic is
always encrypted.
- **Application data plane traffic**: This includes container traffic and
- Application data plane traffic: This includes container traffic and
traffic to and from external clients.
## Key network concepts
The following three network concepts are important to swarm services:
- **Overlay networks** manage communications among the Docker daemons
- Overlay networks manage communications among the Docker daemons
participating in the swarm. You can create overlay networks, in the same way
as user-defined networks for standalone containers. You can attach a service
to one or more existing overlay networks as well, to enable service-to-service
communication. Overlay networks are Docker networks that use the `overlay`
network driver.
- The **ingress network** is a special overlay network that facilitates
- The ingress network is a special overlay network that facilitates
load balancing among a service's nodes. When any swarm node receives a
request on a published port, it hands that request off to a module called
`IPVS`. `IPVS` keeps track of all the IP addresses participating in that
@ -40,7 +40,7 @@ The following three network concepts are important to swarm services:
swarm. Most users do not need to customize its configuration, but Docker allows
you to do so.
- The **docker_gwbridge** is a bridge network that connects the overlay
- The docker_gwbridge is a bridge network that connects the overlay
networks (including the `ingress` network) to an individual Docker daemon's
physical network. By default, each container a service is running is connected
to its local Docker daemon host's `docker_gwbridge` network.
@ -49,7 +49,10 @@ The following three network concepts are important to swarm services:
join a swarm. Most users do not need to customize its configuration, but
Docker allows you to do so.
> **See also** [Networking overview](../../network/index.md) for more details about Swarm networking in general.
> **Tip**
>
> See also [Networking overview](../../network/index.md) for more details about Swarm networking in general.
{ .tip }
## Firewall considerations
@ -216,7 +219,9 @@ Multiple pools can be configured if discontiguous address space is required. How
The default mask length can be configured and is the same for all networks. It is set to `/24` by default. To change the default subnet mask length, use the `--default-addr-pool-mask-length` command line option.
> **Note**: Default address pools can only be configured on `swarm init` and cannot be altered after cluster creation.
> **Note**
>
> Default address pools can only be configured on `swarm init` and cannot be altered after cluster creation.
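For example, an init that carves `/26` subnets out of a custom pool might look like this (the address values are illustrative):

```console
$ docker swarm init \
  --default-addr-pool 10.20.0.0/16 \
  --default-addr-pool-mask-length 26
```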
##### Overlay network size limitations
@ -269,7 +274,7 @@ from any swarm node which is joined to the swarm and is in a `running` state.
### Configure service discovery
**Service discovery** is the mechanism Docker uses to route a request from your
Service discovery is the mechanism Docker uses to route a request from your
service's external clients to an individual swarm node, without the client
needing to know how many nodes are participating in the service or their
IP addresses or ports. You don't need to publish ports which are used between
@ -348,7 +353,9 @@ services which publish ports, such as a WordPress service which publishes port
my-ingress
```
> **Note**: You can name your `ingress` network something other than
> **Note**
>
> You can name your `ingress` network something other than
> `ingress`, but you can only have one. An attempt to create a second one
> fails.
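Replacing the default ingress network is sketched below; this assumes no services currently publish ports, and the subnet value is illustrative:

```console
$ docker network rm ingress
$ docker network create \
  --driver overlay \
  --ingress \
  --subnet=10.11.0.0/16 \
  my-ingress
```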


@ -4,16 +4,16 @@ keywords: docker, container, cluster, swarm, raft
title: Raft consensus in swarm mode
---
When the Docker Engine runs in swarm mode, manager nodes implement the
When Docker Engine runs in Swarm mode, manager nodes implement the
[Raft Consensus Algorithm](http://thesecretlivesofdata.com/raft/) to manage the global cluster state.
The reason why *Docker swarm mode* is using a consensus algorithm is to make sure that
Swarm mode uses a consensus algorithm to make sure that
all the manager nodes that are in charge of managing and scheduling tasks in the cluster
are storing the same consistent state.
Having the same consistent state across the cluster means that in case of a failure,
any Manager node can pick up the tasks and restore the services to a stable state.
For example, if the *Leader Manager* which is responsible for scheduling tasks in the
For example, if the leader manager, which is responsible for scheduling tasks in the
cluster, dies unexpectedly, any other manager can pick up the task of scheduling and
re-balance tasks to match the desired state.
@ -28,11 +28,11 @@ cannot process any more requests to schedule additional tasks. The existing
tasks keep running but the scheduler cannot rebalance tasks to
cope with failures if the manager set is not healthy.
The implementation of the consensus algorithm in swarm mode means it features
The implementation of the consensus algorithm in Swarm mode means it features
the properties inherent to distributed systems:
- *agreement on values* in a fault tolerant system. (Refer to [FLP impossibility theorem](https://www.the-paper-trail.org/post/2008-08-13-a-brief-tour-of-flp-impossibility/)
- Agreement on values in a fault tolerant system. (Refer to [FLP impossibility theorem](https://www.the-paper-trail.org/post/2008-08-13-a-brief-tour-of-flp-impossibility/)
and the [Raft Consensus Algorithm paper](https://www.usenix.org/system/files/conference/atc14/atc14-paper-ongaro.pdf))
- *mutual exclusion* through the leader election process
- *cluster membership* management
- *globally consistent object sequencing* and CAS (compare-and-swap) primitives
- Mutual exclusion through the leader election process
- Cluster membership management
- Globally consistent object sequencing and CAS (compare-and-swap) primitives


@ -25,7 +25,9 @@ runtime but you don't want to store in the image or in source control, such as:
- Other important data such as the name of a database or internal server
- Generic strings or binary content (up to 500 kb in size)
> **Note**: Docker secrets are only available to swarm services, not to
> **Note**
>
> Docker secrets are only available to swarm services, not to
> standalone containers. To use this feature, consider adapting your container
> to run as a service. Stateful containers can typically run with a scale of 1
> without changing the container code.
@ -50,7 +52,7 @@ differences in the implementations, they are called out in the
examples below. Keep the following notable differences in mind:
- Microsoft Windows has no built-in driver for managing RAM disks, so within
running Windows containers, secrets **are** persisted in clear text to the
running Windows containers, secrets are persisted in clear text to the
container's root disk. However, the secrets are explicitly removed when a
container stops. In addition, Windows does not support persisting a running
container as an image using `docker commit` or similar commands.
@ -130,7 +132,9 @@ easier to use Docker secrets. To find out how to modify your own images in
a similar way, see
[Build support for Docker Secrets into your images](#build-support-for-docker-secrets-into-your-images).
> **Note**: These examples use a single-Engine swarm and unscaled services for
> **Note**
>
> These examples use a single-Engine swarm and unscaled services for
> simplicity. The examples use Linux containers, but Windows containers also
> support secrets. See [Windows support](#windows-support).
@ -208,7 +212,7 @@ real-world example, continue to
This is a secret
```
5. Verify that the secret is **not** available if you commit the container.
5. Verify that the secret is not available if you commit the container.
```none
$ docker commit $(docker ps --filter name=redis -q) committed_redis
@ -300,7 +304,9 @@ This example assumes that you have PowerShell installed.
microsoft/iis:nanoserver
```
> **Note**: There is technically no reason to use secrets for this
> **Note**
>
> There is technically no reason to use secrets for this
> example; [configs](configs.md) are a better fit. This example is
> for illustration only.
@ -465,7 +471,9 @@ generate the site key and certificate, name the files `site.key` and
actually starts, so you don't need to rebuild your image if you change the
Nginx configuration.
> **Note**: Normally you would create a Dockerfile which copies the `site.conf`
> **Note**
>
> Normally you would create a Dockerfile which copies the `site.conf`
> into place, build the image, and run a container using your custom image.
> This example does not require a custom image. It puts the `site.conf`
> into place and runs the container all in one step.
@ -621,7 +629,9 @@ This example illustrates some techniques to use Docker secrets to avoid saving
sensitive credentials within your image or passing them directly on the command
line.
> **Note**: This example uses a single-Engine swarm for simplicity, and uses a
> **Note**
>
> This example uses a single-Engine swarm for simplicity, and uses a
> single-node MySQL service because a single MySQL server instance cannot be
> scaled by simply using a replicated service, and setting up a MySQL cluster is
> beyond the scope of this example.
@ -637,7 +647,9 @@ line.
password. You can use another command to generate the password if you
choose.
> **Note**: After you create a secret, you cannot update it. You can only
> **Note**
>
> After you create a secret, you cannot update it. You can only
> remove and re-create it, and you cannot remove a secret that a service is
> using. However, you can grant or revoke a running service's access to
> secrets using `docker service update`. If you need the ability to update a
@ -828,7 +840,9 @@ This example builds upon the previous one. In this scenario, you create a new
secret with a new MySQL password, update the `mysql` and `wordpress` services to
use it, then remove the old secret.
> **Note**: Changing the password on a MySQL database involves running extra
> **Note**
>
> Changing the password on a MySQL database involves running extra
> queries or commands, as opposed to just changing a single environment variable
> or a file, since the image only sets the MySQL password if the database doesn't
> already exist, and MySQL stores the password within a MySQL database by default.
@ -863,7 +877,9 @@ use it, then remove the old secret.
Even though the MySQL service has access to both the old and new secrets
now, the MySQL password for the WordPress user has not yet been changed.
> **Note**: This example does not rotate the MySQL `root` password.
> **Note**
>
> This example does not rotate the MySQL `root` password.
3. Now, change the MySQL password for the `wordpress` user using the
`mysqladmin` CLI. This command reads the old and new password from the files
@ -889,7 +905,7 @@ use it, then remove the old secret.
bash -c 'mysqladmin --user=wordpress --password="$(< /run/secrets/old_mysql_password)" password "$(< /run/secrets/mysql_password)"'
```
**or**:
Or:
```console
$ docker container exec $(docker ps --filter name=mysql -q) \
@ -1008,15 +1024,15 @@ volumes:
```
This example creates a simple WordPress site using two secrets in
a compose file.
a Compose file.
The keyword `secrets:` defines two secrets `db_password:` and
`db_root_password:`.
The top-level element `secrets` defines two secrets `db_password` and
`db_root_password`.
When deploying, Docker creates these two secrets and populates them with the
content from the file specified in the compose file.
content from the file specified in the Compose file.
The db service uses both secrets, and the wordpress is using one.
The `db` service uses both secrets, and `wordpress` uses one.
When you deploy, Docker mounts a file under `/run/secrets/<secret_name>` in the
services. These files are never persisted on disk, but are managed in memory.
@ -1024,5 +1040,5 @@ services. These files are never persisted on disk, but are managed in memory.
Each service uses environment variables to specify where the service should look
for that secret data.
More information on short and long syntax for secrets can be found at
[Compose file version 3 reference](../../compose/compose-file/compose-file-v3.md#secrets).
More information on short and long syntax for secrets can be found in the
[Compose Specification](../../compose/compose-file/09-secrets.md).


@ -5,20 +5,20 @@ title: Deploy services to a swarm
toc_max: 4
---
Swarm services use a *declarative* model, which means that you define the
Swarm services use a declarative model, which means that you define the
desired state of the service, and rely upon Docker to maintain this state. The
state includes information such as (but not limited to):
- the image name and tag the service containers should run
- how many containers participate in the service
- whether any ports are exposed to clients outside the swarm
- whether the service should start automatically when Docker starts
- the specific behavior that happens when the service is restarted (such as
- The image name and tag the service containers should run
- How many containers participate in the service
- Whether any ports are exposed to clients outside the swarm
- Whether the service should start automatically when Docker starts
- The specific behavior that happens when the service is restarted (such as
whether a rolling restart is used)
- characteristics of the nodes where the service can run (such as resource
- Characteristics of the nodes where the service can run (such as resource
constraints and placement preferences)
For an overview of swarm mode, see [Swarm mode key concepts](key-concepts.md).
For an overview of Swarm mode, see [Swarm mode key concepts](key-concepts.md).
For an overview of how services work, see
[How services work](how-swarm-mode-works/services.md).
@ -76,26 +76,27 @@ For more details about image tag resolution, see
### gMSA for Swarm
*This example will only work for a windows container*
> **Note**
>
> This example only works for a Windows container.
Swarm now allows using a Docker Config as a gMSA credential spec - a requirement for Active Directory-authenticated applications. This reduces the burden of distributing credential specs to the nodes they're used on.
Swarm now allows using a Docker config as a gMSA credential spec - a requirement for Active Directory-authenticated applications. This reduces the burden of distributing credential specs to the nodes they're used on.
The following example assumes that a gMSA and its credential spec (called `credspec.json`) already exist, and that the nodes being deployed to are correctly configured for the gMSA.
To use a Config as a credential spec, first create the Docker Config containing the credential spec:
To use a config as a credential spec, first create the Docker config containing the credential spec:
```console
$ docker config create credspec credspec.json
```
Now, you should have a Docker Config named credspec, and you can create a service using this credential spec. To do so, use the --credential-spec flag with the config name, like this:
Now, you should have a Docker config named `credspec`, and you can create a service using this credential spec. To do so, use the `--credential-spec` flag with the config name, like this:
```console
$ docker service create --credential-spec="config://credspec" <your image>
```
Your service will use the gMSA credential spec when it starts, but unlike a typical Docker Config (used by passing the --config flag), the credential spec will not be mounted into the container.
Your service uses the gMSA credential spec when it starts, but unlike a typical Docker config (used by passing the `--config` flag), the credential spec is not mounted into the container.
### Create a service using an image on a private registry
@ -119,9 +120,11 @@ nodes are able to log into the registry and pull the image.
### Provide credential specs for managed service accounts
In Enterprise Edition 3.0, security is improved through the centralized distribution and management of Group Managed Service Account(gMSA) credentials using Docker Config functionality. Swarm now allows using a Docker Config as a gMSA credential spec, which reduces the burden of distributing credential specs to the nodes on which they are used.
In Enterprise Edition 3.0, security is improved through the centralized distribution and management of Group Managed Service Account (gMSA) credentials using Docker config functionality. Swarm now allows using a Docker config as a gMSA credential spec, which reduces the burden of distributing credential specs to the nodes on which they are used.
**Note**: This option is only applicable to services using Windows containers.
> **Note**
>
> This option is only applicable to services using Windows containers.
Credential spec files are applied at runtime, eliminating the need for host-based credential spec files or registry entries - no gMSA credentials are written to disk on worker nodes. You can make credential specs available to Docker Engine running swarmkit worker nodes before a container starts. When deploying a service using a gMSA-based config, the credential spec is passed directly to the runtime of containers in that service.
@ -142,7 +145,7 @@ $ echo $contents > contents.json
Make sure that the nodes to which you are deploying are correctly configured for the gMSA.
To use a Config as a credential spec, create a Docker Config in a credential spec file named `credpspec.json`.
To use a config as a credential spec, create a Docker config from the credential
spec file named `credspec.json`. You can specify any name for the config.
```console
@ -155,7 +158,7 @@ Now you can create a service using this credential spec. Specify the `--credenti
$ docker service create --credential-spec="config://credspec" <your image>
```
Your service uses the gMSA credential spec when it starts, but unlike a typical Docker Config (used by passing the --config flag), the credential spec is not mounted into the container.
Your service uses the gMSA credential spec when it starts, but unlike a typical Docker config (used by passing the `--config` flag), the credential spec is not mounted into the container.
## Update a service
@ -219,9 +222,9 @@ one of those commands with the `--help` flag.
You can configure the following options for the runtime environment in the
container:
* environment variables using the `--env` flag
* the working directory inside the container using the `--workdir` flag
* the username or UID using the `--user` flag
* Environment variables using the `--env` flag
* The working directory inside the container using the `--workdir` flag
* The username or UID using the `--user` flag
The following service's containers have an environment variable `$MYVAR`
set to `myvalue`, run from the `/tmp/` directory, and run as the
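Such a service might be created like this (the service name and image are illustrative):

```console
$ docker service create --name helloworld \
  --env MYVAR=myvalue \
  --workdir /tmp \
  --user my_user \
  alpine ping docker.com
```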
@ -256,7 +259,7 @@ different ways, depending on your desired outcome.
An image version can be expressed in several different ways:
- If you specify a tag, the manager (or the Docker client, if you use
[content trust](#image_resolution_with_trust)) resolves that tag to a digest.
[content trust](../security/trust/index.md)) resolves that tag to a digest.
When the request to create a container task is received on a worker node, the
worker node only sees the digest, not the tag.
@ -301,13 +304,14 @@ updated. This feature is particularly important if you do use often-changing tag
such as `latest`, because it ensures that all service tasks use the same version
of the image.
> **Note**: If [content trust](../security/trust/index.md) is
> **Note**
>
> If [content trust](../security/trust/index.md) is
> enabled, the client actually resolves the image's tag to a digest before
> contacting the swarm manager, to verify that the image is signed.
> Thus, if you use content trust, the swarm manager receives the request
> pre-resolved. In this case, if the client cannot resolve the image to a
> digest, the request fails.
{ #image_resolution_with_trust }
If the manager can't resolve the tag to a digest, each worker
node is responsible for resolving the tag to a digest, and different nodes may
@ -352,7 +356,9 @@ When you run `service update` with the `--image` flag, the swarm manager queries
Docker Hub or your private Docker registry for the digest the tag currently
points to and updates the service tasks to use that digest.
> **Note**: If you use [content trust](#image_resolution_with_trust), the Docker
> **Note**
>
> If you use [content trust](../security/trust/index.md), the Docker
> client resolves the image, and the swarm manager receives the image and digest,
> rather than a tag.
@ -418,7 +424,7 @@ Keep reading for more information and use cases for each of these methods.
To publish a service's ports externally to the swarm, use the
`--publish <PUBLISHED-PORT>:<SERVICE-PORT>` flag. The swarm makes the service
accessible at the published port **on every swarm node**. If an external host
accessible at the published port on every swarm node. If an external host
connects to that port on any swarm node, the routing mesh routes it to a task.
The external host does not need to know the IP addresses or internally-used
ports of the service tasks to interact with the service. When a user or process
@ -439,7 +445,7 @@ $ docker service create --name my_web \
```
Three tasks run on up to three nodes. You don't need to know which nodes
are running the tasks; connecting to port 8080 on **any** of the 10 nodes
are running the tasks; connecting to port 8080 on any of the 10 nodes
connects you to one of the three `nginx` tasks. You can test this using `curl`.
The following example assumes that `localhost` is one of the swarm nodes. If
this is not the case, or `localhost` does not resolve to an IP address on your
@ -468,7 +474,9 @@ control of the process for routing requests to your service's tasks. To publish
a service's port directly on the node where it is running, use the `mode=host`
option to the `--publish` flag.
> **Note**: If you publish a service's ports directly on the swarm node using
> **Note**
>
> If you publish a service's ports directly on the swarm node using
> `mode=host` and also set `published=<PORT>` this creates an implicit
> limitation that you can only run one task for that service on a given swarm
> node. You can work around this by specifying `published` without a port
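A `mode=host` publication as described in the note above might look like this (the service name and image are illustrative):

```console
$ docker service create --name dns-cache \
  --publish published=53,target=53,protocol=udp,mode=host \
  dns-cache
```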
@ -483,7 +491,7 @@ option to the `--publish` flag.
[nginx](https://hub.docker.com/_/nginx/) is an open source reverse proxy, load
balancer, HTTP cache, and a web server. If you run nginx as a service using the
routing mesh, connecting to the nginx port on any swarm node shows you the
web page for (effectively) a random swarm node running the service.
The following example runs nginx as a service on each node in your swarm and
exposes the nginx port locally on each swarm node.
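One way to sketch such a service (the service name is illustrative; `mode=host` binds port 8080 directly on each node rather than through the routing mesh):

```console
$ docker service create \
  --name my-nginx \
  --mode global \
  --publish published=8080,target=80,mode=host \
  nginx
```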
@ -500,7 +508,9 @@ You can reach the nginx server on port 8080 of every swarm node. If you add a
node to the swarm, an nginx task is started on it. You cannot start another
service or container on any swarm node which binds to port 8080.
> **Note**
>
> This is a purely illustrative example. Creating an application-layer
> routing framework for a multi-tiered service is complex and out of scope for
> this topic.
@ -556,9 +566,13 @@ flag. For more information, see
### Customize a service's isolation mode
> **Important**
>
> This setting applies to Windows hosts only and is ignored for Linux hosts.
{ .important }
Docker allows you to specify a swarm service's isolation
mode. The isolation mode can be one of the following:
- `default`: Use the default isolation mode configured for the Docker host, as
  configured by the `--exec-opt` flag or `exec-opts` array in `daemon.json`. If
@ -568,7 +582,9 @@ hosts.** The isolation mode can be one of the following:
- `process`: Run the service tasks as a separate process on the host.
> **Note**
>
> `process` isolation mode is only supported on Windows Server.
> Windows 10 only supports `hyperv` isolation mode.
- `hyperv`: Run the service tasks as isolated `hyperv` tasks. This increases
@ -697,7 +713,7 @@ $ docker service create \
nginx
```
You can also use the `constraint` service-level key in a `compose.yml`
file.
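For example, a placement constraint in a Compose file might look like the following sketch (the service name and label are illustrative; in the Compose format, constraints live under the `deploy.placement` key):

```yaml
services:
  web:
    image: nginx
    deploy:
      placement:
        constraints:
          - node.labels.region == east
```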
If you specify multiple placement constraints, the service only deploys onto
@ -733,6 +749,8 @@ Placement preferences are not strictly enforced. If no node has the label
you specify in your preference, the service is deployed as though the
preference were not set.
> **Note**
>
> Placement preferences are ignored for global services.
The following example sets a preference to spread the deployment across nodes
@ -748,13 +766,13 @@ $ docker service create \
redis:3.0.6
```
> **Note**
>
> Nodes which are missing the label used to spread still receive
> task assignments. As a group, these nodes receive tasks in equal
> proportion to any of the other groups identified by a specific label
> value. In a sense, a missing label is the same as having the label with
> a null value attached to it. If the service should only run on
> nodes with the label being used for the spread preference, the
> preference should be combined with a constraint.
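A sketch of combining the two, assuming the nodes that should run the service carry an illustrative `node.labels.storage` label in addition to the `node.labels.datacenter` label used for spreading:

```console
$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref spread=node.labels.datacenter \
  --constraint node.labels.storage==ssd \
  redis:3.0.6
```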
@ -963,7 +981,9 @@ The following examples show bind mount syntax:
<IMAGE>
```
> **Important**
>
> Bind mounts can be useful but they can also cause problems. In
> most cases, it is recommended that you architect your application such that
> mounting paths from the host is unnecessary. The main risks include the
> following:
@ -979,6 +999,7 @@ The following examples show bind mount syntax:
> - Host bind mounts are non-portable. When you use bind mounts, there is no
> guarantee that your application runs the same way in development as it does
> in production.
{ .important }
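Where the data does not need to live at a specific host path, a named volume mount is often a safer alternative to a bind mount. A minimal sketch (the service, volume, and target names here are illustrative):

```console
$ docker service create \
  --name my-service \
  --mount type=volume,source=my-data,target=/srv/data \
  nginx
```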
### Create services using templates

View File

@ -10,20 +10,17 @@ a stack description in the form of a [Compose file](../../compose/compose-file/c
{{< include "swarm-compose-compat.md" >}}
To run through this tutorial, you need:
1. A Docker Engine running in [Swarm mode](swarm-mode.md).
If you're not familiar with Swarm mode, you might want to read
[Swarm mode key concepts](key-concepts.md)
and [How services work](how-swarm-mode-works/services.md).
> **Note**
>
> If you're trying things out on a local development environment,
> you can put your engine into Swarm mode with `docker swarm init`.
>
> If you've already got a multi-node swarm running, keep in mind that all
> `docker stack` and `docker service` commands must be run from a manager
@ -31,7 +28,6 @@ To run through this tutorial, you need:
2. A current version of [Docker Compose](../../compose/install/index.md).
## Set up a Docker registry
Because a swarm consists of multiple Docker Engines, a registry is required to
@ -115,7 +111,7 @@ counter whenever you visit it.
CMD ["python", "app.py"]
```
5. Create a file called `compose.yml` and paste this in:
   ```yaml
services:
@ -224,7 +220,7 @@ The stack is now ready to be deployed.
1. Create the stack with `docker stack deploy`:
```console
$ docker stack deploy --compose-file compose.yml stackdemo
Ignoring unsupported options: build
@ -263,8 +259,8 @@ The stack is now ready to be deployed.
Hello World! I have been seen 3 times.
```
With Docker's built-in routing mesh, you can access any node in the
swarm on port `8000` and get routed to the app:
```console
$ curl http://address-of-other-node:8000
@ -288,7 +284,7 @@ The stack is now ready to be deployed.
```
5. If you're just testing things out on a local machine and want to bring your
Docker Engine out of Swarm mode, use `docker swarm leave`:
```console
$ docker swarm leave --force

View File

@ -1,52 +1,52 @@
---
description: Run Docker Engine in swarm mode
keywords: guide, swarm mode, node, Docker Engines
title: Run Docker Engine in swarm mode
---
When you first install and start working with Docker Engine, Swarm mode is
disabled by default. When you enable Swarm mode, you work with the concept of
services managed through the `docker service` command.
There are two ways to run the engine in Swarm mode:
* Create a new swarm, covered in this article.
* [Join an existing swarm](join-nodes.md).
When you run the engine in Swarm mode on your local machine, you can create and
test services based upon images you've created or other available images. In
your production environment, Swarm mode provides a fault-tolerant platform with
cluster management features to keep your services running and available.
These instructions assume you have installed the Docker Engine on
a machine to serve as a manager node in your swarm.
If you haven't already, read through the [Swarm mode key concepts](key-concepts.md)
and try the [Swarm mode tutorial](swarm-tutorial/index.md).
## Create a swarm
When you run the command to create a swarm, Docker Engine starts running in Swarm mode.
Run [`docker swarm init`](../reference/commandline/swarm_init.md)
to create a single-node swarm on the current node. The engine sets up the swarm
as follows:
* Switches the current node into Swarm mode.
* Creates a swarm named `default`.
* Designates the current node as a leader manager node for the swarm.
* Names the node with the machine hostname.
* Configures the manager to listen on an active network interface on port `2377`.
* Sets the current node to `Active` availability, meaning it can receive tasks
from the scheduler.
* Starts an internal distributed data store for Engines participating in the
swarm to maintain a consistent view of the swarm and all services running on it.
* By default, generates a self-signed root CA for the swarm.
* By default, generates tokens for worker and manager nodes to join the
swarm.
* Creates an overlay network named `ingress` for publishing service ports
external to the swarm.
* Creates default IP addresses and a subnet mask for your overlay networks.
The output for `docker swarm init` provides the connection command to use when
you join new worker nodes to the swarm:
@ -66,15 +66,15 @@ To add a manager to this swarm, run 'docker swarm join-token manager' and follow
### Configuring default address pools
By default, Swarm mode uses the address pool `10.0.0.0/8` for global scope (overlay) networks. Every
network that does not have a subnet specified will have a subnet sequentially allocated from this pool. In
some circumstances it may be desirable to use a different default IP address pool for networks.
For example, if the default `10.0.0.0/8` range conflicts with already allocated address space in your network,
then it is desirable to ensure that networks use a different range without requiring swarm users to specify
each subnet with the `--subnet` flag.
To configure custom default address pools, you must define pools at swarm initialization using the
`--default-addr-pool` command line option. This command line option uses CIDR notation for defining the subnet mask.
To create the custom address pool for Swarm, you must define at least one default address pool, and an optional default address pool subnet mask. For example, for `10.0.0.0/27`, use the value `27`.
@ -88,7 +88,7 @@ The format of the command is:
$ docker swarm init --default-addr-pool <IP range in CIDR> [--default-addr-pool <IP range in CIDR> --default-addr-pool-mask-length <CIDR value>]
```
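Multiple pools and a custom subnet mask length can be combined; for example, the following sketch defines two address pools and carves `/26` subnets out of them:

```console
$ docker swarm init \
  --default-addr-pool 10.20.0.0/16 \
  --default-addr-pool 10.30.0.0/16 \
  --default-addr-pool-mask-length 26
```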
The command to create a default IP address pool with a /16 (class B) for the `10.20.0.0` network looks like this:
```console
$ docker swarm init --default-addr-pool 10.20.0.0/16

View File

@ -14,14 +14,16 @@ nodes and the key used to encrypt and decrypt Raft logs on disk are loaded
into each manager node's memory. Docker has the ability to protect the mutual TLS
encryption key and the key used to encrypt and decrypt Raft logs at rest, by
allowing you to take ownership of these keys and to require manual unlocking of
your managers. This feature is called autolock.
When Docker restarts, you must
[unlock the swarm](#unlock-a-swarm) first, using a
key encryption key generated by Docker when the swarm was locked. You can
rotate this key encryption key at any time.
> **Note**
>
> You don't need to unlock the swarm when a new node joins the swarm,
> because the key is propagated to it over mutual TLS.
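Autolock can also be turned on or off for a swarm that is already running; for example:

```console
$ docker swarm update --autolock=true
```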
## Initialize a swarm with autolocking enabled
@ -150,7 +152,8 @@ Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.
```
> **Warning**
>
> When you rotate the unlock key, keep a record of the old key
> around for a few minutes, so that if a manager goes down before it gets the new
> key, it may still be unlocked with the old one.