mirror of https://github.com/docker/docs.git
Proofread Swarm
Co-authored-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
parent 3c9c346877
commit f24fc7970e
@@ -185,7 +185,7 @@ manager:

 - Restart the daemon and see if the manager comes back as reachable.
 - Reboot the machine.
-- If neither restarting or rebooting work, you should add another manager node or promote a worker to be a manager node. You also need to cleanly remove the failed node entry from the manager set with `docker node demote <NODE>` and `docker node rm <id-node>`.
+- If neither restarting nor rebooting works, you should add another manager node or promote a worker to be a manager node. You also need to cleanly remove the failed node entry from the manager set with `docker node demote <NODE>` and `docker node rm <id-node>`.

 Alternatively you can also get an overview of the swarm health from a manager
 node with `docker node ls`:
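As a rough illustration of the cleanup described above (the node name `node9` is a made-up placeholder):

```console
$ docker node demote node9   # drop the failed manager back to a worker
$ docker node rm node9       # remove its entry from the swarm
$ docker node ls             # run on a healthy manager to review overall swarm health
```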
@@ -207,8 +207,8 @@ You should never restart a manager node by copying the `raft` directory from ano

 To cleanly re-join a manager node to a cluster:

-1. To demote the node to a worker, run `docker node demote <NODE>`.
-2. To remove the node from the swarm, run `docker node rm <NODE>`.
+1. Demote the node to a worker using `docker node demote <NODE>`.
+2. Remove the node from the swarm using `docker node rm <NODE>`.
 3. Re-join the node to the swarm with a fresh state using `docker swarm join`.

 For more information on joining a manager node to a swarm, refer to
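A hedged sketch of step 3, with a placeholder manager address (`192.0.2.10`):

```console
$ docker swarm join-token manager                              # on an existing manager: prints the join command and token
$ docker swarm join --token <MANAGER-TOKEN> 192.0.2.10:2377    # on the node being re-joined
```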
@@ -318,7 +318,7 @@ restore the data to a new swarm.

 ### Recover from losing the quorum

-Swarm is resilient to failures and the swarm can recover from any number
+Swarm is resilient to failures and can recover from any number
 of temporary node failures (machine reboots or crash with restart) or other
 transient errors. However, a swarm cannot automatically recover if it loses a
 quorum. Tasks on existing worker nodes continue to run, but administrative
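If the quorum really is lost, the usual way out is to force a new cluster from a surviving manager; a minimal sketch, assuming that is the recovery path the rest of this page describes (the address is a placeholder):

```console
$ docker swarm init --force-new-cluster --advertise-addr 192.0.2.10:2377   # keeps existing services, tasks, and data
```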
@@ -52,7 +52,7 @@ fails its health check or crashes, the orchestrator creates a new replica task
 that spawns a new container.

 A task is a one-directional mechanism. It progresses monotonically through a
-series of states: assigned, prepared, running, etc. If the task fails the
+series of states: assigned, prepared, running, etc. If the task fails, the
 orchestrator removes the task and its container and then creates a new task to
 replace it according to the desired state specified by the service.

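To watch tasks move through those states, something like this can be used (`my-web` is a placeholder service name):

```console
$ docker service ps my-web   # the CURRENT STATE column shows assigned, preparing, running, and so on
```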
@@ -60,7 +60,7 @@ The underlying logic of Docker swarm mode is a general purpose scheduler and
 orchestrator. The service and task abstractions themselves are unaware of the
 containers they implement. Hypothetically, you could implement other types of
 tasks such as virtual machine tasks or non-containerized process tasks. The
-scheduler and orchestrator are agnostic about the type of task. However, the
+scheduler and orchestrator are agnostic about the type of the task. However, the
 current version of Docker only supports container tasks.

 The diagram below shows how swarm mode accepts service create requests and
@@ -109,9 +109,8 @@ serving the same content.
 A global service is a service that runs one task on every node. There is no
 pre-specified number of tasks. Each time you add a node to the swarm, the
 orchestrator creates a task and the scheduler assigns the task to the new node.
-Good candidates for global services are monitoring agents, an anti-virus
-scanners or other types of containers that you want to run on every node in the
-swarm.
+Good candidates for global services are monitoring agents, anti-virus scanners
+or other types of containers that you want to run on every node in the swarm.

 The diagram below shows a three-service replica in yellow and a global service
 in gray.
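A hedged example of creating such a global service (the service name and image are placeholders):

```console
$ docker service create --mode global --name node-monitor prom/node-exporter
```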
@@ -91,7 +91,7 @@ services. The swarm manager automatically assigns addresses to the containers
 on the overlay network when it initializes or updates the application.

 * **Service discovery:** Swarm manager nodes assign each service in the swarm a
-unique DNS name and load balances running containers. You can query every
+unique DNS name and load balance running containers. You can query every
 container running in the swarm through a DNS server embedded in the swarm.

 * **Load balancing:** You can expose the ports for services to an
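A small sketch of the embedded DNS behaviour described above (network, service, and image names are placeholders; the network is made attachable only so a throwaway container can join it for the lookup):

```console
$ docker network create -d overlay --attachable my-overlay
$ docker service create --name my-web --network my-overlay nginx:alpine
$ docker run --rm --network my-overlay alpine nslookup my-web   # the embedded DNS server resolves the service name
```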
@@ -22,13 +22,13 @@ define its optimal state (number of replicas, network and storage resources
 available to it, ports the service exposes to the outside world, and more).
 Docker works to maintain that desired state. For instance, if a worker node
 becomes unavailable, Docker schedules that node's tasks on other nodes. A _task_
-is a running container which is part of a swarm service and managed by a swarm
-manager, as opposed to a standalone container.
+is a running container which is part of a swarm service and is managed by a
+swarm manager, as opposed to a standalone container.

 One of the key advantages of swarm services over standalone containers is that
 you can modify a service's configuration, including the networks and volumes it
 is connected to, without the need to manually restart the service. Docker will
-update the configuration, stop the service tasks with the out of date
+update the configuration, stop the service tasks with out of date
 configuration, and create new ones matching the desired configuration.

 When Docker is running in swarm mode, you can still run standalone containers
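For example, a service's desired state can be changed in place without a manual restart (service and network names are placeholders):

```console
$ docker service update --replicas 5 my-web                 # Docker converges the running tasks to the new desired state
$ docker service update --network-add my-overlay my-web     # attach the service to an additional network on the fly
```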
@@ -41,7 +41,7 @@ workers, or both.
 In the same way that you can use [Docker Compose](../../compose/index.md) to define and run
 containers, you can define and run [Swarm service](services.md) stacks.

-Keep reading for details about concepts relating to Docker swarm services,
+Keep reading for details about concepts related to Docker swarm services,
 including nodes, services, tasks, and load balancing.

 ## Nodes
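A minimal sketch of deploying such a stack from a Compose file (the file and stack names are placeholders):

```console
$ docker stack deploy --compose-file compose.yaml mystack
$ docker stack services mystack   # list the services the stack created
```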
@@ -220,12 +220,12 @@ The default mask length can be configured and is the same for all networks. It i

 ##### Overlay network size limitations

-Docker recommends creating overlay networks with `/24` blocks. The `/24` overlay network blocks, which limits the network to 256 IP addresses.
+Docker recommends creating overlay networks with `/24` blocks. The `/24` overlay network blocks limit the network to 256 IP addresses.

 This recommendation addresses [limitations with swarm mode](https://github.com/moby/moby/issues/30820).
 If you need more than 256 IP addresses, do not increase the IP block size. You can either use `dnsrr`
 endpoint mode with an external load balancer, or use multiple smaller overlay networks. See
-[Configure service discovery](#configure-service-discovery) or more information about different endpoint modes.
+[Configure service discovery](#configure-service-discovery) for more information about different endpoint modes.

 #### Configure encryption of application data

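A hedged example of following that recommendation (all names are placeholders): keep each overlay at `/24`, and switch to `dnsrr` endpoint mode when an external load balancer handles traffic:

```console
$ docker network create -d overlay --subnet 10.0.9.0/24 my-overlay
$ docker service create --name my-web --network my-overlay --endpoint-mode dnsrr nginx:alpine
```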
@@ -8,7 +8,7 @@ When the Docker Engine runs in swarm mode, manager nodes implement the
 [Raft Consensus Algorithm](http://thesecretlivesofdata.com/raft/) to manage the global cluster state.

 The reason why *Docker swarm mode* is using a consensus algorithm is to make sure that
-all the manager nodes that are in charge of managing and scheduling tasks in the cluster,
+all the manager nodes that are in charge of managing and scheduling tasks in the cluster
 are storing the same consistent state.

 Having the same consistent state across the cluster means that in case of a failure,
@@ -926,10 +926,8 @@ use it, then remove the old secret.
 ```


-7. If you want to try the running all of these examples again or just want to
-clean up after running through them, use these commands to remove the
-WordPress service, the MySQL container, the `mydata` and `wpdata` volumes,
-and the Docker secrets.
+7. Run the following commands to remove the WordPress service, the MySQL container,
+the `mydata` and `wpdata` volumes, and the Docker secrets:

 ```console
 $ docker service rm wordpress mysql
@@ -1021,7 +1019,7 @@ content from the file specified in the compose file.
 The db service uses both secrets, and the wordpress is using one.

 When you deploy, Docker mounts a file under `/run/secrets/<secret_name>` in the
-services. These files are never persisted in disk, but are managed in memory.
+services. These files are never persisted on disk, but are managed in memory.

 Each service uses environment variables to specify where the service should look
 for that secret data.
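A small sketch of that mechanism (secret, service, and image names are placeholders):

```console
$ printf "example-password" | docker secret create my_secret -        # create the secret from stdin
$ docker service create --name app --secret my_secret redis:alpine    # each task sees it at /run/secrets/my_secret, held in memory
```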
@@ -45,7 +45,7 @@ a3iixnklxuem quizzical_lamarr replicated 1/1

 Created services do not always run right away. A service can be in a pending
 state if its image is unavailable, if no node meets the requirements you
-configure for the service, or other reasons. See
+configure for the service, or for other reasons. See
 [Pending services](how-swarm-mode-works/services.md#pending-services) for more
 information.

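To see why a service is stuck, the task listing is usually the first stop (the service name is a placeholder):

```console
$ docker service ps --no-trunc my-service   # the CURRENT STATE and ERROR columns hint at why tasks are still pending
```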
@@ -125,7 +125,7 @@ nodes are able to log into the registry and pull the image.

 Credential spec files are applied at runtime, eliminating the need for host-based credential spec files or registry entries - no gMSA credentials are written to disk on worker nodes. You can make credential specs available to Docker Engine running swarm kit worker nodes before a container starts. When deploying a service using a gMSA-based config, the credential spec is passed directly to the runtime of containers in that service.

-The `--credential-spec` must be one of the following formats:
+The `--credential-spec` must be in one of the following formats:

 - `file://<filename>`: The referenced file must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `file://spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`.
 - `registry://<value-name>`: The credential spec is read from the Windows registry on the daemon’s host.
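A hedged example using the `file://` form (the service name and Windows image are placeholders):

```console
$ docker service create --credential-spec file://spec.json --name my-iis mcr.microsoft.com/windows/servercore/iis
```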
@@ -804,7 +804,7 @@ that the scheduler updates simultaneously.

 When an update to an individual task returns a state of `RUNNING`, the scheduler
 continues the update by continuing to another task until all tasks are updated.
-If, at any time during an update a task returns `FAILED`, the scheduler pauses
+If at any time during an update a task returns `FAILED`, the scheduler pauses
 the update. You can control the behavior using the `--update-failure-action`
 flag for `docker service create` or `docker service update`.

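For example (the service name is a placeholder):

```console
$ docker service update --update-parallelism 2 --update-failure-action pause my-web
```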
@@ -830,7 +830,7 @@ after 10% of the tasks being updated fail, the update is paused.
 An individual task update is considered to have failed if the task doesn't
 start up, or if it stops running within the monitoring period specified with
 the `--update-monitor` flag. The default value for `--update-monitor` is 30
-seconds, which means that a task failing in the first 30 seconds after its
+seconds, which means that a task failing in the first 30 seconds after it's
 started counts towards the service update failure threshold, and a failure
 after that is not counted.

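A sketch of tuning that window together with the 10% threshold mentioned above (the service name is a placeholder):

```console
$ docker service update --update-monitor 60s --update-max-failure-ratio 0.1 my-web
```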
@@ -843,7 +843,7 @@ to the configuration that was in place before the most recent
 `docker service update` command.

 Other options can be combined with `--rollback`; for example,
-`--update-delay 0s` to execute the rollback without a delay between tasks:
+`--update-delay 0s`, to execute the rollback without a delay between tasks:

 ```console
 $ docker service update \
@@ -893,8 +893,8 @@ $ docker service create --name=my_redis \
 ### Give a service access to volumes or bind mounts

 For best performance and portability, you should avoid writing important data
-directly into a container's writable layer, instead using data volumes or bind
-mounts. This principle also applies to services.
+directly into a container's writable layer. You should instead use data volumes
+or bind mounts. This principle also applies to services.

 You can create two types of mounts for services in a swarm, `volume` mounts or
 `bind` mounts. Regardless of which type of mount you use, configure it using the
@@ -920,7 +920,7 @@ $ docker service create \
 <IMAGE>
 ```

-If a volume with the same `<VOLUME-NAME>` does not exist when a task is
+If a volume with the name `<VOLUME-NAME>` doesn't exist when a task is
 scheduled to a particular host, then one is created. The default volume
 driver is `local`. To use a different volume driver with this create-on-demand
 pattern, specify the driver and its options with the `--mount` flag:
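A hedged sketch of that create-on-demand pattern, not necessarily the exact example the page itself shows (volume, service, and image names are placeholders):

```console
$ docker service create \
    --mount type=volume,source=my-data,target=/srv/data,volume-driver=local \
    --name my-service \
    nginx:alpine
```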
@@ -88,13 +88,13 @@ The format of the command is:
 $ docker swarm init --default-addr-pool <IP range in CIDR> [--default-addr-pool <IP range in CIDR> --default-addr-pool-mask-length <CIDR value>]
 ```

-To create a default IP address pool with a /16 (class B) for the 10.20.0.0 network looks like this:
+The command to create a default IP address pool with a /16 (class B) for the 10.20.0.0 network looks like this:

 ```console
 $ docker swarm init --default-addr-pool 10.20.0.0/16
 ```

-To create a default IP address pool with a `/16` (class B) for the `10.20.0.0` and `10.30.0.0` networks, and to
+The command to create a default IP address pool with a `/16` (class B) for the `10.20.0.0` and `10.30.0.0` networks, and to
 create a subnet mask of `/26` for each network looks like this:

 ```console
@@ -73,7 +73,7 @@ machines.
 The `*` next to the node ID indicates that you're currently connected on
 this node.

-Docker Engine swarm mode automatically names the node for the machine host
+Docker Engine swarm mode automatically names the node with the machine host
 name. The tutorial covers other columns in later steps.

 ## What's next?
@@ -40,7 +40,7 @@ Redis 3.0.7 container image using rolling updates.

 By default, when an update to an individual task returns a state of
 `RUNNING`, the scheduler schedules another task to update until all tasks
-are updated. If, at any time during an update a task returns `FAILED`, the
+are updated. If at any time during an update a task returns `FAILED`, the
 scheduler pauses the update. You can control the behavior using the
 `--update-failure-action` flag for `docker service create` or
 `docker service update`.
@@ -10,7 +10,7 @@ access to the encrypted Raft logs. One of the reasons this feature was introduce
 was in support of the [Docker secrets](secrets.md) feature.

 When Docker restarts, both the TLS key used to encrypt communication among swarm
-nodes, and the key used to encrypt and decrypt Raft logs on disk, are loaded
+nodes and the key used to encrypt and decrypt Raft logs on disk are loaded
 into each manager node's memory. Docker has the ability to protect the mutual TLS
 encryption key and the key used to encrypt and decrypt Raft logs at rest, by
 allowing you to take ownership of these keys and to require manual unlocking of
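That manual unlocking is the autolock feature; a brief sketch of turning it on and unlocking a manager after its daemon restarts:

```console
$ docker swarm update --autolock=true   # Docker prints an unlock key; store it outside the swarm
$ docker swarm unlock                   # required on each manager node after its Docker daemon restarts
```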