mirror of https://github.com/docker/docs.git
Explain placement constraints and preferences (#4635)
parent c450374aa7 · commit 3c614d655f

---
description: Deploy services to a swarm
keywords: guide, swarm mode, swarm, service
title: Deploy services to a swarm
toc_max: 4
---

Swarm services use a *declarative* model, which means that you define the
desired state of the service, and rely upon Docker to maintain this state. The
state includes information such as (but not limited to):

To create a service with access to Docker-managed secrets, use the `--secret`
flag. For more information, see
[Manage sensitive strings (secrets) for Docker services](secrets.md).

### Control service placement

Swarm services provide a few different ways for you to control the scale and
placement of services on different nodes.

- You can specify whether the service needs to run a specific number of replicas
  or should run globally on every worker node. See
  [Replicated or global services](#replicated-or-global-services).

- You can configure the service's
  [CPU or memory requirements](#reserve-memory-or-cpus-for-a-service), and the
  service will only run on nodes which can meet those requirements.

- [Placement constraints](#placement-constraints) let you configure the service
  to run only on nodes with specific (arbitrary) metadata set, and cause the
  deployment to fail if appropriate nodes do not exist. For instance, you can
  specify that your service should only run on nodes where an arbitrary label
  `pci_compliant` is set to `true`.

- [Placement preferences](#placement-preferences) let you apply an arbitrary
  label with a range of values to each node, and spread your service's tasks
  across those nodes using an algorithm. Currently, the only supported algorithm
  is `spread`, which tries to place tasks evenly. For instance, if you label
  each node with a `rack` label that has a value from 1 to 10 and then specify a
  placement preference keyed on `rack`, service tasks are placed as evenly as
  possible across all nodes with the `rack` label, after taking other placement
  constraints, placement preferences, and other node-specific limitations into
  account.

  Unlike constraints, placement preferences are best-effort, and a service does
  not fail to deploy if no nodes can satisfy the preference.

#### Replicated or global services

Swarm mode has two types of services: replicated and global. For replicated
services, you specify the number of replica tasks for the swarm manager to
schedule onto available nodes. For global services, the scheduler places one
task on each available node that meets the service's constraints and resource
requirements.
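
As a quick sketch (the service and image names here are illustrative), the mode
is chosen at creation time with the `--mode` flag, which defaults to
`replicated`:

```bash
# A replicated service: the scheduler places exactly 3 tasks on available nodes.
$ docker service create --name web --replicas 3 nginx

# A global service: the scheduler places one task on every available node.
$ docker service create --name monitor --mode global nginx
```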

You can control the placement of a
service based upon node attributes and metadata or engine metadata. For more
information on constraints, refer to the `docker service create`
[CLI reference](/engine/reference/commandline/service_create.md).
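
For instance (a minimal sketch; `node.role` is one of the built-in node
attributes, alongside others such as `node.hostname` and `node.id`), a
constraint can target metadata that every node already has:

```bash
# Run tasks only on worker nodes, never on managers.
$ docker service create \
  --name app \
  --constraint node.role==worker \
  nginx
```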

#### Reserve memory or CPUs for a service

To reserve a given amount of memory or number of CPUs for a service, use the
`--reserve-memory` or `--reserve-cpu` flags. If no available nodes can satisfy
the requirement (for instance, if you request 4 CPUs and no node in the swarm
has 4 CPUs), the service remains in a pending state until a node is available to
run its tasks.
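
As an illustrative sketch (the service name and values are hypothetical), a
reservation is expressed at creation time:

```bash
# Only schedule tasks on nodes that can commit 1 CPU and 512 MB of memory.
$ docker service create \
  --name my-db \
  --reserve-cpu 1 \
  --reserve-memory 512mb \
  redis:3.0.6
```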

##### Out Of Memory Exceptions (OOME)

If your service attempts to use more memory than the swarm node has available,
you may experience an Out Of Memory Exception (OOME) and a container, or the
Docker daemon, might be killed by the kernel OOM killer. To prevent this from
happening, ensure that your application runs on hosts with adequate memory and
see
[Understand the risks of running out of memory](/engine/admin/resource_constraints.md#understand-the-risks-of-running-out-of-memory).
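
One way to reduce the risk (a sketch; the service name and values are
illustrative) is to pair a memory reservation with a hard limit, so a runaway
task is constrained before it can exhaust the node:

```bash
# Reserve 256 MB per task, and cap each task at 512 MB.
$ docker service create \
  --name my-app \
  --reserve-memory 256mb \
  --limit-memory 512mb \
  nginx
```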

Swarm services allow you to use resource constraints, placement preferences, and
labels to ensure that your service is deployed to the appropriate swarm nodes.

#### Placement constraints

Use placement constraints to control the nodes a service can be assigned to. In
the following example, the service only runs on nodes with the
[label](/engine/swarm/manage-nodes.md#add-or-remove-label-metadata)
`region` set to `east`. If no appropriately labeled nodes are available,
deployment will fail. The `--constraint` flag uses an equality operator
(`==` or `!=`). It is possible that all replicas will run on the same node, or
each node will only run one replica, or that some nodes won't run any replicas.

```bash
$ docker service create \
  --name my-nginx \
  --replicas 5 \
  --constraint node.labels.region==east \
  nginx
```

You can also use the `constraint` service-level key in a `docker-compose.yml`
file.
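
For a swarm stack, a minimal sketch (the file contents and stack name are
illustrative, using the compose file version 3 `deploy.placement.constraints`
form deployed with `docker stack deploy`) might look like this:

```bash
$ cat docker-compose.yml
version: "3"
services:
  my-nginx:
    image: nginx
    deploy:
      placement:
        constraints:
          - node.labels.region == east

$ docker stack deploy --compose-file docker-compose.yml my-stack
```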

If you specify multiple placement constraints, the service will only deploy onto
nodes where they are all met. The following example limits the service to run on
nodes where `region` is set to `east` and `type` is not set to `devel`:

```bash
$ docker service create \
  --name my-nginx \
  --replicas 5 \
  --constraint node.labels.region==east \
  --constraint node.labels.type!=devel \
  nginx
```

You can also use placement constraints in conjunction with placement preferences
and CPU/memory constraints. Be careful not to use settings that are not possible
to fulfill.
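
As a combined sketch (values are illustrative), a service can require both a
matching label and a resource reservation; a node must satisfy all of them
before it receives a task:

```bash
$ docker service create \
  --name my-nginx \
  --replicas 5 \
  --constraint node.labels.region==east \
  --reserve-memory 256mb \
  nginx
```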

For more information on constraints, refer to the `docker service create`
[CLI reference](/engine/reference/commandline/service_create.md).

#### Placement preferences

While [placement constraints](#placement-constraints) limit the nodes a service
can run on, _placement preferences_ try to place tasks on appropriate nodes
in an algorithmic way (currently, only spread evenly). For instance, if you
assign each node a `rack` label, you can set a placement preference to spread
the service evenly across nodes with the `rack` label, by value. This way, if
you lose a rack, the service will still be running on nodes on other racks.
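
A minimal sketch (the node names and label values are illustrative): first label
the nodes, then key a preference on that label:

```bash
# Assign each node to a rack.
$ docker node update --label-add rack=1 node-1
$ docker node update --label-add rack=2 node-2

# Spread tasks as evenly as possible across the values of the rack label.
$ docker service create \
  --name my-nginx \
  --replicas 4 \
  --placement-pref 'spread=node.labels.rack' \
  nginx
```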

Placement preferences are not strictly enforced. If no node has the label
you specify in your preference, the service will be deployed as though the
preference were not set.

The following example sets a preference to spread the deployment across nodes
based on the value of the `datacenter` label. If some nodes have
`datacenter=us-east` and others have `datacenter=us-west`, the service will be
deployed as evenly as possible across the two sets of nodes.

```bash
$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref 'spread=node.labels.datacenter' \
  redis:3.0.6
```

This uses `--placement-pref` with a `spread` strategy (currently the only
supported strategy) to spread tasks evenly over the values of the `datacenter`
node label. In this example, we assume that every node has a `datacenter` node
label attached to it. If there are three different values of this label among
nodes in the swarm, one third of the tasks will be placed on the nodes
associated with each value. This is true even if there are more nodes with one
value than another. For example, consider the following set of nodes:

- Three nodes with `node.labels.datacenter=east`
- Two nodes with `node.labels.datacenter=south`
- One node with `node.labels.datacenter=west`

Since we are spreading over the values of the `datacenter` label and the
service has 9 replicas, 3 replicas will end up in each datacenter. There are
three nodes associated with the value `east`, so each one will get one of the
three replicas reserved for this value. There are two nodes with the value
`south`, and the three replicas for this value will be divided between them,
with one receiving two replicas and another receiving just one. Finally, `west`
has a single node that will get all three replicas reserved for `west`.

If the nodes in one category (for example, those with
`node.labels.datacenter=south`) can't handle their fair share of tasks due to
constraints or resource limitations, the extra tasks will be assigned to other
nodes instead, if possible.

Both engine labels and node labels are supported by placement preferences. The
example above uses a node label, because the label is referenced with
`node.labels.datacenter`. To spread over the values of an engine label, use
`--placement-pref spread=engine.labels.<labelname>`.
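
For example (a sketch; it assumes a `datacenter` engine label has been set on
each daemon, e.g. via the `labels` key in `daemon.json`):

```bash
$ docker service create \
  --replicas 9 \
  --name redis_3 \
  --placement-pref 'spread=engine.labels.datacenter' \
  redis:3.0.6
```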

You can specify multiple placement preferences, and they are processed in the
order they are encountered. The following example sets up a service with
multiple placement preferences. Tasks are spread first over the various
datacenters, and then over racks (as indicated by the respective labels):

```bash
$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref 'spread=node.labels.datacenter' \
  --placement-pref 'spread=node.labels.rack' \
  redis:3.0.6
```

You can also use placement preferences in conjunction with placement constraints
or CPU/memory constraints. Be careful not to use settings that are not possible
to fulfill.
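
For instance (a sketch combining the labels used above; the service name is
illustrative), you might spread over racks but only among nodes in one region:

```bash
$ docker service create \
  --replicas 9 \
  --name redis_4 \
  --constraint node.labels.region==east \
  --placement-pref 'spread=node.labels.rack' \
  redis:3.0.6
```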

This diagram illustrates how placement preferences work:

![placement prefs example](images/placement_prefs.png)

When updating a service with `docker service update`, `--placement-pref-add`
appends a new placement preference after all existing placement preferences.
`--placement-pref-rm` removes an existing placement preference that matches the
argument.
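
A quick sketch (the `zone` label is illustrative; the service name comes from
the earlier examples):

```bash
# Append a new preference at the end of the list.
$ docker service update \
  --placement-pref-add 'spread=node.labels.zone' \
  redis_2

# Remove a preference that no longer applies.
$ docker service update \
  --placement-pref-rm 'spread=node.labels.zone' \
  redis_2
```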

### Configure a service's update behavior

When you create a service, you can specify a rolling update behavior for how the
swarm should apply changes to the service when you run `docker service update`.
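
As an illustrative sketch (the values are hypothetical), update behavior is
configured with flags such as `--update-delay` and `--update-parallelism`:

```bash
# Update 2 tasks at a time, waiting 10 seconds between batches.
$ docker service create \
  --name redis \
  --replicas 6 \
  --update-delay 10s \
  --update-parallelism 2 \
  redis:3.0.6
```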