Address feedback about placement constraints and prefs (#5101)

Misty Stanley-Jones 2017-10-24 16:25:25 -07:00 committed by GitHub
parent cfc6020a0a
commit 6e2ebea106
1 changed file with 21 additions and 6 deletions


@@ -501,7 +501,15 @@ placement of services on different nodes.
limitations into account.
Unlike constraints, placement preferences are best-effort, and a service will
- not fail to deploy if no nodes can satisfy the preference.
+ not fail to deploy if no nodes can satisfy the preference. If you specify a
+ placement preference for a service, nodes that match that preference are
+ ranked higher when the swarm managers decide which nodes should run the
+ service tasks. Other factors, such as high availability of the service,
+ will also influence which nodes are scheduled to run service tasks. For
+ example, if you have N nodes with the `rack` label (and some other nodes
+ without it), and your service is configured to run N+1 replicas, the extra
+ replica will be scheduled on a node that doesn't already run the service, if
+ such a node exists, regardless of whether that node has the `rack` label.
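For illustration, a minimal sketch of such a preference, assuming nodes carry a `rack` label; the service name, replica count, and image below are placeholders:

```bash
# Spread tasks as evenly as possible across the values of node.labels.rack
$ docker service create \
  --name my-web \
  --replicas 4 \
  --placement-pref 'spread=node.labels.rack' \
  nginx
```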
#### Replicated or global services
@@ -509,7 +517,9 @@ placement of services on different nodes.
Swarm mode has two types of services: replicated and global. For replicated
services, you specify the number of replica tasks for the swarm manager to
schedule onto available nodes. For global services, the scheduler places one
- task on each available node.
+ task on each available node that meets the service's
+ [placement constraints](#placement-constraints) and
+ [resource requirements](#reserve-cpu-or-memory-for-a-service).
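For illustration, a sketch of a global service that combines a placement constraint with a memory reservation, assuming a `region` node label; the service name and image are placeholders, and the `--mode` flag is described below:

```bash
# One task per matching node, but only on nodes that satisfy the constraint
# and can reserve the requested memory
$ docker service create \
  --name my-agent \
  --mode global \
  --constraint node.labels.region==east \
  --reserve-memory 128M \
  nginx
```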
You control the type of service using the `--mode` flag. If you don't specify a
mode, the service defaults to `replicated`. For replicated services, you specify
@@ -568,8 +578,11 @@ the following example, the service only runs on nodes with the
[label](engine/swarm/manage-nodes.md#add-or-remove-label-metadata)
`region` set to `east`. If no appropriately-labelled nodes are available,
deployment will fail. The `--constraint` flag uses an equality operator
- (`==` or `!=`). It is possible that all services will run on the same node, or
- each node will only run one replica, or that some nodes won't run any replicas.
+ (`==` or `!=`). For replicated services, it is possible that all replicas will
+ run on the same node, or that each node will run only one replica, or that
+ some nodes won't run any replicas. For global services, the service will run
+ on every node that meets the placement constraint and any
+ [resource requirements](#reserve-cpu-or-memory-for-a-service).
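For reference, the `region` label used below would be attached to a node with `docker node update`; a sketch with a placeholder node name:

```bash
# Add the label region=east to a node so the constraint can match it
$ docker node update --label-add region=east worker-node-1
```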
```bash
$ docker service create \
@@ -584,12 +597,12 @@ file.
If you specify multiple placement constraints, the service will only deploy onto
nodes where they are all met. The following example limits the service to run on
- nodes with `region` set to `east` and where `type` is not set to `devel`:
+ all nodes where `region` is set to `east` and `type` is not set to `devel`:
```bash
$ docker service create \
--name my-nginx \
- --replicas 5 \
+ --mode global \
--constraint node.labels.region==east \
--constraint node.labels.type!=devel \
nginx
@@ -615,6 +628,8 @@ Placement preferences are not strictly enforced. If no node has the label
you specify in your preference, the service will be deployed as though the
preference were not set.
+ > Placement preferences are ignored for global services.
The following example sets a preference to spread the deployment across nodes
based on the value of the `datacenter` label. If some nodes have
`datacenter=us-east` and others have `datacenter=us-west`, the service will be