Updating READMEs

Signed-off-by: Mary Anthony <mary@docker.com>
Mary Anthony 2016-01-06 18:58:12 -08:00
parent 2aff182135
commit 07031ed7ba
5 changed files with 36 additions and 875 deletions

View File

@ -1,38 +1,9 @@
---
page_title: Docker Swarm API
page_description: Swarm API
page_keywords: docker, swarm, clustering, api
---
# Docker Swarm API README
# Docker Swarm API
The Docker Swarm API is mostly compatible with the [Docker Remote API](https://docs.docker.com/reference/api/docker_remote_api/). This document is an overview of the differences between the Swarm API and the Docker Remote API.
## Endpoints which behave differently
* `GET "/containers/{name:.*}/json"`: New field `Node` added:
```json
"Node": {
"Id": "ODAI:IC6Q:MSBL:TPB5:HIEE:6IKC:VCAM:QRNH:PRGX:ERZT:OK46:PMFX",
"Ip": "0.0.0.0",
"Addr": "http://0.0.0.0:4243",
"Name": "vagrant-ubuntu-saucy-64",
},
```
* `GET "/containers/{name:.*}/json"`: `HostIP` replaced by the the actual Node's IP if `HostIP` is `0.0.0.0`
* `GET "/containers/json"`: Node's name prepended to the container name.
* `GET "/containers/json"`: `HostIP` replaced by the the actual Node's IP if `HostIP` is `0.0.0.0`
* `GET "/containers/json"` : Containers started from the `swarm` official image are hidden by default, use `all=1` to display them.
* `GET "/images/json"` : Use '--filter node=\<Node name\>' to show images of the specific node.
## Docker Swarm documentation index
- [User guide](https://docs.docker.com/swarm/)
- [Discovery options](https://docs.docker.com/swarm/discovery/)
- [Scheduler strategies](https://docs.docker.com/swarm/scheduler/strategy/)
- [Scheduler filters](https://docs.docker.com/swarm/scheduler/filter/)
The Docker Swarm API is mostly compatible with the [Docker Remote
API](https://docs.docker.com/reference/api/docker_remote_api/). To read the
end-user API documentation, visit [the Swarm API documentation on
docs.docker.com](https://docs.docker.com/swarm/api/swarm-api/). If you want to
modify the Swarm API documentation, start with [the `docs/api/swarm-api.md`
file](https://github.com/docker/swarm/blob/master/docs/api/swarm-api.md) in this
project.

View File

@ -1,286 +1,8 @@
---
page_title: Docker Swarm discovery
page_description: Swarm discovery
page_keywords: docker, swarm, clustering, discovery
---
# Discovery
Docker Swarm comes with multiple Discovery backends.
## Backends
You use a hosted discovery service with Docker Swarm. The service
maintains a list of IPs in your swarm. Several backends are
available, such as `etcd`, `consul`, and `zookeeper`, depending on
what is best suited for your environment. You can even use a static
file. Docker Hub also provides a hosted discovery service which you
can use.
### Hosted Discovery with Docker Hub
This example uses the hosted discovery service on Docker Hub. It is not
meant to be used in production scenarios but for development/testing only.
Using Docker Hub's hosted discovery service requires that each node in the
swarm is connected to the internet. To create your swarm:
First we create a cluster.
```bash
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>
```
Then, start the Swarm agent on each node to join it to the cluster.
```bash
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (e.g. 192.168.0.X),
# as long as the swarm manager can access it.
$ swarm join --advertise=<node_ip:2375> token://<cluster_id>
```
Finally, we start the Swarm manager. This can be on any machine or even
your laptop.
```bash
$ swarm manage -H tcp://<swarm_ip:swarm_port> token://<cluster_id>
```
You can then use regular Docker commands to interact with your swarm.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can also list the nodes in your cluster.
```bash
swarm list token://<cluster_id>
<node_ip:2375>
```
### Using a static file describing the cluster
For each of your nodes, add a line to a file. The node IP address
doesn't need to be public as long as the Swarm manager can access it.
```bash
echo <node_ip1:2375> >> /tmp/my_cluster
echo <node_ip2:2375> >> /tmp/my_cluster
echo <node_ip3:2375> >> /tmp/my_cluster
```
Then start the Swarm manager on any machine.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster
```
And then use the regular Docker commands.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can list the nodes in your cluster.
```bash
$ swarm list file:///tmp/my_cluster
<node_ip1:2375>
<node_ip2:2375>
<node_ip3:2375>
```
### Using etcd
On each of your nodes, start the Swarm agent. The node IP address
doesn't have to be public as long as the swarm manager can access it.
```bash
swarm join --advertise=<node_ip:2375> etcd://<etcd_ip>/<path>
```
Start the manager on any machine or your laptop.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> etcd://<etcd_ip>/<path>
```
And then use the regular Docker commands.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can list the nodes in your cluster.
```bash
swarm list etcd://<etcd_ip>/<path>
<node_ip:2375>
```
### Using consul
On each of your nodes, start the Swarm agent. The node IP address
doesn't need to be public as long as the Swarm manager can access it.
```bash
swarm join --advertise=<node_ip:2375> consul://<consul_addr>/<path>
```
Start the manager on any machine or your laptop.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> consul://<consul_addr>/<path>
```
And then use the regular Docker commands.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can list the nodes in your cluster.
```bash
swarm list consul://<consul_addr>/<path>
<node_ip:2375>
```
### Using zookeeper
On each of your nodes, start the Swarm agent. The node IP doesn't have
to be public as long as the swarm manager can access it.
```bash
swarm join --advertise=<node_ip:2375> zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
```
Start the manager on any machine or your laptop.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
```
You can then use the regular Docker commands.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can list the nodes in the cluster.
```bash
swarm list zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
<node_ip:2375>
```
### Using a static list of IP addresses
Start the manager on any machine or your laptop.
```bash
swarm manage -H <swarm_ip:swarm_port> nodes://<node_ip1:2375>,<node_ip2:2375>
```
Or
```bash
swarm manage -H <swarm_ip:swarm_port> <node_ip1:2375>,<node_ip2:2375>
```
Then use the regular Docker commands.
```bash
docker -H <swarm_ip:swarm_port> info
docker -H <swarm_ip:swarm_port> run ...
docker -H <swarm_ip:swarm_port> ps
docker -H <swarm_ip:swarm_port> logs ...
...
```
### Range pattern for IP addresses
The `file` and `nodes` discoveries support a range pattern to specify IP
addresses. For example, `10.0.0.[10:200]` expands to the list of nodes from
`10.0.0.10` to `10.0.0.200`.
With the `file` discovery method:
```bash
$ echo "10.0.0.[11:100]:2375" >> /tmp/my_cluster
$ echo "10.0.1.[15:20]:2375" >> /tmp/my_cluster
$ echo "192.168.1.2:[2:20]375" >> /tmp/my_cluster
```
Then start the manager.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster
```
And with the `nodes` discovery method:
```bash
swarm manage -H <swarm_ip:swarm_port> "nodes://10.0.0.[10:200]:2375,10.0.1.[2:250]:2375"
```
## Contributing a new discovery backend
Contributing a new discovery backend is easy: simply implement this
interface:
```go
type Discovery interface {
    Initialize(string, int) error
    Fetch() ([]string, error)
    Watch(WatchCallback)
    Register(string) error
}
```
### Initialize
The parameters are the discovery location (without the scheme) and a heartbeat interval (in seconds).
### Fetch
Returns the list of all the nodes from the discovery.
### Watch
Triggers an update (`Fetch`). This can happen either via a timer (as the
`token` backend does) or by using backend-specific features (as `etcd` does).
### Register
Adds a new node to the discovery service.
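Once a backend is implemented, it plugs into the same URL pattern the built-in backends use. As a sketch only (the `mybackend` scheme below is hypothetical, not an existing backend):
```bash
# On each node, register with the hypothetical backend.
swarm join --advertise=<node_ip:2375> mybackend://<backend_addr>/<path>

# Start the manager against the same discovery URL.
swarm manage -H tcp://<swarm_ip:swarm_port> mybackend://<backend_addr>/<path>

# List the nodes the backend currently knows about.
swarm list mybackend://<backend_addr>/<path>
```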
## Docker Swarm documentation index
- [User guide](../docs/index.md)
- [Scheduler strategies](../docs/scheduler/strategy.md)
- [Scheduler filters](../docs/scheduler/filter.md)
- [Swarm API](../docs/api/swarm-api.md)
# Contribute a new discovery backend
Docker Swarm comes with multiple Discovery backends. To read the end-user
documentation, visit the [Swarm discovery documentation on
docs.docker.com](https://docs.docker.com/swarm/discovery/). If you want to
modify the discovery end-user documentation, start with [the `docs/discovery.md`
file](https://github.com/docker/swarm/blob/master/docs/discovery.md) in this
project.

View File

@ -1,29 +1,7 @@
# discovery.hub.docker.com
Docker Swarm comes with a simple discovery service built into [Docker Hub](http://hub.docker.com).
##### Create a new cluster
`-> POST https://discovery.hub.docker.com/v1/clusters`
`<- <token>`
##### Add new nodes to a cluster
`-> POST https://discovery.hub.docker.com/v1/clusters/<token>?ttl=<ttl> Request body: "<ip>:<port1>"`
`<- OK`
`-> POST https://discovery.hub.docker.com/v1/clusters/<token>?ttl=<ttl> Request body: "<ip>:<port2>"`
`<- OK`
##### List nodes in a cluster
`-> GET https://discovery.hub.docker.com/v1/clusters/<token>`
`<- ["<ip>:<port1>", "<ip>:<port2>"]`
##### Delete a cluster (all the nodes in a cluster)
`-> DELETE https://discovery.hub.docker.com/v1/clusters/<token>`
`<- OK`
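As a sketch (the token, TTL, and addresses are placeholders), the same flow can be exercised with `curl`:
```bash
# Create a new cluster; the response body is your <token>.
curl -X POST https://discovery.hub.docker.com/v1/clusters

# Add two nodes to the cluster.
curl -X POST -d "<ip>:<port1>" "https://discovery.hub.docker.com/v1/clusters/<token>?ttl=<ttl>"
curl -X POST -d "<ip>:<port2>" "https://discovery.hub.docker.com/v1/clusters/<token>?ttl=<ttl>"

# List the nodes registered under the token.
curl https://discovery.hub.docker.com/v1/clusters/<token>

# Delete the cluster and all of its nodes.
curl -X DELETE https://discovery.hub.docker.com/v1/clusters/<token>
```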
Docker Swarm comes with a simple discovery service built into the [Docker
Hub](http://hub.docker.com). To read the end-user API documentation, visit [the
Swarm discovery documentation on
docs.docker.com](https://docs.docker.com/swarm/discovery/). If you want to
modify the discovery documentation, start with [the `docs/discovery.md` file](https://github.com/docker/swarm/blob/master/docs/discovery.md) in this project.

View File

@ -1,408 +1,8 @@
---
page_title: Docker Swarm filters
page_description: Swarm filters
page_keywords: docker, swarm, clustering, filters
---
# Filters
The `Docker Swarm` scheduler comes with multiple filters.
The following filters are currently used to schedule containers on a subset of nodes:
* [Constraint](#constraint-filter)
* [Affinity](#affinity-filter)
* [Port](#port-filter)
* [Dependency](#dependency-filter)
* [Health](#health-filter)
You can choose the filter(s) you want to use with the `--filter` flag of `swarm manage`.
## Constraint Filter
Constraints are key/value pairs associated with particular nodes. You can see them
as *node tags*.
When creating a container, the user can select a subset of nodes that should be
considered for scheduling by specifying one or more sets of matching key/value pairs.
This approach has several practical use cases such as:
* Selecting specific host properties (such as `storage=ssd`, in order to schedule
containers on specific hardware).
* Tagging nodes based on their physical location (`region=us-east`, to force
containers to run on a given location).
* Logical cluster partitioning (`environment=production`, to split a cluster into
sub-clusters with different properties).
To tag a node with a specific set of key/value pairs, you must pass a list of
`--label` options when starting the Docker daemon.
For instance, let's start `node-1` with the `storage=ssd` label:
```bash
$ docker -d --label storage=ssd
$ swarm join --advertise=192.168.0.42:2375 token://XXXXXXXXXXXXXXXXXX
```
Again, but this time `node-2` with `storage=disk`:
```bash
$ docker -d --label storage=disk
$ swarm join --advertise=192.168.0.43:2375 token://XXXXXXXXXXXXXXXXXX
```
Once the nodes are registered with the cluster, the master pulls their respective
tags and will take them into account when scheduling new containers.
Let's start a MySQL server and make sure it gets good I/O performance by selecting
nodes with flash drives:
```bash
$ docker run -d -P -e constraint:storage==ssd --name db mysql
f8b693db9cd6
$ docker ps
CONTAINER ID  IMAGE         COMMAND   CREATED                 STATUS   PORTS                         NODE    NAMES
f8b693db9cd6  mysql:latest  "mysqld"  Less than a second ago  running  192.168.0.42:49178->3306/tcp  node-1  db
```
In this case, the master selected all nodes that met the `storage=ssd` constraint
and applied resource management on top of them, as discussed earlier.
`node-1` was selected in this example since it's the only host with flash storage.
Now we want to run an Nginx frontend in our cluster. However, we don't want
*flash* drives since we'll mostly write logs to disk.
```bash
$ docker run -d -P -e constraint:storage==disk --name frontend nginx
963841b138d8
$ docker ps
CONTAINER ID  IMAGE         COMMAND   CREATED                 STATUS   PORTS                         NODE    NAMES
963841b138d8  nginx:latest  "nginx"   Less than a second ago  running  192.168.0.43:49177->80/tcp    node-2  frontend
f8b693db9cd6  mysql:latest  "mysqld"  Up About a minute       running  192.168.0.42:49178->3306/tcp  node-1  db
```
The scheduler selected `node-2` since it was started with the `storage=disk` label.
## Standard Constraints
Additionally, a standard set of constraints can be used when scheduling containers
without having to set labels when starting the node. Those tags are sourced from
`docker info` and currently include the following (a usage sketch follows this list):
* node ID or node name (using the `node` key)
* storagedriver
* executiondriver
* kernelversion
* operatingsystem
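For instance (a sketch; the node name `node-1` is only an example), you can pin a container to a specific node with the built-in `node` constraint:
```bash
# Run the container only on the node named node-1, using the standard constraint.
docker run -d -P -e constraint:node==node-1 --name db mysql
```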
## Affinity filter
You use affinity filters to create "attractions" between containers. For
example, you can run a container and instruct it to locate and run next to
another container based on an identifier, an image, or a label. These
attractions ensure that containers run on the same network node &mdash; without
you having to know what each node is running.
#### Container affinity
You can schedule a new container to run next to another based on a container
name or ID. For example, you can start a container called `frontend` running
`nginx`:
```bash
$ docker run -d -p 80:80 --name frontend nginx
87c4376856a8
$ docker ps
CONTAINER ID  IMAGE          COMMAND   CREATED                 STATUS   PORTS                    NODE    NAMES
87c4376856a8  nginx:latest   "nginx"   Less than a second ago  running  192.168.0.42:80->80/tcp  node-1  frontend
```
Then, using the `-e affinity:container==frontend` flag, schedule a second container to
locate and run next to `frontend`.
```bash
$ docker run -d --name logger -e affinity:container==frontend logger
87c4376856a8
$ docker ps
CONTAINER ID  IMAGE          COMMAND   CREATED                 STATUS   PORTS                    NODE    NAMES
87c4376856a8  nginx:latest   "nginx"   Less than a second ago  running  192.168.0.42:80->80/tcp  node-1  frontend
963841b138d8  logger:latest  "logger"  Less than a second ago  running                           node-1  logger
```
Because of name affinity, the `logger` container ends up on `node-1` along with
the `frontend` container. Instead of the `frontend` name you could have supplied its
ID as follows:
```bash
docker run -d --name logger -e affinity:container==87c4376856a8 logger
```
#### Image affinity
You can schedule a container to run only on nodes where a specific image is already pulled.
```bash
$ docker -H node-1:2375 pull redis
$ docker -H node-2:2375 pull mysql
$ docker -H node-3:2375 pull redis
```
Only `node-1` and `node-3` have the `redis` image. Specify a `-e
affinity:image==redis` filter to schedule several additional containers to run on
these nodes.
```bash
$ docker run -d --name redis1 -e affinity:image==redis redis
$ docker run -d --name redis2 -e affinity:image==redis redis
$ docker run -d --name redis3 -e affinity:image==redis redis
$ docker run -d --name redis4 -e affinity:image==redis redis
$ docker run -d --name redis5 -e affinity:image==redis redis
$ docker run -d --name redis6 -e affinity:image==redis redis
$ docker run -d --name redis7 -e affinity:image==redis redis
$ docker run -d --name redis8 -e affinity:image==redis redis
$ docker ps
CONTAINER ID  IMAGE         COMMAND  CREATED                 STATUS   PORTS  NODE    NAMES
87c4376856a8  redis:latest  "redis"  Less than a second ago  running         node-1  redis1
1212386856a8  redis:latest  "redis"  Less than a second ago  running         node-1  redis2
87c4376639a8  redis:latest  "redis"  Less than a second ago  running         node-3  redis3
1234376856a8  redis:latest  "redis"  Less than a second ago  running         node-1  redis4
86c2136253a8  redis:latest  "redis"  Less than a second ago  running         node-3  redis5
87c3236856a8  redis:latest  "redis"  Less than a second ago  running         node-3  redis6
87c4376856a8  redis:latest  "redis"  Less than a second ago  running         node-3  redis7
963841b138d8  redis:latest  "redis"  Less than a second ago  running         node-1  redis8
```
As you can see here, the containers were only scheduled on nodes that had the
`redis` image. Instead of the image name, you could have specified the image ID.
```bash
$ docker images
REPOSITORY  TAG     IMAGE ID      CREATED     VIRTUAL SIZE
redis       latest  06a1f75304ba  2 days ago  111.1 MB
$ docker run -d --name redis1 -e affinity:image==06a1f75304ba redis
```
#### Label affinity
Label affinity allows you to set up an attraction based on a container's label.
For example, you can run an `nginx` container with the `com.example.type=frontend` label.
```bash
$ docker run -d -p 80:80 --label com.example.type=frontend nginx
87c4376856a8
$ docker ps --filter "label=com.example.type=front"
CONTAINER ID  IMAGE          COMMAND   CREATED                 STATUS   PORTS                    NODE    NAMES
87c4376856a8  nginx:latest   "nginx"   Less than a second ago  running  192.168.0.42:80->80/tcp  node-1  trusting_yonath
```
Then, use `-e affinity:com.example.type==frontend` to schedule a container next to
the container with the `com.example.type==frontend` label.
```bash
$ docker run -d -e affinity:com.example.type==frontend logger
87c4376856a8
$ docker ps
CONTAINER ID  IMAGE          COMMAND   CREATED                 STATUS   PORTS                    NODE    NAMES
87c4376856a8  nginx:latest   "nginx"   Less than a second ago  running  192.168.0.42:80->80/tcp  node-1  trusting_yonath
963841b138d8  logger:latest  "logger"  Less than a second ago  running                           node-1  happy_hawking
```
The `logger` container ends up on `node-1` because of its affinity with the `com.example.type==frontend` label.
#### Expression Syntax
An affinity or a constraint expression consists of a `key` and a `value`.
A `key` must conform to the alphanumeric pattern, beginning with a letter or an underscore.
A `value` must be one of the following:
* An alphanumeric string, which may contain dots, hyphens, and underscores.
* A globbing pattern, e.g., `abc*`.
* A regular expression in the form of `/regexp/`. Swarm supports Go's regular expression syntax.
Currently Swarm supports the following affinity/constraint operators: `==` and `!=`.
For example,
* `constraint:node==node1` will match node `node1`.
* `constraint:node!=node1` will match all nodes, except `node1`.
* `constraint:region!=us*` will match all nodes outside the regions prefixed with `us`.
* `constraint:node==/node[12]/` will match nodes `node1` and `node2`.
* `constraint:node==/node\d/` will match all nodes with `node` + 1 digit.
* `constraint:node!=/node-[01]/` will match all nodes, except `node-0` and `node-1`.
* `constraint:node!=/foo\[bar\]/` will match all nodes, except `foo[bar]`. You can see the use of escape characters here.
* `constraint:node==/(?i)node1/` will match node `node1` case-insensitively, so `NoDe1` or `NODE1` will also match.
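For illustration (a sketch, assuming a cluster with nodes named `node-1` and `node-2`), a regular-expression constraint is passed like any other constraint; quote it so the shell does not expand the brackets:
```bash
# Schedule on either node-1 or node-2, using a regular expression constraint.
docker run -d -P -e 'constraint:node==/node-[12]/' --name web nginx
```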
#### Soft Affinities/Constraints
By default, affinities and constraints are hard-enforced: if an affinity or
constraint is not met, the container won't be scheduled. With soft
affinities/constraints, the scheduler tries to meet the rule; if it cannot,
it discards the filter and schedules the container according
to the scheduler's strategy.
Soft affinities/constraints are expressed with a **~** in the
expression, for example:
```bash
$ docker run -d --name redis1 -e affinity:image==~redis redis
```
If none of the nodes in the cluster has the image `redis`, the scheduler will
discard the affinity and schedule according to the strategy.
```bash
$ docker run -d --name redis2 -e constraint:region==~us* redis
```
If none of the nodes in the cluster belongs to the `us` region, the scheduler will
discard the constraint and schedule according to the strategy.
```bash
$ docker run -d --name redis5 -e affinity:container!=~redis* redis
```
The affinity filter will be used to schedule the new `redis5` container on a
node that doesn't have a container whose name matches
`redis*`. If every node in the cluster has a `redis*` container, the scheduler
will discard the affinity rule and schedule according to the strategy.
## Port Filter
With this filter, `ports` are considered unique resources.
```bash
$ docker run -d -p 80:80 nginx
87c4376856a8
$ docker ps
CONTAINER ID  IMAGE         COMMAND  PORTS                    NODE    NAMES
87c4376856a8  nginx:latest  "nginx"  192.168.0.42:80->80/tcp  node-1  prickly_engelbart
```
Docker Swarm selects a node where public port `80` is available and schedules
a container on it, in this case `node-1`.
Attempting to run another container that uses public port `80` will result in
the cluster selecting a different node, since that port is already occupied on `node-1`:
```bash
$ docker run -d -p 80:80 nginx
963841b138d8
$ docker ps
CONTAINER ID  IMAGE         COMMAND  PORTS                    NODE    NAMES
963841b138d8  nginx:latest  "nginx"  192.168.0.43:80->80/tcp  node-2  dreamy_turing
87c4376856a8  nginx:latest  "nginx"  192.168.0.42:80->80/tcp  node-1  prickly_engelbart
```
Again, repeating the same command will result in the selection of `node-3`, since
port `80` is available on neither `node-1` nor `node-2`:
```bash
$ docker run -d -p 80:80 nginx
963841b138d8
$ docker ps
CONTAINER ID  IMAGE         COMMAND  PORTS                    NODE    NAMES
f8b693db9cd6  nginx:latest  "nginx"  192.168.0.44:80->80/tcp  node-3  stoic_albattani
963841b138d8  nginx:latest  "nginx"  192.168.0.43:80->80/tcp  node-2  dreamy_turing
87c4376856a8  nginx:latest  "nginx"  192.168.0.42:80->80/tcp  node-1  prickly_engelbart
```
Finally, Docker Swarm will refuse to run another container that requires port
`80` since not a single node in the cluster has it available:
```bash
$ docker run -d -p 80:80 nginx
2014/10/29 00:33:20 Error response from daemon: no resources available to schedule container
```
### Port filter in Host Mode
Docker in host mode, running with `--net=host`, differs from the
default `bridge` mode in that the `host` mode does not perform any port
binding. Therefore, it requires that you explicitly expose one or more port numbers
(using `EXPOSE` in the `Dockerfile` or `--expose` on the command line).
Swarm uses this information in conjunction with the `host`
mode to choose an available node for a new container.
For example, the following commands start `nginx` on a 3-node cluster.
```bash
$ docker run -d --expose=80 --net=host nginx
640297cb29a7
$ docker run -d --expose=80 --net=host nginx
7ecf562b1b3f
$ docker run -d --expose=80 --net=host nginx
09a92f582bc2
```
Port binding information is not available through the `docker ps` command because all the containers were started in `host` mode.
```bash
$ docker ps
CONTAINER ID  IMAGE    COMMAND               CREATED                 STATUS         PORTS  NAMES
640297cb29a7  nginx:1  "nginx -g 'daemon of  Less than a second ago  Up 30 seconds         box3/furious_heisenberg
7ecf562b1b3f  nginx:1  "nginx -g 'daemon of  Less than a second ago  Up 28 seconds         box2/ecstatic_meitner
09a92f582bc2  nginx:1  "nginx -g 'daemon of  46 seconds ago          Up 27 seconds         box1/mad_goldstine
```
Swarm will refuse the operation when you try to instantiate a fourth container.
```bash
$ docker run -d --expose=80 --net=host nginx
FATA[0000] Error response from daemon: unable to find a node with port 80/tcp available in the Host mode
```
However, port binding to a different value, e.g. `81`, is still allowed.
```bash
$ docker run -d -p 81:80 nginx:latest
832f42819adc
$ docker ps
CONTAINER ID  IMAGE    COMMAND               CREATED                 STATUS                 PORTS                                NAMES
832f42819adc  nginx:1  "nginx -g 'daemon of  Less than a second ago  Up Less than a second  443/tcp, 192.168.136.136:81->80/tcp  box3/thirsty_hawking
640297cb29a7  nginx:1  "nginx -g 'daemon of  8 seconds ago           Up About a minute                                           box3/furious_heisenberg
7ecf562b1b3f  nginx:1  "nginx -g 'daemon of  13 seconds ago          Up About a minute                                           box2/ecstatic_meitner
09a92f582bc2  nginx:1  "nginx -g 'daemon of  About a minute ago      Up About a minute                                           box1/mad_goldstine
```
## Dependency Filter
This filter co-schedules dependent containers on the same node.
Currently, dependencies are declared as follows:
- Shared volumes: `--volumes-from=dependency`
- Links: `--link=dependency:alias`
- Shared network stack: `--net=container:dependency`
Swarm will attempt to co-locate the dependent container on the same node. If this
cannot be done (because the dependency doesn't exist, or because the
node doesn't have enough resources), it will prevent the container creation.
The combination of multiple dependencies will be honored if possible. For
instance, `--volumes-from=A --net=container:B` will attempt to co-locate the
container on the same node as `A` and `B`. If those containers are running on
different nodes, Swarm will prevent you from scheduling the container.
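As a sketch (the `db`, `web`, and `backup` names are just examples), each dependent container below lands on the node where `db` runs:
```bash
# Start a database container somewhere in the cluster.
docker run -d --name db mysql

# The linked container is co-scheduled on the same node as `db`.
docker run -d --name web --link=db:db nginx

# A container using --volumes-from is likewise placed on the node running `db`.
docker run -d --name backup --volumes-from=db busybox sleep 3600
```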
## Health Filter
This filter will prevent scheduling containers on unhealthy nodes.
## Docker Swarm documentation index
- [User guide](https://docs.docker.com/swarm/)
- [Discovery options](https://docs.docker.com/swarm/discovery/)
- [Scheduler strategies](https://docs.docker.com/swarm/scheduler/strategy/)
- [Swarm API](https://docs.docker.com/swarm/API/)
# Filters README
The `Docker Swarm` scheduler comes with multiple filters. To read the end-user
filters documentation, visit [the Swarm filter documentation on
docs.docker.com](https://docs.docker.com/swarm/scheduler/filter/). If you want
to modify the filter documentation, start with [the `docs/scheduler`
directory](https://github.com/docker/swarm/blob/master/docs/scheduler/index.md)
in this project.

View File

@ -1,118 +1,8 @@
---
page_title: Docker Swarm strategies
page_description: Swarm strategies
page_keywords: docker, swarm, clustering, strategies
---
# Strategies README
# Strategies
The Docker Swarm scheduler features multiple strategies for ranking nodes. The
strategy you choose determines how Swarm computes ranking. When you run a new
container, Swarm chooses to place it on the node with the highest computed ranking
for your chosen strategy.
To choose a ranking strategy, pass the `--strategy` flag and a strategy value to
the `swarm manage` command. Swarm currently supports these values:
* `spread`
* `binpack`
* `random`
The `spread` and `binpack` strategies compute rank according to a node's
available CPU, its RAM, and the number of containers it is running. The `random`
strategy uses no computation. It selects a node at random and is primarily
intended for debugging.
Your goal in choosing a strategy is to best optimize your swarm according to
your company's needs.
Under the `spread` strategy, Swarm optimizes for the node with the fewest running containers.
The `binpack` strategy causes Swarm to optimize for the node which is most packed.
The `random` strategy, like it sounds, chooses nodes at random regardless of their available CPU or RAM.
Using the `spread` strategy results in containers spread thinly over many
machines. The advantage of this strategy is that if a node goes down you only
lose a few containers.
The `binpack` strategy avoids fragmentation because it leaves room for bigger
containers on unused machines. The strategic advantage of `binpack` is that you
use fewer machines as Swarm tries to pack as many containers as it can on a
node.
If you do not specify a `--strategy`, Swarm uses `spread` by default.
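For example (a sketch; the discovery URL follows the token pattern used elsewhere in these docs), you select a strategy when starting the manager:
```bash
# Start the manager with the binpack strategy instead of the default spread.
swarm manage --strategy binpack -H tcp://<swarm_ip:swarm_port> token://<cluster_id>
```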
## Spread strategy example
In this example, your swarm is using the `spread` strategy, which optimizes for
nodes that have the fewest containers. In this swarm, both `node-1` and `node-2`
have 2G of RAM, 2 CPUs, and neither node is running a container. Under this strategy,
`node-1` and `node-2` have the same ranking.
When you run a new container, the system chooses `node-1` at random from the swarm
of two equally ranked nodes:
```bash
$ docker run -d -P -m 1G --name db mysql
f8b693db9cd6
$ docker ps
CONTAINER ID  IMAGE         COMMAND   CREATED                 STATUS   PORTS                         NODE    NAMES
f8b693db9cd6  mysql:latest  "mysqld"  Less than a second ago  running  192.168.0.42:49178->3306/tcp  node-1  db
```
Now, we start another container and ask for 1G of RAM again.
```bash
$ docker run -d -P -m 1G --name frontend nginx
963841b138d8
$ docker ps
CONTAINER ID  IMAGE         COMMAND   CREATED                 STATUS   PORTS                         NODE    NAMES
963841b138d8  nginx:latest  "nginx"   Less than a second ago  running  192.168.0.42:49177->80/tcp    node-2  frontend
f8b693db9cd6  mysql:latest  "mysqld"  Up About a minute       running  192.168.0.42:49178->3306/tcp  node-1  db
```
The container `frontend` was started on `node-2` because it was the node with the
fewest running containers. If two nodes have the same amount of available RAM and
CPUs, the `spread` strategy prefers the node with the fewest containers running.
## BinPack strategy example
In this example, let's say that both `node-1` and `node-2` have 2G of RAM and
neither is running a container. Again, the nodes are equal. When you run a new
container, the system chooses `node-1` at random from the swarm:
```bash
$ docker run -d -P -m 1G --name db mysql
f8b693db9cd6
$ docker ps
CONTAINER ID  IMAGE         COMMAND   CREATED                 STATUS   PORTS                         NODE    NAMES
f8b693db9cd6  mysql:latest  "mysqld"  Less than a second ago  running  192.168.0.42:49178->3306/tcp  node-1  db
```
Now, you start another container, asking for 1G of RAM again.
```bash
$ docker run -d -P -m 1G --name frontend nginx
963841b138d8
$ docker ps
CONTAINER ID  IMAGE         COMMAND   CREATED                 STATUS   PORTS                         NODE    NAMES
963841b138d8  nginx:latest  "nginx"   Less than a second ago  running  192.168.0.42:49177->80/tcp    node-1  frontend
f8b693db9cd6  mysql:latest  "mysqld"  Up About a minute       running  192.168.0.42:49178->3306/tcp  node-1  db
```
The system starts the new `frontend` container on `node-1` because it was the
node with the most running containers. This allows us to start a container requiring 2G
of RAM on `node-2`.
If two nodes have the same amount of available RAM and CPUs, the `binpack`
strategy prefers the node with the most running containers.
## Docker Swarm documentation index
- [User guide](https://docs.docker.com/swarm/)
- [Discovery options](https://docs.docker.com/swarm/discovery/)
- [Scheduler filters](https://docs.docker.com/swarm/scheduler/filter/)
- [Swarm API](https://docs.docker.com/swarm/api/swarm-api/)
The `Docker Swarm` scheduler comes with multiple strategies. To read the
end-user strategy documentation, visit [the Swarm strategy documentation on
docs.docker.com](https://docs.docker.com/swarm/scheduler/strategy/). If you want
to modify the strategy documentation, start with [the `docs/scheduler`
directory](https://github.com/docker/swarm/blob/master/docs/scheduler/index.md)
in this project.