Updating the Introduction

- Clarifying two types of install
- Splitting out release notes
- Splitting out two install methods
- Adding in comments from review
- Adding release notes
- Updating with abronan's comments
- Updating for vieux
- Updating with comments and pulling bash

Signed-off-by: Mary Anthony <mary@docker.com>
Mary Anthony 2015-05-23 18:35:35 -07:00
parent c1934925fe
commit 607d585adb
10 changed files with 707 additions and 592 deletions


@ -5,7 +5,7 @@ MAINTAINER Mary <mary@docker.com> (@moxiegirl)
COPY . /src
# Reset the /docs dir so we can replace the theme meta with the new repo's git info
# RUN git reset --hard
RUN grep "VERSION =" /src/version/version.go | sed 's/.*"\(.*\)".*/\1/' > /docs/VERSION

View File

@ -14,200 +14,170 @@ Docker Swarm comes with multiple Discovery backends.
First we create a cluster.
```bash
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>
```
Then we create each node and join them to the cluster.
```bash
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the swarm manager can access it.
$ swarm join --advertise=<node_ip:2375> token://<cluster_id>
```
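For example, with the cluster ID generated above and a node at an assumed address of `192.168.0.11`:
```bash
$ swarm join --advertise=192.168.0.11:2375 token://6856663cdefdec325839a4b7e1de38e8
```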
Finally, we start the Swarm manager. This can be on any machine or even
your laptop.
```bash
$ swarm manage -H tcp://<swarm_ip:swarm_port> token://<cluster_id>
```
You can then use regular Docker commands to interact with your swarm.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can also list the nodes in your cluster.
```bash
swarm list token://<cluster_id>
<node_ip:2375>
```
### Using a static file describing the cluster
For each of your nodes, add a line to a file. The node IP address
doesn't need to be public as long as the Swarm manager can access it.
```bash
echo <node_ip1:2375> >> /tmp/my_cluster
echo <node_ip2:2375> >> /tmp/my_cluster
echo <node_ip3:2375> >> /tmp/my_cluster
```
Then start the Swarm manager on any machine.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster
```
And then use the regular Docker commands.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can list the nodes in your cluster.
```bash
$ swarm list file:///tmp/my_cluster
<node_ip1:2375>
<node_ip2:2375>
<node_ip3:2375>
```
### Using etcd
On each of your nodes, start the Swarm agent. The node IP address
doesn't have to be public as long as the swarm manager can access it.
```bash
swarm join --advertise=<node_ip:2375> etcd://<etcd_addr1>,<etcd_addr2>/<optional path prefix>
```
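For example, assuming an etcd cluster reachable at `192.168.0.2:2379` and `192.168.0.3:2379`, with `/swarm` as the optional path prefix:
```bash
swarm join --advertise=192.168.0.42:2375 etcd://192.168.0.2:2379,192.168.0.3:2379/swarm
```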
Start the manager on any machine or your laptop.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> etcd://<etcd_addr1>,<etcd_addr2>/<optional path prefix>
```
And then use the regular Docker commands.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can list the nodes in your cluster.
```bash
swarm list etcd://<etcd_addr1>,<etcd_addr2>/<optional path prefix>
<node_ip:2375>
```
### Using consul
On each of your nodes, start the Swarm agent. The node IP address
doesn't need to be public as long as the Swarm manager can access it.
```bash
swarm join --advertise=<node_ip:2375> consul://<consul_addr>/<optional path prefix>
```
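For example, assuming a consul agent at `192.168.0.2:8500` and `/swarm` as the prefix:
```bash
swarm join --advertise=192.168.0.42:2375 consul://192.168.0.2:8500/swarm
```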
Start the manager on any machine or your laptop.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> consul://<consul_addr>/<optional path prefix>
```
And then use the regular Docker commands.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can list the nodes in your cluster.
```bash
swarm list consul://<consul_addr>/<optional path prefix>
<node_ip:2375>
```
### Using zookeeper
On each of your nodes, start the Swarm agent. The node IP doesn't have
to be public as long as the swarm manager can access it.
```bash
swarm join --advertise=<node_ip:2375> zk://<zookeeper_addr1>,<zookeeper_addr2>/<optional path prefix>
```
Start the manager on any machine or your laptop.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> zk://<zookeeper_addr1>,<zookeeper_addr2>/<optional path prefix>
```
You can then use the regular Docker commands.
```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```
You can list the nodes in the cluster.
```bash
swarm list zk://<zookeeper_addr1>,<zookeeper_addr2>/<optional path prefix>
<node_ip:2375>
```
### Using a static list of IP addresses
Start the manager on any machine or your laptop.
```bash
swarm manage -H <swarm_ip:swarm_port> nodes://<node_ip1:2375>,<node_ip2:2375>
```
Or
```bash
swarm manage -H <swarm_ip:swarm_port> <node_ip1:2375>,<node_ip2:2375>
```
Then use the regular Docker commands.
```bash
docker -H <swarm_ip:swarm_port> info
docker -H <swarm_ip:swarm_port> run ...
docker -H <swarm_ip:swarm_port> ps
docker -H <swarm_ip:swarm_port> logs ...
...
```
### Range pattern for IP addresses
@ -217,23 +187,19 @@ addresses, i.e., `10.0.0.[10:200]` will be a list of nodes starting from
For example, for the `file` discovery method:
```bash
$ echo "10.0.0.[11:100]:2375" >> /tmp/my_cluster
$ echo "10.0.1.[15:20]:2375" >> /tmp/my_cluster
$ echo "192.168.1.2:[2:20]375" >> /tmp/my_cluster
```
$ echo "10.0.0.[11:100]:2375" >> /tmp/my_cluster
$ echo "10.0.1.[15:20]:2375" >> /tmp/my_cluster
$ echo "192.168.1.2:[2:20]375" >> /tmp/my_cluster
Then start the manager.
```bash
swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster
```
And for the `nodes` discovery method.
```bash
swarm manage -H <swarm_ip:swarm_port> "nodes://10.0.0.[10:200]:2375,10.0.1.[2:250]:2375"
```
## Contributing a new discovery backend

BIN docs/images/virtual-box.png Normal file (binary image, 104 KiB)


@ -6,306 +6,46 @@ page_keywords: docker, swarm, clustering
# Docker Swarm
Docker Swarm is native clustering for Docker. It allows you to create and access
a pool of Docker hosts using the full suite of Docker tools. Because Docker
Swarm serves the standard Docker API, any tool that already communicates with a
Docker daemon can use Swarm to transparently scale to multiple hosts. Supported
tools include, but are not limited to, the following:
- Dokku
- Docker Compose
- Krane
- Jenkins
And of course, the Docker client itself is also supported.
Like other Docker projects, Docker Swarm follows the "swap, plug, and play"
principle. As initial development settles, an API will develop to enable
pluggable backends. This means you can swap out the scheduling backend
Docker Swarm uses out of the box for a backend you prefer. Swarm's swappable
design provides a smooth out-of-box experience for most use cases, and allows
large-scale production deployments to swap in more powerful backends, like
Mesos.
## Pre-requisites for running Swarm
You must install Docker 1.4.0 or later on all nodes. While each node's IP need not
be public, the Swarm manager must be able to access each node across the network.
To enable communication between the Swarm manager and the Swarm node agent on
each node, each node must listen on the same network interface (TCP port).
Follow the setup below to ensure you configure your nodes correctly for this
behavior.
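For example, on each node you could start the daemon listening on TCP port 2375 in addition to the default Unix socket. This is a sketch; choose the port that fits your environment:
```bash
docker -d -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
```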
> **Note**: Swarm is currently in BETA, so things are likely to change. We
> don't recommend you use it in production yet.
## Understand swarm creation
The first step to creating a swarm on your network is to pull the Docker Swarm image. Then, using Docker, you configure the swarm manager and all the nodes to run Docker Swarm. This method requires that you:
* open a TCP port on each node for communication with the swarm manager
* install Docker on each node
* create and manage TLS certificates to secure your swarm
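For example, once Docker is installed and a port is open on a node, you can verify from the swarm manager's host that the node's daemon is reachable (`192.168.0.11:2375` is an assumed node address and port):
```bash
docker -H tcp://192.168.0.11:2375 version
```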
The manual method is best suited for experienced administrators or for programmers contributing to Docker Swarm. The alternative is to use `docker-machine` to install a swarm.
Using Docker Machine, you can quickly install a Docker Swarm on cloud providers or inside your own data center. If you have VirtualBox installed on your local machine, you can quickly build and explore Docker Swarm in your local environment. This method automatically generates a certificate to secure your swarm.
Using Docker Machine is the best method for users getting started with Swarm for the first time. To try the recommended method of getting started, see [Get Started with Docker Swarm](install-w-machine.md).
If you are interested in manually installing Swarm or in contributing to it, see [Create a swarm for development](install-manual.md).
## Discovery services
To dynamically configure and manage the services in your containers, you use a discovery backend with Docker Swarm. For information on which backends are available, see the [Discovery service](https://docs.docker.com/swarm/discovery/) documentation.
## Advanced Scheduling
@ -319,7 +59,7 @@ the [Docker remote
API](http://docs.docker.com/reference/api/docker_remote_api/), and extends it
with some new endpoints.
# Getting help
Docker Swarm is still in its infancy and under active development. If you need
help, would like to contribute, or simply want to talk about the project with
@ -331,4 +71,4 @@ like-minded individuals, we have a number of open channels for communication.
* To contribute code or documentation changes: please submit a [pull request on Github](https://github.com/docker/machine/pulls).
For more information and resources, please visit the [Getting Help project page](https://docs.docker.com/project/get-help/).

docs/install-manual.md Normal file (133 lines)

@ -0,0 +1,133 @@
# Create a swarm for development
This section tells you how to create a Docker Swarm on your network to use only for debugging, testing, or development purposes. You can also use this type of installation if you are developing custom applications for Docker Swarm or contributing to it.
> **Caution**: Only use this setup if your network environment is secured by a firewall or other measures.
## Prerequisites
You install Docker Swarm on a single system, which is known as your Docker Swarm manager. You create the cluster, or swarm, on one or more additional nodes on your network. Each node in your swarm must:
* be accessible by the swarm manager across your network
* have a TCP port open to listen for the swarm manager
You can run Docker Swarm on Linux 64-bit architectures. You can also install and run it on 64-bit Windows and Mac OS X, but these architectures are *not* regularly tested for compatibility in the BETA phase.
Take a moment and identify the systems on your network that you intend to use. Ensure each node meets the requirements listed above.
## Pull the swarm image and create a cluster
The easiest way to get started with Swarm is to use the
[official Docker image](https://registry.hub.docker.com/_/swarm/).
1. Pull the swarm image.
$ docker pull swarm
2. Create a Swarm cluster using the `docker` command.
$ docker run --rm swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>
The `create` command returns a unique cluster ID (`cluster_id`). You'll need
this ID when starting the Docker Swarm agent on a node.
## Create swarm nodes
Each Swarm node will run a Swarm node agent. The agent registers the referenced
Docker daemon, monitors it, and updates the discovery backend with the node's status.
This example uses the Docker Hub based `token` discovery service. Log into **each node** and do the following.
1. Start the Docker daemon with the `-H` flag. This ensures that the Docker remote API on *Swarm Agents* is available over TCP for the *Swarm Manager*.
$ docker -H tcp://0.0.0.0:2375 -d
2. Register the Swarm agents to the discovery service. The node's IP must be accessible from the Swarm Manager. Use the following command and replace with the proper `node_ip` and `cluster_id` to start an agent:
docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>
For example:
$ docker run -d swarm join --addr=172.31.40.100:2375 token://6856663cdefdec325839a4b7e1de38e8
3. Start the Swarm manager on any machine or your laptop.
The following command illustrates how to do this:
docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>
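For example, to expose the manager on host port 2375, matching the `docker info` check in the next step, and using the cluster ID from step 2 (an illustration; any free host port works):
$ docker run -d -p 2375:2375 swarm manage token://6856663cdefdec325839a4b7e1de38e8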
4. Once the manager is running, check your configuration by running `docker info` as follows:
docker -H tcp://<manager_ip:manager_port> info
For example, if you run the manager locally on your machine:
$ docker -H tcp://0.0.0.0:2375 info
Containers: 0
Nodes: 3
agent-2: 172.31.40.102:2375
└ Containers: 0
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 514.5 MiB
agent-1: 172.31.40.101:2375
└ Containers: 0
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 514.5 MiB
agent-0: 172.31.40.100:2375
└ Containers: 0
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 514.5 MiB
If you are running a test cluster without TLS enabled, you may get an error.
In that case, be sure to unset `DOCKER_TLS_VERIFY` with:
$ unset DOCKER_TLS_VERIFY
## Using the docker CLI
You can now use the regular Docker CLI to access your nodes:
docker -H tcp://<manager_ip:manager_port> info
docker -H tcp://<manager_ip:manager_port> run ...
docker -H tcp://<manager_ip:manager_port> ps
docker -H tcp://<manager_ip:manager_port> logs ...
## List nodes in your cluster
You can get a list of all your running nodes using the `swarm list` command:
docker run --rm swarm list token://<cluster_id>
<node_ip:2375>
For example:
$ docker run --rm swarm list token://6856663cdefdec325839a4b7e1de38e8
172.31.40.100:2375
172.31.40.101:2375
172.31.40.102:2375
## TLS
Swarm supports TLS authentication between the CLI and Swarm but also between
Swarm and the Docker nodes. _However_, all the Docker daemon certificates and client
certificates **must** be signed using the same CA-certificate.
In order to enable TLS for both client and server, the same command line options
as Docker can be specified:
swarm manage --tlsverify --tlscacert=<CACERT> --tlscert=<CERT> --tlskey=<KEY> [...]
Please refer to the [Docker documentation](https://docs.docker.com/articles/https/)
for more information on how to set up TLS authentication on Docker and generating
the certificates.
> **Note**: Swarm certificates must be generated with `extendedKeyUsage = clientAuth,serverAuth`.
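For example, one way to sign a node certificate with that extension, assuming you already have a CA (`ca.pem` and `ca-key.pem`) and a certificate signing request (`node.csr`) generated as described in the Docker TLS article:
echo 'extendedKeyUsage = clientAuth,serverAuth' > extfile.cnf
openssl x509 -req -days 365 -sha256 -in node.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out node-cert.pem -extfile extfile.cnf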

docs/install-w-machine.md Normal file (224 lines)

@ -0,0 +1,224 @@
---
page_title: Get Started with Docker Swarm
page_description: Get Started with Docker Swarm
page_keywords: docker, swarm, clustering, discovery, release, notes
---
# Get Started with Docker Swarm
This page introduces you to Docker Swarm by teaching you how to create a swarm
on your local machine using Docker Machine and VirtualBox. Once you have a feel
for creating and interacting with a swarm, you can use what you learned to test
deployment on a cloud provider or your own network.
Remember, Docker Swarm is currently in BETA, so things are likely to change. We
don't recommend you use it in production yet.
## Prerequisites
Make sure your local system has VirtualBox installed. If you are using Mac OSX
or Windows and have installed Docker, you should have VirtualBox already
installed.
Using the instructions appropriate to your system architecture, [install Docker
Machine](http://docs.docker.com/machine/#installation).
## Create a Docker Swarm
Docker Machine gets hosts ready to run Docker containers. Each node in your
Docker Swarm must have access to Docker to pull images and run them in
containers. Docker Machine manages all this provisioning for your swarm.
Before you create a swarm with `docker-machine`, you create a discovery token.
Docker Swarm uses tokens for discovery and agent registration. Using this token,
you can create a swarm that is secured.
1. List the machines on your system.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
dev * virtualbox Running tcp://192.168.99.100:2376
This example was run on a Mac OS X system with `boot2docker` installed. So, the
`dev` environment in the list represents the `boot2docker` machine.
2. Create a VirtualBox machine called `local` on your system.
$ docker-machine create -d virtualbox local
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0005] Starting VirtualBox VM...
INFO[0005] Waiting for VM to start...
INFO[0050] "local" has been created and is now the active machine.
INFO[0050] To point your Docker client at it, run this in your shell: eval "$(docker-machine env local)"
3. Load the `local` machine configuration into your shell.
$ eval "$(docker-machine env local)"
4. Generate a discovery token using the Docker Swarm image.
The command below runs the `swarm create` command in a container. If you
haven't got the `swarm:latest` image on your local machine, Docker pulls it
for you.
$ docker run swarm create
Unable to find image 'swarm:latest' locally
latest: Pulling from swarm
de939d6ed512: Pull complete
79195899a8a4: Pull complete
79ad4f2cc8e0: Pull complete
0db1696be81b: Pull complete
ae3b6728155e: Pull complete
57ec2f5f3e06: Pull complete
73504b2882a3: Already exists
swarm:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aaaf6c18b8be01a75099cc554b4fb372b8ec677ae81764dcdf85470279a61d6f
Status: Downloaded newer image for swarm:latest
fe0cc96a72cf04dba8c1c4aa79536ec3
The `swarm create` command returned the `fe0cc96a72cf04dba8c1c4aa79536ec3`
token.
5. Save the token in a safe place.
You'll use this token in the next step to create a Docker Swarm.
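For example, you could keep the token in an environment variable for the rest of your session (an optional convenience, not something Swarm requires):
$ export SWARM_TOKEN=fe0cc96a72cf04dba8c1c4aa79536ec3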
## Launch the Swarm manager
A single system in your network is known as your Docker Swarm manager. The swarm
manager orchestrates and schedules containers on the entire cluster. The swarm
manager rules a set of agents (also called nodes or Docker nodes).
Swarm agents are responsible for hosting containers. They are regular docker
daemons and you can communicate with them using the Docker remote API.
In this section, you create a swarm manager and two nodes.
1. Create a swarm manager under VirtualBox.
docker-machine create \
-d virtualbox \
--swarm \
--swarm-master \
--swarm-discovery token://<TOKEN-FROM-ABOVE> \
swarm-master
For example:
$ docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://fe0cc96a72cf04dba8c1c4aa79536ec3 swarm-master
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0005] Starting VirtualBox VM...
INFO[0005] Waiting for VM to start...
INFO[0060] "swarm-master" has been created and is now the active machine.
INFO[0060] To point your Docker client at it, run this in your shell: eval "$(docker-machine env swarm-master)"
2. Open your VirtualBox Manager. It should contain the `local` machine and the
new `swarm-master` machine.
![VirtualBox](../virtual-box.png)
3. Create a swarm node.
docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery token://<TOKEN-FROM-ABOVE> \
swarm-agent-00
For example:
$ docker-machine create -d virtualbox --swarm --swarm-discovery token://fe0cc96a72cf04dba8c1c4aa79536ec3 swarm-agent-00
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0005] Starting VirtualBox VM...
INFO[0006] Waiting for VM to start...
INFO[0066] "swarm-agent-00" has been created and is now the active machine.
INFO[0066] To point your Docker client at it, run this in your shell: eval "$(docker-machine env swarm-agent-00)"
4. Add another agent called `swarm-agent-01`.
$ docker-machine create -d virtualbox --swarm --swarm-discovery token://fe0cc96a72cf04dba8c1c4aa79536ec3 swarm-agent-01
You should see the two agents in your VirtualBox Manager.
## Direct your swarm
In this step, you connect to the swarm machine, display information related to
your swarm, and start an image on your swarm.
1. Point your Docker environment to the machine running the swarm master.
$ eval $(docker-machine env --swarm swarm-master)
2. Get information on your new swarm using the `docker` command.
$ docker info
Containers: 4
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 3
swarm-agent-00: 192.168.99.105:2376
└ Containers: 1
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 1.023 GiB
swarm-agent-01: 192.168.99.106:2376
└ Containers: 1
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 1.023 GiB
swarm-master: 192.168.99.104:2376
└ Containers: 2
└ Reserved CPUs: 0 / 8
You can see that each agent and the master all have port `2376` exposed. When you create a swarm, you can use any port you like and even different ports on different nodes. Each swarm node runs the swarm agent container.
The master is running both the swarm manager and a swarm agent container. This isn't recommended in a production environment because it can cause problems with agent failover. However, it is perfectly fine to do this in a learning environment like this one.
3. Check the images currently running on your swarm.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78be991b58d1 swarm:latest "/swarm join --addr 3 minutes ago Up 2 minutes 2375/tcp swarm-agent-01/swarm-agent
da5127e4f0f9 swarm:latest "/swarm join --addr 6 minutes ago Up 6 minutes 2375/tcp swarm-agent-00/swarm-agent
ef395f316c59 swarm:latest "/swarm join --addr 16 minutes ago Up 16 minutes 2375/tcp swarm-master/swarm-agent
45821ca5208e swarm:latest "/swarm manage --tls 16 minutes ago Up 16 minutes 2375/tcp, 192.168.99.104:3376->3376/tcp swarm-master/swarm-agent-master
4. Run the Docker `hello-world` test image on your swarm.
$ docker run hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(Assuming it was not already locally available.)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
For more examples and ideas, visit:
http://docs.docker.com/userguide/
5. Use the `docker ps` command to find out which node the container ran on.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
54a8690043dd hello-world:latest "/hello" 22 seconds ago Exited (0) 3 seconds ago swarm-agent-00/modest_goodall
78be991b58d1 swarm:latest "/swarm join --addr 5 minutes ago Up 4 minutes 2375/tcp swarm-agent-01/swarm-agent
da5127e4f0f9 swarm:latest "/swarm join --addr 8 minutes ago Up 8 minutes 2375/tcp swarm-agent-00/swarm-agent
ef395f316c59 swarm:latest "/swarm join --addr 18 minutes ago Up 18 minutes 2375/tcp swarm-master/swarm-agent
45821ca5208e swarm:latest "/swarm manage --tls 18 minutes ago Up 18 minutes 2375/tcp, 192.168.99.104:3376->3376/tcp swarm-master/swarm-agent-master


@ -1,5 +1,9 @@
- ['swarm/index.md', 'User Guide', 'Docker Swarm' ]
- ['swarm/install-w-machine.md', 'User Guide', '&nbsp;&nbsp;&nbsp;&nbsp;&blacksquare;&nbsp; Create a swarm with docker-machine' ]
- ['swarm/install-manual.md', 'User Guide', '&nbsp;&nbsp;&nbsp;&nbsp;&blacksquare;&nbsp; Create a swarm for development' ]
- ['swarm/release-notes.md', 'User Guide', '&nbsp;&nbsp;&nbsp;&nbsp;&blacksquare;&nbsp; Release Notes']
- ['swarm/discovery.md', 'Reference', 'Swarm discovery']
- ['swarm/api/swarm-api.md', 'Reference', 'Swarm API']
- ['swarm/scheduler/filter.md', 'Reference', 'Swarm filters']
- ['swarm/scheduler/strategy.md', 'Reference', 'Swarm strategies']

docs/release-notes.md Normal file (50 lines)

@ -0,0 +1,50 @@
---
page_title: Docker Swarm Release Notes
page_description: Swarm release notes
page_keywords: docker, swarm, clustering, discovery, release, notes
---
# Docker Swarm Release Notes
This page shows the cumulative release notes across all releases of Docker Swarm.
## Version 0.3.0 (June 16, 2015)
For complete information on this release, see the
[0.3.0 Milestone project page](https://github.com/docker/swarm/wiki/0.3.0-Milestone-Project-Page).
In addition to bug fixes and refinements, this release adds the following:
- **API Compatibility**: Swarm is now 100% compatible with the Docker API. Everything that runs on top of the Docker API should work seamlessly with a Docker Swarm.
- **Mesos**: Experimental support for Mesos as a cluster driver.
- **Multi-tenancy**: Multiple Swarm replicas can now run in the same cluster and will automatically fail over (using leader election).
This is our most stable release yet. We built an exhaustive testing infrastructure (integration, regression, stress). We've almost doubled our test code coverage, which resulted in many bug fixes and stability improvements.
Finally, we are garnering a growing community of Docker Swarm contributors. Please consider contributing by filing bugs or feature requests, commenting on existing issues, and, of course, contributing code.
## Version 0.2.0 (April 16, 2015)
For complete information on this release, see the
[0.2.0 Milestone project page](https://github.com/docker/swarm/wiki/0.2.0-Milestone-Project-Page).
In addition to bug fixes and refinements, this release adds the following:
* **Increased API coverage**: Swarm now supports more of the Docker API. For
details, see
[the API README](https://github.com/docker/swarm/blob/master/api/README.md).
* **Swarm Scheduler**: Spread strategy. Swarm has a new default strategy for
ranking nodes, called "spread". With this strategy, Swarm will optimize
for the node with the fewest running containers. For details, see the
[scheduler strategy README](https://github.com/docker/swarm/blob/master/scheduler/strategy/README.md)
* **Swarm Scheduler**: Soft affinities. Soft affinities allow Swarm more flexibility
in applying rules and filters for node selection. If the scheduler can't obey a
filtering rule, it will discard the filter and fall back to the assigned
strategy. For details, see the [scheduler filter README](https://github.com/docker/swarm/tree/master/scheduler/filter#soft-affinitiesconstraints).
* **Better integration with Compose**: Swarm will now schedule inter-dependent
containers on the same host. For details, see
[PR #972](https://github.com/docker/compose/pull/972).


@ -39,17 +39,17 @@ To tag a node with a specific set of key/value pairs, one must pass a list of
For instance, let's start `node-1` with the `storage=ssd` label:
```bash
$ docker -d --label storage=ssd
$ swarm join --advertise=192.168.0.42:2375 token://XXXXXXXXXXXXXXXXXX
```
Again, but this time `node-2` with `storage=disk`:
```bash
$ docker -d --label storage=disk
$ swarm join --advertise=192.168.0.43:2375 token://XXXXXXXXXXXXXXXXXX
```
Once the nodes are registered with the cluster, the master pulls their respective
tags and will take them into account when scheduling new containers.
@ -57,14 +57,14 @@ tags and will take them into account when scheduling new containers.
Let's start a MySQL server and make sure it gets good I/O performance by selecting
nodes with flash drives:
```bash
$ docker run -d -P -e constraint:storage==ssd --name db mysql
f8b693db9cd6
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago running 192.168.0.42:49178->3306/tcp node-1 db
```
In this case, the master selected all nodes that met the `storage=ssd` constraint
and applied resource management on top of them, as discussed earlier.
@ -73,15 +73,15 @@ and applied resource management on top of them, as discussed earlier.
Now we want to run an Nginx frontend in our cluster. However, we don't want
*flash* drives since we'll mostly write logs to disk.
```bash
$ docker run -d -P -e constraint:storage==disk --name frontend nginx
963841b138d8
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago running 192.168.0.43:49177->80/tcp node-2 frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute running 192.168.0.42:49178->3306/tcp node-1 db
```
The scheduler selected `node-2` since it was started with the `storage=disk` label.
@ -111,84 +111,84 @@ You can schedule a new container to run next to another based on a container
name or ID. For example, you can start a container called `frontend` running
`nginx`:
```bash
$ docker run -d -p 80:80 --name front nginx
87c4376856a8
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 front
```
Then, using the `-e affinity:container==frontend` flag, schedule a second
container to locate and run next to `frontend`.
```bash
$ docker run -d --name logger -e affinity:container==frontend logger
87c4376856a8
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 frontend
963841b138d8 logger:latest "logger" Less than a second ago running node-1 logger
```
Because of name affinity, the `logger` container ends up on `node-1` along with
the `frontend` container. Instead of the `frontend` name you could have supplied its
ID as follows:
```bash
docker run -d --name logger -e affinity:container==87c4376856a8
```
#### Image affinity
You can schedule a container to run only on nodes where a specific image is already pulled.
```bash
$ docker -H node-1:2375 pull redis
$ docker -H node-2:2375 pull mysql
$ docker -H node-3:2375 pull redis
```
Only `node-1` and `node-3` have the `redis` image. Specify a `-e
affinity:image==redis` filter to schedule several additional containers to run on
these nodes.
```bash
$ docker run -d --name redis1 -e affinity:image==redis redis
$ docker run -d --name redis2 -e affinity:image==redis redis
$ docker run -d --name redis3 -e affinity:image==redis redis
$ docker run -d --name redis4 -e affinity:image==redis redis
$ docker run -d --name redis5 -e affinity:image==redis redis
$ docker run -d --name redis6 -e affinity:image==redis redis
$ docker run -d --name redis7 -e affinity:image==redis redis
$ docker run -d --name redis8 -e affinity:image==redis redis
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 redis:latest "redis" Less than a second ago running node-1 redis1
1212386856a8 redis:latest "redis" Less than a second ago running node-1 redis2
87c4376639a8 redis:latest "redis" Less than a second ago running node-3 redis3
1234376856a8 redis:latest "redis" Less than a second ago running node-1 redis4
86c2136253a8 redis:latest "redis" Less than a second ago running node-3 redis5
87c3236856a8 redis:latest "redis" Less than a second ago running node-3 redis6
87c4376856a8 redis:latest "redis" Less than a second ago running node-3 redis7
963841b138d8 redis:latest "redis" Less than a second ago running node-1 redis8
```
As you can see here, the containers were only scheduled on nodes that had the
`redis` image. Instead of the image name, you could have specified the image ID.
```bash
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
redis latest 06a1f75304ba 2 days ago 111.1 MB
$ docker run -d --name redis1 -e affinity:image==06a1f75304ba redis
```
#### Label affinity
@ -196,27 +196,27 @@ $ docker run -d --name redis1 -e affinity:image==06a1f75304ba redis
Label affinity allows you to set up an attraction based on a container's label.
For example, you can run a `nginx` container with the `com.example.type=frontend` label.
```bash
$ docker run -d -p 80:80 --label com.example.type=frontend nginx
87c4376856a8
$ docker ps --filter "label=com.example.type=front"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 trusting_yonath
```
Then, use `-e affinity:com.example.type==frontend` to schedule a container next to
the container with the `com.example.type==frontend` label.
```bash
$ docker run -d -e affinity:com.example.type==frontend logger
87c4376856a8
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 trusting_yonath
963841b138d8 logger:latest "logger" Less than a second ago running node-1 happy_hawking
```
The `logger` container ends up on `node-1` because of its affinity with the `com.example.type==frontend` label.
@ -253,23 +253,23 @@ to the scheduler's strategy.
Soft affinities/constraints are expressed with a **~** in the
expression, for example:
```bash
$ docker run -d --name redis1 -e affinity:image==~redis redis
```
If none of the nodes in the cluster has the image `redis`, the scheduler will
discard the affinity and schedule according to the strategy.
```bash
$ docker run -d --name redis2 -e constraint:region==~us* redis
```
If none of the nodes in the cluster belongs to the `us` region, the scheduler will
discard the constraint and schedule according to the strategy.
```bash
$ docker run -d --name redis5 -e affinity:container!=~redis* redis
```
The affinity filter will be used to schedule a new `redis5` container to a
different node that doesn't have a container with the name that satisfies
@ -280,14 +280,14 @@ will discard the affinity rule and schedule according to the strategy.
With this filter, `ports` are considered unique resources.
```bash
$ docker run -d -p 80:80 nginx
87c4376856a8
$ docker ps
CONTAINER ID IMAGE COMMAND PORTS NODE NAMES
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp node-1 prickly_engelbart
```
The cluster selects a node where public port `80` is available and schedules
a container on it, in this case `node-1`.
@ -295,37 +295,37 @@ a container on it, in this case `node-1`.
Attempting to run another container with the public `80` port will result in
the cluster selecting a different node, since that port is already occupied on `node-1`:
```bash
$ docker run -d -p 80:80 nginx
963841b138d8
$ docker ps
CONTAINER ID IMAGE COMMAND PORTS NODE NAMES
963841b138d8 nginx:latest "nginx" 192.168.0.43:80->80/tcp node-2 dreamy_turing
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp node-1 prickly_engelbart
```
Again, repeating the same command will result in the selection of `node-3`, since
port `80` is neither available on `node-1` nor `node-2`:
```bash
$ docker run -d -p 80:80 nginx
963841b138d8
$ docker ps
CONTAINER ID IMAGE COMMAND PORTS NODE NAMES
f8b693db9cd6 nginx:latest "nginx" 192.168.0.44:80->80/tcp node-3 stoic_albattani
963841b138d8 nginx:latest "nginx" 192.168.0.43:80->80/tcp node-2 dreamy_turing
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp node-1 prickly_engelbart
```
Finally, Docker Swarm will refuse to run another container that requires port
`80` since not a single node in the cluster has it available:
```bash
$ docker run -d -p 80:80 nginx
2014/10/29 00:33:20 Error response from daemon: no resources available to schedule container
```
### Port filter in Host Mode
@ -338,44 +338,44 @@ mode to choose an available node for a new container.
For example, the following commands start `nginx` on a 3-node cluster.
```bash
$ docker run -d --expose=80 --net=host nginx
640297cb29a7
$ docker run -d --expose=80 --net=host nginx
7ecf562b1b3f
$ docker run -d --expose=80 --net=host nginx
09a92f582bc2
```
Port binding information will not be available through the `docker ps` command because all the containers are started in `host` mode.
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
640297cb29a7 nginx:1 "nginx -g 'daemon of Less than a second ago Up 30 seconds box3/furious_heisenberg
7ecf562b1b3f nginx:1 "nginx -g 'daemon of Less than a second ago Up 28 seconds box2/ecstatic_meitner
09a92f582bc2 nginx:1 "nginx -g 'daemon of 46 seconds ago Up 27 seconds box1/mad_goldstine
```
The swarm will refuse the operation when trying to instantiate the 4th container.
```bash
$ docker run -d --expose=80 --net=host nginx
FATA[0000] Error response from daemon: unable to find a node with port 80/tcp available in the Host mode
```
However, port binding to a different value, e.g. `81`, is still allowed.
```bash
$ docker run -d -p 81:80 nginx:latest
832f42819adc
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
832f42819adc nginx:1 "nginx -g 'daemon of Less than a second ago Up Less than a second 443/tcp, 192.168.136.136:81->80/tcp box3/thirsty_hawking
640297cb29a7 nginx:1 "nginx -g 'daemon of 8 seconds ago Up About a minute box3/furious_heisenberg
7ecf562b1b3f nginx:1 "nginx -g 'daemon of 13 seconds ago Up About a minute box2/ecstatic_meitner
09a92f582bc2 nginx:1 "nginx -g 'daemon of About a minute ago Up About a minute box1/mad_goldstine
```
## Dependency Filter


@ -52,26 +52,24 @@ have 2G of RAM, 2 CPUs, and neither node is running a container. Under this stra
When you run a new container, the system chooses `node-1` at random from the swarm
of two equally ranked nodes:
```bash
$ docker run -d -P -m 1G --name db mysql
f8b693db9cd6
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago running 192.168.0.42:49178->3306/tcp node-1 db
```
Now, we start another container and ask for 1G of RAM again.
```bash
$ docker run -d -P -m 1G --name frontend nginx
963841b138d8
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:49177->80/tcp node-2 frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute running 192.168.0.42:49178->3306/tcp node-1 db
```
The container `frontend` was started on `node-2` because it was the least
loaded node. If two nodes have the same amount of available RAM and
@ -83,26 +81,26 @@ In this example, let's say that both `node-1` and `node-2` have 2G of RAM and
neither is running a container. Again, the nodes are equal. When you run a new
container, the system chooses `node-1` at random from the swarm:
```bash
$ docker run -d -P -m 1G --name db mysql
f8b693db9cd6
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago running 192.168.0.42:49178->3306/tcp node-1 db
```
Now, you start another container, asking for 1G of RAM again.
```bash
$ docker run -d -P -m 1G --name frontend nginx
963841b138d8
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NODE NAMES
963841b138d8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:49177->80/tcp node-1 frontend
f8b693db9cd6 mysql:latest "mysqld" Up About a minute running 192.168.0.42:49178->3306/tcp node-1 db
```
The system starts the new `frontend` container on `node-1` because it was the
most packed node. This allows us to start a container requiring 2G