mirror of https://github.com/docker/docs.git
Add docs.docker.com metadata and reflow to 80-chars for GH diffs
Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>
parent 85fa8d4291
commit 14c4fb81cf

README.md | 55
@@ -1,16 +1,28 @@
---
page_title: Docker Swarm
page_description: Swarm: a Docker-native clustering system
page_keywords: docker, swarm, clustering
---

# Swarm: a Docker-native clustering system [](https://travis-ci.org/docker/swarm)

`swarm` is a simple tool which controls a cluster of Docker hosts and exposes it
as a single "virtual" host.

`swarm` uses the standard Docker API as its frontend, which means any tool which
speaks Docker can control swarm transparently: dokku, fig, krane, flynn, deis,
docker-ui, shipyard, drone.io, Jenkins... and of course the Docker client itself.

Like the other Docker projects, `swarm` follows the "batteries included but removable"
principle. It ships with a simple scheduling backend out of the box. The goal is
to provide a smooth out-of-box experience for simple use cases, and allow swapping
in more powerful backends, like `Mesos`, for large scale production deployments.

## Installation

### 1 - Download and install the current source code.
Ensure you have golang and a git client installed (e.g. `apt-get install golang git` on Ubuntu).
You may need to set `$GOPATH`, e.g. `mkdir ~/gocode; export GOPATH=~/gocode`.
@@ -20,13 +32,15 @@ This will install the `swarm` binary to your `$GOPATH` directory.
go get -u github.com/docker/swarm
```

### 2 - Nodes setup
The only requirement for Swarm nodes is to run a regular Docker daemon (version
`1.4.0` and later).

In order for Swarm to be able to communicate with its nodes, they must bind on a
network interface. This can be achieved by starting Docker with the `-H` flag
(e.g. `-H tcp://0.0.0.0:2375`).

# Example usage

```bash
# create a cluster
@@ -56,20 +70,23 @@ $ swarm list token://<cluster_id>
See [here](discovery) for more information about
other discovery services.

## Advanced Scheduling

See [filters](scheduler/filter) and [strategies](scheduler/strategy) to learn
more about advanced scheduling.

## TLS

Swarm supports TLS authentication between the CLI and Swarm but also between
Swarm and the Docker nodes.

In order to enable TLS, the same command line options as Docker can be specified:

`swarm manage --tlsverify --tlscacert=<CACERT> --tlscert=<CERT> --tlskey=<KEY> [...]`

Please refer to the [Docker documentation](https://docs.docker.com/articles/https/)
for more information on how to set up TLS authentication on Docker and generating
the certificates.

Note that Swarm certificates must be generated with `extendedKeyUsage = clientAuth,serverAuth`.
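
For illustration, one way to get both usages onto a certificate is an OpenSSL
extensions file like the sketch below. The file name and section name here are
our own choices, not anything Swarm mandates:

```
# swarm-ext.cnf -- hypothetical extensions file; pass it when signing, e.g.
# `openssl x509 -req ... -extfile swarm-ext.cnf -extensions swarm`
[ swarm ]
# both client and server auth, as Swarm certificates require
extendedKeyUsage = clientAuth,serverAuth
```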
@@ -91,5 +108,7 @@ We welcome pull requests and patches; come say hi on IRC, #docker-swarm on freen

## Copyright and license

Code and documentation copyright 2014-2015 Docker, Inc. Code released under the
Apache 2.0 license.

Docs released under Creative Commons.
@@ -1,11 +1,16 @@
---
page_title: Docker Swarm API
page_description: Swarm API
page_keywords: docker, swarm, clustering, api
---

# Docker Swarm API

The Docker Swarm API is compatible with the [Official Docker API](https://docs.docker.com/reference/api/docker_remote_api/):

Here are the main differences:

## Some endpoints are not (yet) implemented

```
GET "/images/get"
@@ -22,7 +27,7 @@ POST "/images/{name:.*}/tag"
DELETE "/images/{name:.*}"
```

## Some endpoints have more information

* `GET "/containers/{name:.*}/json"`: New field `Node` added:
@@ -1,11 +1,16 @@
---
page_title: Docker Swarm discovery
page_description: Swarm discovery
page_keywords: docker, swarm, clustering, discovery
---

# Discovery

`Docker Swarm` comes with multiple Discovery backends.

## Examples

### Using the hosted discovery service

```bash
# create a cluster
@@ -32,7 +37,7 @@ $ swarm list token://<cluster_id>
<node_ip:2375>
```

### Using a static file describing the cluster

```bash
# for each of your nodes, add a line to a file
@@ -59,7 +64,7 @@ $ swarm list file:///tmp/my_cluster
<node_ip3:2375>
```
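
Concretely, the cluster file that `file://` discovery reads is just one
`<ip>:<port>` entry per line. A small sketch that builds and sanity-checks such
a file (the addresses are placeholders, not real nodes):

```shell
# build a cluster description file, one Docker host per line
# (these addresses are made up for illustration)
cat > /tmp/my_cluster <<'EOF'
192.168.0.42:2375
192.168.0.43:2375
192.168.0.44:2375
EOF

# every line should look like host:port before handing the file
# to `swarm manage file:///tmp/my_cluster`
grep -c ':2375' /tmp/my_cluster
```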

### Using etcd

```bash
# on each of your nodes, start the swarm agent
@@ -82,7 +87,7 @@ $ swarm list etcd://<etcd_ip>/<path>
<node_ip:2375>
```

### Using consul

```bash
# on each of your nodes, start the swarm agent
@@ -105,7 +110,7 @@ $ swarm list consul://<consul_addr>/<path>
<node_ip:2375>
```

### Using zookeeper

```bash
# on each of your nodes, start the swarm agent
@@ -128,7 +133,7 @@ $ swarm list zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
<node_ip:2375>
```

### Using a static list of ips

```bash
# start the manager on any machine or your laptop
@@ -158,15 +163,17 @@ type DiscoveryService interface {
}
```

## Extra tips

### Initialize
Takes the `discovery` string without the scheme and a heartbeat (in seconds).

### Fetch
Returns the list of all the nodes from the discovery.

### Watch
Triggers an update (`Fetch`); it can happen either via a timer (like `token`)
or via backend-specific features (like `etcd`).

### Register
Adds a new node to the discovery.
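
Put together, a toy in-memory backend sketching these four operations could
look like this. `StaticDiscovery` and the exact method signatures are our own
assumptions based on the descriptions above, not Swarm's real interface:

```go
package main

import "fmt"

// StaticDiscovery is a hypothetical in-memory discovery backend.
type StaticDiscovery struct {
	uri       string
	heartbeat int
	nodes     []string
}

// Initialize takes the discovery string without the scheme and a heartbeat in seconds.
func (d *StaticDiscovery) Initialize(uri string, heartbeat int) error {
	d.uri = uri
	d.heartbeat = heartbeat
	return nil
}

// Fetch returns the list of all the nodes from the discovery.
func (d *StaticDiscovery) Fetch() ([]string, error) {
	return d.nodes, nil
}

// Watch triggers an update (Fetch); a real backend would fire this from a
// timer (like token) or from backend-specific events (like etcd).
func (d *StaticDiscovery) Watch(cb func([]string)) {
	nodes, _ := d.Fetch()
	cb(nodes)
}

// Register adds a new node to the discovery.
func (d *StaticDiscovery) Register(addr string) error {
	d.nodes = append(d.nodes, addr)
	return nil
}

func main() {
	d := &StaticDiscovery{}
	d.Initialize("192.168.0.42:2375", 25)
	d.Register("192.168.0.42:2375")
	d.Register("192.168.0.43:2375")
	d.Watch(func(nodes []string) { fmt.Println(len(nodes), "nodes:", nodes) })
}
```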
@@ -1,30 +1,41 @@
---
page_title: Docker Swarm filters
page_description: Swarm filters
page_keywords: docker, swarm, clustering, filters
---

# Filters

The `Docker Swarm` scheduler comes with multiple filters.

These filters are used to schedule containers on a subset of nodes.

`Docker Swarm` currently supports 4 filters:
* [Constraint](#constraint-filter)
* [Affinity](#affinity-filter)
* [Port](#port-filter)
* [Healthy](#healthy-filter)

You can choose the filter(s) you want to use with the `--filter` flag of `swarm manage`.

## Constraint Filter

Constraints are key/value pairs associated with particular nodes. You can see them
as *node tags*.

When creating a container, the user can select a subset of nodes that should be
considered for scheduling by specifying one or more sets of matching key/value pairs.

This approach has several practical use cases such as:
* Selecting specific host properties (such as `storage=ssd`, in order to schedule
containers on specific hardware).
* Tagging nodes based on their physical location (`region=us-east`, to force
containers to run in a given location).
* Logical cluster partitioning (`environment=production`, to split a cluster into
sub-clusters with different properties).

To tag a node with a specific set of key/value pairs, one must pass a list of
`--label` options at docker startup time.

For instance, let's start `node-1` with the `storage=ssd` label:
@@ -40,9 +51,11 @@ $ docker -d --label storage=disk
$ swarm join --addr=192.168.0.43:2375 token://XXXXXXXXXXXXXXXXXX
```

Once the nodes are registered with the cluster, the master pulls their respective
tags and will take them into account when scheduling new containers.

Let's start a MySQL server and make sure it gets good I/O performance by selecting
nodes with flash drives:

```
$ docker run -d -P -e constraint:storage==ssd --name db mysql
@@ -53,10 +66,12 @@ CONTAINER ID IMAGE COMMAND CREATED
f8b693db9cd6 mysql:latest "mysqld" Less than a second ago running 192.168.0.42:49178->3306/tcp node-1 db
```

In this case, the master selected all nodes that met the `storage=ssd` constraint
and applied resource management on top of them, as discussed earlier.
`node-1` was selected in this example since it's the only host with flash drives.

Now we want to run an `nginx` frontend in our cluster. However, we don't want
*flash* drives since we'll mostly write logs to disk.

```
$ docker run -d -P -e constraint:storage==disk --name frontend nginx
@@ -70,10 +85,11 @@ f8b693db9cd6 mysql:latest "mysqld" Up About a minute

The scheduler selected `node-2` since it was started with the `storage=disk` label.

## Standard Constraints

Additionally, a standard set of constraints can be used when scheduling containers
without specifying them when starting the node. Those tags are sourced from
`docker info` and currently include:

* storagedriver
* executiondriver
@@ -182,9 +198,12 @@ CONTAINER ID IMAGE COMMAND CREATED
87c4376856a8 nginx:latest "nginx" Less than a second ago running 192.168.0.42:80->80/tcp node-1 prickly_engelbart
```

The Docker cluster selects a node where the public `80` port is available and
schedules a container on it, in this case `node-1`.

Attempting to run another container with the public `80` port will result in
clustering selecting a different node, since that port is already occupied on `node-1`:

```
$ docker run -d -p 80:80 nginx
963841b138d8
@@ -195,7 +214,9 @@ CONTAINER ID IMAGE COMMAND CREATED
87c4376856a8 nginx:latest "nginx" Up About a minute running 192.168.0.42:80->80/tcp node-1 prickly_engelbart
```

Again, repeating the same command will result in the selection of `node-3`, since
port `80` is neither available on `node-1` nor `node-2`:

```
$ docker run -d -p 80:80 nginx
963841b138d8
@@ -207,7 +228,9 @@ f8b693db9cd6 nginx:latest "nginx" Less than a second a
87c4376856a8 nginx:latest "nginx" Up About a minute running 192.168.0.42:80->80/tcp node-1 prickly_engelbart
```

Finally, Docker Cluster will refuse to run another container that requires port
`80` since not a single node in the cluster has it available:

```
$ docker run -d -p 80:80 nginx
2014/10/29 00:33:20 Error response from daemon: no resources availalble to schedule container
```
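
The port-filter behavior above can be sketched as a tiny selection loop. This
is a toy model of our own, not Swarm's scheduler code: pick the first node
where the requested public port is still free, and fail once none remains.

```go
package main

import (
	"errors"
	"fmt"
)

// node is a toy cluster member tracking which public ports are taken.
type node struct {
	name  string
	taken map[int]bool
}

// schedule returns the first node where the requested port is free and marks
// it as used; it errors out when every node already occupies that port.
func schedule(nodes []*node, port int) (*node, error) {
	for _, n := range nodes {
		if !n.taken[port] {
			n.taken[port] = true
			return n, nil
		}
	}
	return nil, errors.New("no resources available to schedule container")
}

func main() {
	cluster := []*node{
		{name: "node-1", taken: map[int]bool{}},
		{name: "node-2", taken: map[int]bool{}},
		{name: "node-3", taken: map[int]bool{}},
	}
	// four requests for public port 80: three succeed, the fourth fails
	for i := 0; i < 4; i++ {
		if n, err := schedule(cluster, 80); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("scheduled on", n.name)
		}
	}
}
```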