mirror of https://github.com/docker/docs.git
commit bdbff32ee3
README.md

Full documentation [is available here](http://docs.docker.com/swarm/).

## Development installation

You can download and install from source instead of using the Docker
image. Ensure you have golang, godep and the git client installed.

**For example, on Ubuntu you'd run:**

```bash
$ apt-get install golang git
$ go get github.com/tools/godep
```

You may need to set `$GOPATH`, e.g. `mkdir ~/gocode; export GOPATH=~/gocode`.

**For example, on Mac OS X you'd run:**

```bash
$ brew install go
$ export GOPATH=~/go
$ export PATH=$PATH:~/go/bin
$ go get github.com/tools/godep
```
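
The two `export` lines above only affect the current shell. To make them permanent you can append them to your shell profile (the profile path here is an assumption; adjust for your shell):

```bash
# persist the Go environment for future shells
$ echo 'export GOPATH=~/go' >> ~/.bash_profile
$ echo 'export PATH=$PATH:~/go/bin' >> ~/.bash_profile
```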

Then install the `swarm` binary:

```bash
$ mkdir -p $GOPATH/src/github.com/docker/
$ cd $GOPATH/src/github.com/docker/
$ git clone https://github.com/docker/swarm
$ cd swarm
$ godep go install .
```
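
As a quick sanity check, you can ask the freshly built binary for its version. `godep go install` drops it into `$GOPATH/bin`, so that directory needs to be on your `PATH`:

```bash
$ swarm --version
```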

From here, you can follow the instructions [in the main documentation](http://docs.docker.com/swarm/),

page_keywords: docker, swarm, clustering, discovery

# Discovery

Docker Swarm comes with multiple Discovery backends.

## Backends

### Hosted Discovery with Docker Hub

First we create a cluster.

```bash
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>
```

Then we create each node and join them to the cluster.

```bash
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (e.g. 192.168.0.X),
# as long as the swarm manager can access it.
$ swarm join --addr=<node_ip:2375> token://<cluster_id>
```

Finally, we start the Swarm manager. This can be on any machine or even
your laptop.

```bash
$ swarm manage -H tcp://<swarm_ip:swarm_port> token://<cluster_id>
```

You can then use regular Docker commands to interact with your swarm.

```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```

You can also list the nodes in your cluster.

```bash
swarm list token://<cluster_id>
<node_ip:2375>
```

### Using a static file describing the cluster

For each of your nodes, add a line to a file. The node IP address
doesn't need to be public as long as the Swarm manager can access it.

```bash
echo <node_ip1:2375> >> /tmp/my_cluster
echo <node_ip2:2375> >> /tmp/my_cluster
echo <node_ip3:2375> >> /tmp/my_cluster
```

Then start the Swarm manager on any machine.

```bash
swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster
```

And then use the regular Docker commands.

```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```

You can list the nodes in your cluster.

```bash
$ swarm list file:///tmp/my_cluster
<node_ip1:2375>
<node_ip2:2375>
<node_ip3:2375>
```

### Using etcd

On each of your nodes, start the Swarm agent. The node IP address
doesn't have to be public as long as the swarm manager can access it.

```bash
swarm join --addr=<node_ip:2375> etcd://<etcd_ip>/<path>
```

Start the manager on any machine or your laptop.

```bash
swarm manage -H tcp://<swarm_ip:swarm_port> etcd://<etcd_ip>/<path>
```

And then use the regular Docker commands.

```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```

You can list the nodes in your cluster.

```bash
swarm list etcd://<etcd_ip>/<path>
<node_ip:2375>
```

### Using consul

On each of your nodes, start the Swarm agent. The node IP address
doesn't need to be public as long as the Swarm manager can access it.

```bash
swarm join --addr=<node_ip:2375> consul://<consul_addr>/<path>
```

Start the manager on any machine or your laptop.

```bash
swarm manage -H tcp://<swarm_ip:swarm_port> consul://<consul_addr>/<path>
```

And then use the regular Docker commands.

```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```

You can list the nodes in your cluster.

```bash
swarm list consul://<consul_addr>/<path>
<node_ip:2375>
```

### Using zookeeper

On each of your nodes, start the Swarm agent. The node IP doesn't have
to be public as long as the swarm manager can access it.

```bash
swarm join --addr=<node_ip:2375> zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
```

Start the manager on any machine or your laptop.

```bash
swarm manage -H tcp://<swarm_ip:swarm_port> zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
```

You can then use the regular Docker commands.

```bash
docker -H tcp://<swarm_ip:swarm_port> info
docker -H tcp://<swarm_ip:swarm_port> run ...
docker -H tcp://<swarm_ip:swarm_port> ps
docker -H tcp://<swarm_ip:swarm_port> logs ...
...
```

You can list the nodes in the cluster.

```bash
swarm list zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
<node_ip:2375>
```

### Using a static list of IP addresses

Start the manager on any machine or your laptop.

```bash
swarm manage -H <swarm_ip:swarm_port> nodes://<node_ip1:2375>,<node_ip2:2375>
```

Or

```bash
swarm manage -H <swarm_ip:swarm_port> <node_ip1:2375>,<node_ip2:2375>
```

Then use the regular Docker commands.

```bash
docker -H <swarm_ip:swarm_port> info
docker -H <swarm_ip:swarm_port> run ...
docker -H <swarm_ip:swarm_port> ps
docker -H <swarm_ip:swarm_port> logs ...
...
```

### Range pattern for IP addresses

The `file` and `nodes` discoveries support a range pattern to specify IP
addresses, e.g., `10.0.0.[10:200]` will be a list of nodes starting from
`10.0.0.10` to `10.0.0.200`.

For example, for the `file` discovery method:

```bash
$ echo "10.0.0.[11:100]:2375" >> /tmp/my_cluster
$ echo "10.0.1.[15:20]:2375" >> /tmp/my_cluster
$ echo "192.168.1.2:[2:20]375" >> /tmp/my_cluster
```

Then start the manager.

```bash
swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster
```

And for the `nodes` discovery method:

```bash
swarm manage -H <swarm_ip:swarm_port> "nodes://10.0.0.[10:200]:2375,10.0.1.[2:250]:2375"
```
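
A quick way to see how a pattern expands is to point `swarm list` at a file containing a small range. Assuming, as in the listings above, that `list` prints one line per address the backend yields, you would see:

```bash
$ echo "10.0.0.[10:12]:2375" > /tmp/range_test
$ swarm list file:///tmp/range_test
10.0.0.10:2375
10.0.0.11:2375
10.0.0.12:2375
```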

## Contributing a new discovery backend

Contributing a new discovery backend is easy; simply implement this
interface:

```go
type DiscoveryService interface {
  ...
}
```

### Initialize

The parameters are the `discovery` location without the scheme, and a heartbeat (in seconds).

### Fetch

Returns the list of all the nodes from the discovery.

### Watch

Triggers an update (`Fetch`). This can happen either via a timer (like
`token`) or via backend-specific features (like `etcd`).

### Register

Add a new node to the discovery service.
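
For illustration only, a backend compiled into the binary becomes reachable through its own URL scheme, by analogy with the backends above (`mybackend` is a made-up name, not a shipped scheme):

```bash
# hypothetical custom backend, addressed through its own scheme
swarm join --addr=<node_ip:2375> mybackend://<backend_addr>/<path>
swarm manage -H tcp://<swarm_ip:swarm_port> mybackend://<backend_addr>/<path>
```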

## Docker Swarm documentation index

containers on the same host. For details, see
[PR #972](https://github.com/docker/compose/pull/972).

## Pre-requisites for running Swarm

You must install Docker 1.4.0 or later on all nodes. While each node's IP need not
be public, the Swarm manager must be able to access each node across the network.

To enable communication between the Swarm manager and the Swarm node agent on
each node, each node must listen to the same network interface (TCP port).
Follow the setup below to ensure you configure your nodes correctly for this
behavior.
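
On each node, you can confirm the installed version meets this requirement:

```bash
# prints both the client and daemon versions
$ docker version
```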

> **Note**: Swarm is currently in beta, so things are likely to change. We
> don't recommend you use it in production yet.

The easiest way to get started with Swarm is to use the
[official Docker image](https://registry.hub.docker.com/_/swarm/).

```bash
$ docker pull swarm
```

## Set up Swarm nodes

Each Swarm node will run a Swarm node agent. The agent registers the referenced
Docker daemon, monitors it, and updates the discovery backend with the node's status.

The following example uses the Docker Hub based `token` discovery service:

1. Create a cluster:

    ```bash
    docker run --rm swarm create
    6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>
    ```

    The `create` command returns a unique cluster ID (`cluster_id`). You'll need
    this ID when starting the Swarm agent on a node.

2. Log into **each node** and do the following.

    1. Start the Docker daemon with the `-H` flag. This ensures that the Docker remote API on *Swarm Agents* is available over TCP for the *Swarm Manager*.

        ```bash
        $ docker -H tcp://0.0.0.0:2375 -d
        ```

    2. Register the Swarm agents to the discovery service. The node's IP must be accessible from the Swarm Manager. Use the following command and replace with the proper `node_ip` and `cluster_id` to start an agent:

        ```bash
        docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>
        ```

        For example:

3. Start the Swarm manager on any machine or your laptop. The following command
   illustrates how to do this:

    ```bash
    docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>
    ```

4. Once the manager is running, check your configuration by running `docker info` as follows:

    ```bash
    docker -H tcp://<manager_ip:manager_port> info
    ```

    For example, if you run the manager locally on your machine:

    ```bash
    $ docker -H tcp://0.0.0.0:2375 info
    Containers: 0
    Nodes: 3
     agent-2: 172.31.40.102:2375
      └ Containers: 0
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 514.5 MiB
     agent-1: 172.31.40.101:2375
      └ Containers: 0
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 514.5 MiB
     agent-0: 172.31.40.100:2375
      └ Containers: 0
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 514.5 MiB
    ```

    If you are running a test cluster without TLS enabled, you may get an error. In that case, be sure to unset `DOCKER_TLS_VERIFY` with:

    ```bash
    $ unset DOCKER_TLS_VERIFY
    ```

## Using the docker CLI

You can now use the regular Docker CLI to access your nodes:

```bash
docker -H tcp://<manager_ip:manager_port> info
docker -H tcp://<manager_ip:manager_port> run ...
docker -H tcp://<manager_ip:manager_port> ps
docker -H tcp://<manager_ip:manager_port> logs ...
...
```

You can get a list of all your running nodes using the `swarm list` command:

```bash
docker run --rm swarm list token://<cluster_id>
<node_ip:2375>
```

|
@ -174,13 +183,15 @@ certificates **must** be signed using the same CA-certificate.
|
|||
In order to enable TLS for both client and server, the same command line options
|
||||
as Docker can be specified:
|
||||
|
||||
`swarm manage --tlsverify --tlscacert=<CACERT> --tlscert=<CERT> --tlskey=<KEY> [...]`
|
||||
```bash
|
||||
swarm manage --tlsverify --tlscacert=<CACERT> --tlscert=<CERT> --tlskey=<KEY> [...]
|
||||
```
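
On the client side, the matching options can be passed to `docker` itself when talking to the manager (a sketch; substitute your own certificate paths):

```bash
docker --tlsverify --tlscacert=<CACERT> --tlscert=<CERT> --tlskey=<KEY> -H tcp://<swarm_ip:swarm_port> info
```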

Please refer to the [Docker documentation](https://docs.docker.com/articles/https/)
for more information on how to set up TLS authentication on Docker and generating
the certificates.

> **Note**: Swarm certificates must be generated with `extendedKeyUsage = clientAuth,serverAuth`.

## Discovery services

more about advanced scheduling.

## Swarm API

The [Docker Swarm API](https://docs.docker.com/swarm/API/) is compatible with
the [Docker remote API](http://docs.docker.com/reference/api/docker_remote_api/),
and extends it with some new endpoints.

## Getting help

like-minded individuals, we have a number of open channels for communication.

* To contribute code or documentation changes: please submit a [pull request on GitHub](https://github.com/docker/swarm/pulls).

For more information and resources, please visit the [Getting Help project page](https://docs.docker.com/project/get-help/).

tags and will take them into account when scheduling new containers.
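
For instance, a node can advertise such a tag when its daemon starts. A sketch using the daemon `--label` flag of this Docker era (your key and value will differ):

```bash
# on the node with flash storage
$ docker -d --label storage=ssd
```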

Let's start a MySQL server and make sure it gets good I/O performance by selecting
nodes with flash drives:

```bash
$ docker run -d -P -e constraint:storage==ssd --name db mysql
f8b693db9cd6
```

In this case, the master selected all nodes that met the `storage=ssd` constraint
and applied resource management on top of them, as discussed earlier.
`node-1` was selected in this example since it's the only host running flash.

Now we want to run an Nginx frontend in our cluster. However, we don't want
*flash* drives since we'll mostly write logs to disk.

```bash
$ docker run -d -P -e constraint:storage==disk --name frontend nginx
963841b138d8
```

without specifying them when starting the node. Those tags are sourced from

You can schedule 2 containers and make container #2 run next to container #1.

```bash
$ docker run -d -p 80:80 --name front nginx
87c4376856a8
```

Using `-e affinity:container==front` will schedule a container next to the container `front`.
You can also use IDs instead of names: `-e affinity:container==87c4376856a8`.

```bash
$ docker run -d --name logger -e affinity:container==front logger
87c4376856a8
```

The `logger` container ends up on `node-1` because of its affinity with the container `front`.

#### Images

You can schedule a container only on nodes where a specific image is already pulled.

```bash
$ docker -H node-1:2375 pull redis
$ docker -H node-2:2375 pull mysql
$ docker -H node-3:2375 pull redis
```

Here only `node-1` and `node-3` have the `redis` image. Using `-e affinity:image==redis` we can
schedule containers only on these 2 nodes. You can also use the image ID instead of its name.

```bash
$ docker run -d --name redis1 -e affinity:image==redis redis
$ docker run -d --name redis2 -e affinity:image==redis redis
$ docker run -d --name redis3 -e affinity:image==redis redis
...
963841b138d8 redis:latest "redis" Less than a second ago running node-1 redis8
```

As you can see here, the containers were only scheduled on nodes with the `redis` image already pulled.

#### Expression Syntax

A `value` must be one of the following:
* A globbing pattern, i.e., `abc*`.
* A regular expression in the form of `/regexp/`. We support Go's regular expression syntax.

Currently Swarm supports the following affinity/constraint operators: `==` and `!=`.

For example,
* `constraint:node==node1` will match node `node1`.
* `constraint:node==/node\d/` will match all nodes with `node` + 1 digit.
* `constraint:node!=/node-[01]/` will match all nodes, except `node-0` and `node-1`.
* `constraint:node!=/foo\[bar\]/` will match all nodes, except `foo[bar]`. You can see the use of escape characters here.
* `constraint:node==/(?i)node1/` will match node `node1` case-insensitively, so `NoDe1` or `NODE1` will also match.

#### Soft Affinities/Constraints

affinities/constraints the scheduler will try to meet the rule. If it is not
met, the scheduler will discard the filter and schedule the container according
to the scheduler's strategy.

Soft affinities/constraints are expressed with a **~** in the
expression, for example:

```bash
$ docker run -d --name redis1 -e affinity:image==~redis redis
```

If none of the nodes in the cluster has the image `redis`, the scheduler will
discard the affinity and schedule according to the strategy.

```bash
$ docker run -d --name redis2 -e constraint:region==~us* redis
```

If none of the nodes in the cluster belongs to the `us` region, the scheduler will
discard the constraint and schedule according to the strategy.

```bash
$ docker run -d --name redis5 -e affinity:container!=~redis* redis
```

The affinity filter will be used to schedule a new `redis5` container to a
different node that doesn't have a container with a name that satisfies
`redis*`. If each node in the cluster has a `redis*` container, the scheduler
will discard the affinity rule and schedule according to the strategy.

## Port Filter

With this filter, `ports` are considered unique resources.

```bash
$ docker run -d -p 80:80 nginx
87c4376856a8
```

Docker cluster selects a node where the public `80` port is available and schedules
a container on it, in this case `node-1`.

Attempting to run another container with the public `80` port will result in
the cluster selecting a different node, since that port is already occupied on `node-1`:

```bash
$ docker run -d -p 80:80 nginx
963841b138d8
```

Again, repeating the same command will result in the selection of `node-3`, since
port `80` is neither available on `node-1` nor `node-2`:

```bash
$ docker run -d -p 80:80 nginx
963841b138d8
...
f8b693db9cd6 nginx:latest "nginx" 192.168.0.44:80->80/tcp
87c4376856a8 nginx:latest "nginx" 192.168.0.42:80->80/tcp node-1 prickly_engelbart
```

Finally, Docker Swarm will refuse to run another container that requires port
`80` since not a single node in the cluster has it available:

```bash
$ docker run -d -p 80:80 nginx
2014/10/29 00:33:20 Error response from daemon: no resources available to schedule container
```

### Port filter in Host Mode

Docker in the host mode, running with `--net=host`, differs from the
default `bridge` mode as the `host` mode does not perform any port
binding. So, it requires that you explicitly expose one or more port numbers
(using `EXPOSE` in the `Dockerfile` or `--expose` on the command line).
Swarm makes use of this information in conjunction with the `host`
mode to choose an available node for a new container.

For example, the following commands start `nginx` on a 3-node cluster.

```bash
$ docker run -d --expose=80 --net=host nginx
640297cb29a7
$ docker run -d --expose=80 --net=host nginx
...
$ docker run -d --expose=80 --net=host nginx
09a92f582bc2
```

Port binding information will not be available through the `docker ps` command because all the containers are started in the `host` mode.

```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
640297cb29a7 nginx:1 "nginx -g 'daemon of Less than a second ago Up 30 seconds box3/furious_heisenberg
...
09a92f582bc2 nginx:1 "nginx -g 'daemon of 46 seconds ago Up 27 seconds box1/mad_goldstine
```

The swarm will refuse the operation when trying to instantiate the 4th container.

```bash
$ docker run -d --expose=80 --net=host nginx
FATA[0000] Error response from daemon: unable to find a node with port 80/tcp available in the Host mode
```

However port binding to a different value, e.g. `81`, is still allowed.

```bash
$ docker run -d -p 81:80 nginx:latest
832f42819adc
$ docker ps
...
```

strategy uses no computation. It selects a node at random and is primarily
intended for debugging.

Your goal in choosing a strategy is to best optimize your swarm according to
your company's needs.

Under the `spread` strategy, Swarm optimizes for the node with the least number
of running containers. The `binpack` strategy causes Swarm to optimize for the
node which is most packed. The `random` strategy chooses
nodes at random regardless of their available CPU or RAM.

Using the `spread` strategy results in containers spread thinly over many
machines. The advantage of this strategy is that if a node goes down you only
lose a few containers.

The `binpack` strategy avoids fragmentation because it leaves room for bigger
containers on unused machines. The strategic advantage of `binpack` is that you
use fewer machines as Swarm tries to pack as many containers as it can on a
node.

If you do not specify a `--strategy` Swarm uses `spread` by default.
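
The strategy is chosen when the manager starts. A sketch using the `--strategy` flag named above (assuming it accepts the strategy names used on this page):

```bash
$ swarm manage --strategy binpack -H tcp://<swarm_ip:swarm_port> token://<cluster_id>
```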

## Spread strategy example

In this example, your swarm is using the `spread` strategy which optimizes for
nodes that have the fewest containers. In this swarm, both `node-1` and `node-2`
have 2G of RAM, 2 CPUs, and neither node is running a container. Under this strategy
`node-1` and `node-2` have the same ranking.

When you run a new container, the system chooses `node-1` at random from the swarm
of two equally ranked nodes:

```bash
...
f8b693db9cd6 mysql:latest "mysqld" Up About a minute
```

The container `frontend` was started on `node-2` because it was the least
loaded node. If two nodes have the same amount of available RAM and CPUs,
the `spread` strategy prefers the node with the least containers running.

## BinPack strategy example

In this example, let's say that both `node-1` and `node-2` have 2G of RAM and
neither is running a container. Again, the nodes are equal. When you run a new
container, the system chooses `node-1` at random from the swarm:

```bash
$ docker run -d -P -m 1G --name db mysql
f8b693db9cd6
...
```

of RAM on `node-2`.

If two nodes have the same amount of available RAM and CPUs, the `binpack`
strategy prefers the node with the most containers running.

## Docker Swarm documentation index

- [User guide](https://docs.docker.com/swarm/)
- [Discovery options](https://docs.docker.com/swarm/discovery/)
- [Scheduler filters](https://docs.docker.com/swarm/scheduler/filter/)
- [Swarm API](https://docs.docker.com/swarm/API/)