page_title: Docker Swarm discovery
page_description: Swarm discovery
page_keywords: docker, swarm, clustering, discovery

Discovery

Docker Swarm comes with multiple discovery backends: the hosted token-based service, a static file, etcd, consul, zookeeper, and a static list of IPs.

Examples

Using the hosted discovery service

# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>

# on each of your nodes, start the swarm agent
#  <node_ip> doesn't have to be public (e.g. 192.168.0.X),
#  as long as the swarm manager can access it.
$ swarm join --addr=<node_ip:2375> token://<cluster_id>

# start the manager on any machine or your laptop
$ swarm manage -H tcp://<swarm_ip:swarm_port> token://<cluster_id>

# use the regular docker cli
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...

# list nodes in your cluster
$ swarm list token://<cluster_id>
<node_ip:2375>

Using a static file describing the cluster

# for each of your nodes, add a line to a file
#  <node_ip> doesn't have to be public (e.g. 192.168.0.X),
#  as long as the swarm manager can access it.
$ echo <node_ip1:2375> >> /tmp/my_cluster
$ echo <node_ip2:2375> >> /tmp/my_cluster
$ echo <node_ip3:2375> >> /tmp/my_cluster
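
# (optional sanity check) the file should now contain one <node_ip:port> per line
$ cat /tmp/my_cluster
<node_ip1:2375>
<node_ip2:2375>
<node_ip3:2375>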

# start the manager on any machine or your laptop
$ swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster

# use the regular docker cli
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...

# list nodes in your cluster
$ swarm list file:///tmp/my_cluster
<node_ip1:2375>
<node_ip2:2375>
<node_ip3:2375>

Using etcd

# on each of your nodes, start the swarm agent
#  <node_ip> doesn't have to be public (e.g. 192.168.0.X),
#  as long as the swarm manager can access it.
$ swarm join --addr=<node_ip:2375> etcd://<etcd_ip>/<path>

# start the manager on any machine or your laptop
$ swarm manage -H tcp://<swarm_ip:swarm_port> etcd://<etcd_ip>/<path>

# use the regular docker cli
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...

# list nodes in your cluster
$ swarm list etcd://<etcd_ip>/<path>
<node_ip:2375>

Using consul

# on each of your nodes, start the swarm agent
#  <node_ip> doesn't have to be public (e.g. 192.168.0.X),
#  as long as the swarm manager can access it.
$ swarm join --addr=<node_ip:2375> consul://<consul_addr>/<path>

# start the manager on any machine or your laptop
$ swarm manage -H tcp://<swarm_ip:swarm_port> consul://<consul_addr>/<path>

# use the regular docker cli
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...

# list nodes in your cluster
$ swarm list consul://<consul_addr>/<path>
<node_ip:2375>

Using zookeeper

# on each of your nodes, start the swarm agent
#  <node_ip> doesn't have to be public (e.g. 192.168.0.X),
#  as long as the swarm manager can access it.
$ swarm join --addr=<node_ip:2375> zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>

# start the manager on any machine or your laptop
$ swarm manage -H tcp://<swarm_ip:swarm_port> zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>

# use the regular docker cli
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...

# list nodes in your cluster
$ swarm list zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
<node_ip:2375>

Using a static list of IPs

# start the manager on any machine or your laptop
$ swarm manage -H <swarm_ip:swarm_port> nodes://<node_ip1:2375>,<node_ip2:2375>
# or
$ swarm manage -H <swarm_ip:swarm_port> <node_ip1:2375>,<node_ip2:2375>

# use the regular docker cli
$ docker -H <swarm_ip:swarm_port> info
$ docker -H <swarm_ip:swarm_port> run ...
$ docker -H <swarm_ip:swarm_port> ps
$ docker -H <swarm_ip:swarm_port> logs ...
...

Contributing a new discovery backend

Contributing a new discovery backend is easy: simply implement this interface:

type DiscoveryService interface {
     Initialize(string, int) error
     Fetch() ([]string, error)
     Watch(WatchCallback)
     Register(string) error
}

Initialize

The parameters are the discovery location without the scheme, and a heartbeat (in seconds).

Fetch

Returns the list of all the nodes from the discovery service.

Watch

Triggers an update (Fetch). This can happen either via a timer (like token) or by using backend-specific features (like etcd).

Register

Adds a new node to the discovery service.
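
The sketch below is only an illustration, not code from the Swarm repository: a toy backend that keeps a static, comma-separated list of <ip:port> addresses in memory (similar in spirit to the nodes:// backend above) and drives Watch with a timer, the way the token backend does. The package name, the StaticListService type, and the WatchCallback signature are assumptions made for this example.

package staticlist // hypothetical example package, not part of Swarm

import (
	"strings"
	"sync"
	"time"
)

// WatchCallback is assumed here to receive the current node list; the real
// signature is defined in the Swarm discovery package.
type WatchCallback func(nodes []string)

// StaticListService keeps a comma-separated list of <ip:port> addresses
// in memory.
type StaticListService struct {
	mu        sync.Mutex
	nodes     []string
	heartbeat time.Duration
}

// Initialize receives the discovery location without the scheme
// (e.g. "192.168.0.1:2375,192.168.0.2:2375") and the heartbeat in seconds.
func (s *StaticListService) Initialize(uri string, heartbeat int) error {
	s.heartbeat = time.Duration(heartbeat) * time.Second
	for _, addr := range strings.Split(uri, ",") {
		if addr != "" {
			s.nodes = append(s.nodes, addr)
		}
	}
	return nil
}

// Fetch returns the list of all the nodes currently known to the backend.
func (s *StaticListService) Fetch() ([]string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	nodes := make([]string, len(s.nodes))
	copy(nodes, s.nodes)
	return nodes, nil
}

// Watch triggers periodic updates via a timer, like the token backend does.
// Backends with native change notifications (like etcd) would hook those instead.
func (s *StaticListService) Watch(callback WatchCallback) {
	for range time.Tick(s.heartbeat) {
		if nodes, err := s.Fetch(); err == nil {
			callback(nodes)
		}
	}
}

// Register adds a new node (<ip:port>) to the discovery service.
func (s *StaticListService) Register(addr string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.nodes = append(s.nodes, addr)
	return nil
}

In use, a manager would call Initialize with the location and heartbeat, Fetch for the initial node list, and Watch to react to changes, while agents call Register to announce their address.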

Docker Swarm documentation index