diff --git a/_data/toc.yaml b/_data/toc.yaml index 1c3e0d25a7..e371de2310 100644 --- a/_data/toc.yaml +++ b/_data/toc.yaml @@ -206,8 +206,10 @@ guides: title: Docker container networking - path: /engine/userguide/networking/work-with-networks/ title: Work with network commands - - path: /engine/userguide/networking/get-started-overlay/ - title: Get started with multi-host networking + - path: /engine/swarm/networking/ + title: Manage swarm service networks + - path: /engine/userguide/networking/overlay-standalone-swarm/ + title: Multi-host networking with standalone swarms - path: /engine/userguide/networking/get-started-macvlan/ title: Get started with macvlan network driver - path: /engine/userguide/networking/overlay-security-model/ diff --git a/engine/swarm/networking.md b/engine/swarm/networking.md index 4a8922bfb0..6b69fc860d 100644 --- a/engine/swarm/networking.md +++ b/engine/swarm/networking.md @@ -1,5 +1,5 @@ --- -description: Use swarm mode networking features +description: Use swarm mode overlay networking features keywords: swarm, networking, ingress, overlay, service discovery title: Manage swarm service networks --- diff --git a/engine/userguide/networking/get-started-overlay.md b/engine/userguide/networking/get-started-overlay.md deleted file mode 100644 index f672070dbe..0000000000 --- a/engine/userguide/networking/get-started-overlay.md +++ /dev/null @@ -1,385 +0,0 @@ ---- -description: Use overlay for multi-host networking -keywords: Examples, Usage, network, docker, documentation, user guide, multihost, cluster -title: Get started with multi-host networking ---- - -This article uses an example to explain the basics of creating a multi-host -network. Docker supports multi-host networking out-of-the-box through the -`overlay` network driver. Unlike `bridge` networks, overlay networks require -some pre-existing conditions before you can create one: - -* [Docker running in swarm mode](#overlay-networking-and-swarm-mode) - -OR - -* [A cluster of hosts using a key value store](#overlay-networking-with-an-external-key-value-store) - -## Overlay networking and swarm mode - -Using Docker running in [swarm mode](../../swarm/swarm-mode.md), you can create an overlay network on a manager node. - -The swarm makes the overlay network available only to nodes in the swarm that -require it for a service. When you create a service that uses an overlay -network, the manager node automatically extends the overlay network to nodes -that run service tasks. - -To learn more about running Docker in swarm mode, refer to the -[Swarm mode overview](../../swarm/index.md). - -The example below shows how to create a network and use it for a service from a manager node in the swarm: - -```bash -# Create an overlay network `my-multi-host-network`. -$ docker network create \ - --driver overlay \ - --subnet 10.0.9.0/24 \ - my-multi-host-network - -400g6bwzd68jizzdx5pgyoe95 - -# Create an nginx service and extend the my-multi-host-network to nodes where -# the service's tasks run. -$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx - -716thylsndqma81j6kkkb5aus -``` - -Overlay networks for a swarm are not available to unmanaged containers. For more information refer to [Docker swarm mode overlay network security model](overlay-security-model.md). - -See also [Attach services to an overlay network](../../swarm/networking.md). 
- -## Overlay networking with an external key-value store - -To use a Docker engine with an external key-value store, you need the -following: - -* Access to the key-value store. Docker supports Consul, Etcd, and ZooKeeper -(Distributed store) key-value stores. -* A cluster of hosts with connectivity to the key-value store. -* A properly configured Engine `daemon` on each host in the cluster. -* Hosts within the cluster must have unique hostnames because the key-value -store uses the hostnames to identify cluster members. - -Though Docker Machine and Docker Swarm are not mandatory to experience Docker -multi-host networking with a key-value store, this example uses them to -illustrate how they are integrated. You'll use Machine to create both the -key-value store server and the host cluster. This example creates a swarm -cluster. - ->**Note**: Docker Engine running in swarm mode is not compatible with networking -with an external key-value store. - -### Prerequisites - -Before you begin, make sure you have a system on your network with the latest -version of Docker Engine and Docker Machine installed. The example also relies -on VirtualBox. If you installed on a Mac or Windows with Docker Toolbox, you -have all of these installed already. - -If you have not already done so, make sure you upgrade Docker Engine and Docker -Machine to the latest versions. - - -### Set up a key-value store - -An overlay network requires a key-value store. The key-value store holds -information about the network state which includes discovery, networks, -endpoints, IP addresses, and more. Docker supports Consul, Etcd, and ZooKeeper -key-value stores. This example uses Consul. - -1. Log into a system prepared with the prerequisite Docker Engine, Docker Machine, and VirtualBox software. - -2. Provision a VirtualBox machine called `mh-keystore`. - - $ docker-machine create -d virtualbox mh-keystore - - When you provision a new machine, the process adds Docker Engine to the - host. This means rather than installing Consul manually, you can create an - instance using the [consul image from Docker - Hub](https://hub.docker.com/r/progrium/consul/). You'll do this in the next step. - -3. Set your local environment to the `mh-keystore` machine. - - $ eval "$(docker-machine env mh-keystore)" - -4. Start a `progrium/consul` container running on the `mh-keystore` machine. - - $ docker run -d \ - -p "8500:8500" \ - -h "consul" \ - progrium/consul -server -bootstrap - - The client starts a `progrium/consul` image running in the - `mh-keystore` machine. The server is called `consul` and is - listening on port `8500`. - -5. Run the `docker ps` command to see the `consul` container. - - $ docker ps - - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 4d51392253b3 progrium/consul "/bin/start -server -" 25 minutes ago Up 25 minutes 53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp admiring_panini - -Keep your terminal open and move onto the next step. - - -### Create a swarm cluster - -In this step, you use `docker-machine` to provision the hosts for your network. -At this point, you won't actually create the network. You'll create several -machines in VirtualBox. One of the machines will act as the swarm master; -you'll create that first. As you create each host, you'll pass the Engine on -that machine options that are needed by the `overlay` network driver. - -1. Create a swarm master. 
- - $ docker-machine create \ - -d virtualbox \ - --swarm --swarm-master \ - --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \ - --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \ - --engine-opt="cluster-advertise=eth1:2376" \ - mhs-demo0 - - At creation time, you supply the Engine `daemon` with the `--cluster-store` option. This option tells the Engine the location of the key-value store for the `overlay` network. The bash expansion `$(docker-machine ip mh-keystore)` resolves to the IP address of the Consul server you created in "STEP 1". The `--cluster-advertise` option advertises the machine on the network. - -2. Create another host and add it to the swarm cluster. - - $ docker-machine create -d virtualbox \ - --swarm \ - --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \ - --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \ - --engine-opt="cluster-advertise=eth1:2376" \ - mhs-demo1 - -3. List your machines to confirm they are all up and running. - - $ docker-machine ls - - NAME ACTIVE DRIVER STATE URL SWARM - default - virtualbox Running tcp://192.168.99.100:2376 - mh-keystore * virtualbox Running tcp://192.168.99.103:2376 - mhs-demo0 - virtualbox Running tcp://192.168.99.104:2376 mhs-demo0 (master) - mhs-demo1 - virtualbox Running tcp://192.168.99.105:2376 mhs-demo0 - -At this point you have a set of hosts running on your network. You are ready to create a multi-host network for containers using these hosts. - -Leave your terminal open and go onto the next step. - -### Create the overlay Network - -To create an overlay network - -1. Set your docker environment to the swarm master. - - $ eval $(docker-machine env --swarm mhs-demo0) - - Using the `--swarm` flag with `docker-machine` restricts the `docker` commands to swarm information alone. - -2. Use the `docker info` command to view the swarm. - - $ docker info - - Containers: 3 - Images: 2 - Role: primary - Strategy: spread - Filters: affinity, health, constraint, port, dependency - Nodes: 2 - mhs-demo0: 192.168.99.104:2376 - └ Containers: 2 - └ Reserved CPUs: 0 / 1 - └ Reserved Memory: 0 B / 1.021 GiB - └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs - mhs-demo1: 192.168.99.105:2376 - └ Containers: 1 - └ Reserved CPUs: 0 / 1 - └ Reserved Memory: 0 B / 1.021 GiB - └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs - CPUs: 2 - Total Memory: 2.043 GiB - Name: 30438ece0915 - - From this information, you can see that you are running three containers and two images on the Master. - -3. Create your `overlay` network. - - $ docker network create --driver overlay --subnet=10.0.9.0/24 my-net - - You only need to create the network on a single host in the cluster. In this case, - you used the swarm master but you could easily have run it on any host in the cluster. - - > **Note**: It is highly recommended to use the `--subnet` option when creating - > a network. If the `--subnet` is not specified, the docker daemon automatically - > chooses and assigns a subnet for the network and it could overlap with another subnet - > in your infrastructure that is not managed by docker. 
Such overlaps can cause - > connectivity issues or failures when containers are connected to that network. - -4. Check that the network is running: - - $ docker network ls - - NETWORK ID NAME DRIVER - 412c2496d0eb mhs-demo1/host host - dd51763e6dd2 mhs-demo0/bridge bridge - 6b07d0be843f my-net overlay - b4234109bd9b mhs-demo0/none null - 1aeead6dd890 mhs-demo0/host host - d0bb78cbe7bd mhs-demo1/bridge bridge - 1c0eb8f69ebb mhs-demo1/none null - - As you are in the swarm master environment, you see all the networks on all - the swarm agents: the default networks on each engine and the single overlay - network. Notice that each `NETWORK ID` is unique. - -5. Switch to each swarm agent in turn and list the networks. - - $ eval $(docker-machine env mhs-demo0) - - $ docker network ls - - NETWORK ID NAME DRIVER - 6b07d0be843f my-net overlay - dd51763e6dd2 bridge bridge - b4234109bd9b none null - 1aeead6dd890 host host - - $ eval $(docker-machine env mhs-demo1) - - $ docker network ls - - NETWORK ID NAME DRIVER - d0bb78cbe7bd bridge bridge - 1c0eb8f69ebb none null - 412c2496d0eb host host - 6b07d0be843f my-net overlay - -Both agents report they have the `my-net` network with the `6b07d0be843f` ID. -You now have a multi-host container network running! - -### Run an application on your network - -Once your network is created, you can start a container on any of the hosts and it automatically is part of the network. - -1. Point your environment to the swarm master. - - $ eval $(docker-machine env --swarm mhs-demo0) - -2. Start an Nginx web server on the `mhs-demo0` instance. - - $ docker run -itd --name=web --network=my-net --env="constraint:node==mhs-demo0" nginx - -4. Run a BusyBox instance on the `mhs-demo1` instance and get the contents of the Nginx server's home page. - - $ docker run -it --rm --network=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web - - Unable to find image 'busybox:latest' locally - latest: Pulling from library/busybox - ab2b8a86ca6c: Pull complete - 2c5ac3f849df: Pull complete - Digest: sha256:5551dbdfc48d66734d0f01cafee0952cb6e8eeecd1e2492240bf2fd9640c2279 - Status: Downloaded newer image for busybox:latest - Connecting to web (10.0.0.2:80) - - - - Welcome to nginx! - - - -

-    <h1>Welcome to nginx!</h1>
-    <p>If you see this page, the nginx web server is successfully installed and
-    working. Further configuration is required.</p>
-
-    <p>For online documentation and support please refer to
-    <a href="http://nginx.org/">nginx.org</a>.<br/>
-    Commercial support is available at
-    <a href="http://nginx.com/">nginx.com</a>.</p>
-
-    <p><em>Thank you for using nginx.</em></p>
-    </body>
-    </html>
- - - - 100% |*******************************| 612 0:00:00 ETA - -### Check external connectivity - -As you've seen, Docker's built-in overlay network driver provides out-of-the-box -connectivity between the containers on multiple hosts within the same network. -Additionally, containers connected to the multi-host network are automatically -connected to the `docker_gwbridge` network. This network allows the containers -to have external connectivity outside of their cluster. - -1. Change your environment to the swarm agent. - - $ eval $(docker-machine env mhs-demo1) - -2. View the `docker_gwbridge` network, by listing the networks. - - $ docker network ls - - NETWORK ID NAME DRIVER - 6b07d0be843f my-net overlay - dd51763e6dd2 bridge bridge - b4234109bd9b none null - 1aeead6dd890 host host - e1dbd5dff8be docker_gwbridge bridge - -3. Repeat steps 1 and 2 on the swarm master. - - $ eval $(docker-machine env mhs-demo0) - - $ docker network ls - - NETWORK ID NAME DRIVER - 6b07d0be843f my-net overlay - d0bb78cbe7bd bridge bridge - 1c0eb8f69ebb none null - 412c2496d0eb host host - 97102a22e8d2 docker_gwbridge bridge - -2. Check the Nginx container's network interfaces. - - $ docker exec web ip addr - - 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default - link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 - inet 127.0.0.1/8 scope host lo - valid_lft forever preferred_lft forever - inet6 ::1/128 scope host - valid_lft forever preferred_lft forever - 22: eth0: mtu 1450 qdisc noqueue state UP group default - link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff - inet 10.0.9.3/24 scope global eth0 - valid_lft forever preferred_lft forever - inet6 fe80::42:aff:fe00:903/64 scope link - valid_lft forever preferred_lft forever - 24: eth1: mtu 1500 qdisc noqueue state UP group default - link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff - inet 172.18.0.2/16 scope global eth1 - valid_lft forever preferred_lft forever - inet6 fe80::42:acff:fe12:2/64 scope link - valid_lft forever preferred_lft forever - - The `eth0` interface represents the container interface that is connected to - the `my-net` overlay network. While the `eth1` interface represents the - container interface that is connected to the `docker_gwbridge` network. - -### Extra credit with Docker Compose - -Please refer to the Networking feature introduced in -[Compose V2 format](/compose/networking/) -and execute the multi-host networking scenario in the swarm cluster used above. - -## Related information - -* [Understand Docker container networks](index.md) -* [Work with network commands](work-with-networks.md) -* [Docker Swarm overview](/swarm) -* [Docker Machine overview](/machine) diff --git a/engine/userguide/networking/overlay-standalone-swarm.md b/engine/userguide/networking/overlay-standalone-swarm.md new file mode 100644 index 0000000000..aae2091daf --- /dev/null +++ b/engine/userguide/networking/overlay-standalone-swarm.md @@ -0,0 +1,421 @@ +--- +description: Use overlay for multi-host networking +keywords: Examples, Usage, network, docker, documentation, user guide, multihost, cluster +title: Multi-host networking with standalone swarms +redirect_from: +- engine/userguide/networking/get-started-overlay/ +--- + +## Standalone swarm only! + +This article only applies to users who need to use a standalone swarm with +Docker, as opposed to swarm mode. Standalone swarms (sometimes known as Swarm +Classic) rely on an external key-value store to store networking information. 
+Docker swarm mode stores networking information in the Raft logs on the swarm
+managers. If you use swarm mode, see
+[swarm mode networking](/engine/swarm/networking.md) instead of this article.
+
+Users of Universal Control Plane **do** use an external key-value store, but UCP
+manages it for you, and you do not need to manually intervene.
+If you run into issues with the key-value store, see
+[Troubleshoot the etcd key-value store](/datacenter/ucp/2.2/guides/admin/monitor-and-troubleshoot/troubleshoot-configurations.md#troubleshoot-the-etcd-key-value-store).
+
+If you are using standalone swarms and not using UCP, this article may be useful
+to you. It uses an example to explain the basics of creating a multi-host
+network using a standalone swarm and the `overlay` network driver. Unlike
+`bridge` networks, overlay networks require some pre-existing conditions before
+you can create one. These conditions are described below.
+
+## Overlay networking with an external key-value store
+
+To use Docker with an external key-value store, you need the following:
+
+* Access to the key-value store. Docker supports Consul, Etcd, and ZooKeeper
+  (Distributed store) key-value stores. This example uses Consul.
+* A cluster of hosts with connectivity to the key-value store.
+* Docker running on each host in the cluster.
+* Hosts within the cluster must have unique hostnames because the key-value
+  store uses the hostnames to identify cluster members.
+
+Docker Machine and Docker Swarm are not mandatory to experience Docker
+multi-host networking with a key-value store. However, this example uses them to
+illustrate how they are integrated. You'll use Machine to create both the
+key-value store server and the host cluster using a standalone swarm.
+
+>**Note**: These examples are not relevant to Docker running in swarm mode and
+> will not work in such a configuration.
+
+### Prerequisites
+
+Before you begin, make sure you have a system on your network with the latest
+version of Docker and Docker Machine installed. The example also relies on
+VirtualBox. If you installed Docker on Mac or Windows using Docker Toolbox, you
+have all of these installed already.
+
+If you have not already done so, make sure you upgrade Docker and Docker
+Machine to the latest versions.
+
+### Set up a key-value store
+
+An overlay network requires a key-value store. The key-value store holds
+information about the network state, including discovery, networks,
+endpoints, IP addresses, and more. Docker supports Consul, Etcd, and ZooKeeper
+key-value stores. This example uses Consul.
+
+1. Log into a system with Docker and Docker Machine installed.
+
+2. Provision a VirtualBox machine called `mh-keystore`.
+
+   ```bash
+   $ docker-machine create -d virtualbox mh-keystore
+   ```
+
+   When you provision a new machine, the process adds Docker to the
+   host. This means rather than installing Consul manually, you can create an
+   instance using the [consul image from Docker
+   Hub](https://hub.docker.com/_/consul/). You'll do this in the next step.
+
+3. Set your local environment to the `mh-keystore` machine.
+
+   ```bash
+   $ eval "$(docker-machine env mh-keystore)"
+   ```
+
+4. Start a `consul` container running on the `mh-keystore` Docker machine.
+
+   ```bash
+   $ docker run -d \
+       --name consul \
+       -p "8500:8500" \
+       -h "consul" \
+       consul agent -server -bootstrap
+   ```
+
+   This command starts a `consul` container in the `mh-keystore` Docker
+   machine. The server is named `consul` and listens on port `8500`.
+
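+   Optionally, you can confirm that Consul is up before moving on. The
+   following check is a sketch, not part of the original walkthrough: it
+   assumes `curl` is installed locally and uses Consul's standard
+   `/v1/status/leader` HTTP endpoint.
+
+   ```bash
+   # Query the Consul HTTP API through the port published above.
+   # A non-empty reply such as "172.17.0.2:8300" means a server is
+   # running and has elected itself leader.
+   $ curl "http://$(docker-machine ip mh-keystore):8500/v1/status/leader"
+   ```
+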
+5. Run the `docker ps` command to see the `consul` container.
+
+   ```bash
+   $ docker ps
+
+   CONTAINER ID   IMAGE    COMMAND                  CREATED         STATUS        PORTS                                                                       NAMES
+   a47492d6c4d1   consul   "docker-entrypoint..."   2 seconds ago   Up 1 second   8300-8302/tcp, 8301-8302/udp, 8600/tcp, 8600/udp, 0.0.0.0:8500->8500/tcp   consul
+   ```
+
+Keep your terminal open and move on to
+[Create a swarm cluster](#create-a-swarm-cluster).
+
+### Create a swarm cluster
+
+In this step, you use `docker-machine` to provision the hosts for your network.
+You won't actually create the network yet. You'll create several
+Docker machines in VirtualBox. One of the machines will act as the swarm manager,
+and you'll create that first. As you create each host, you'll pass the Docker
+daemon on that machine options that are needed by the `overlay` network driver.
+
+> **Note**: This creates a standalone swarm cluster, rather than using Docker
+> in swarm mode. These examples are not relevant to Docker running in swarm mode
+> and will not work in such a configuration.
+
+1. Create a swarm manager.
+
+   ```bash
+   $ docker-machine create \
+       -d virtualbox \
+       --swarm --swarm-master \
+       --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
+       --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
+       --engine-opt="cluster-advertise=eth1:2376" \
+       mhs-demo0
+   ```
+
+   At creation time, you supply the Docker daemon with the `--cluster-store`
+   option. This option tells the daemon the location of the key-value store for
+   the `overlay` network. The bash expansion `$(docker-machine ip mh-keystore)`
+   resolves to the IP address of the Consul server you created in
+   [Set up a key-value store](#set-up-a-key-value-store). The
+   `--cluster-advertise` option advertises the machine on the network.
+
+2. Create another host and add it to the swarm.
+
+   ```bash
+   $ docker-machine create -d virtualbox \
+       --swarm \
+       --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
+       --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
+       --engine-opt="cluster-advertise=eth1:2376" \
+       mhs-demo1
+   ```
+
+3. List your Docker machines to confirm they are all up and running.
+
+   ```bash
+   $ docker-machine ls
+
+   NAME          ACTIVE   DRIVER       STATE     URL                         SWARM
+   default       -        virtualbox   Running   tcp://192.168.99.100:2376
+   mh-keystore   *        virtualbox   Running   tcp://192.168.99.103:2376
+   mhs-demo0     -        virtualbox   Running   tcp://192.168.99.104:2376   mhs-demo0 (master)
+   mhs-demo1     -        virtualbox   Running   tcp://192.168.99.105:2376   mhs-demo0
+   ```
+
+At this point you have a set of hosts running on your network. You are ready to
+create a multi-host network for containers using these hosts.
+
+Leave your terminal open and go on to
+[Create the overlay network](#create-the-overlay-network).
+
+### Create the overlay network
+
+To create an overlay network:
+
+1. Set your Docker environment to the swarm manager.
+
+   ```bash
+   $ eval $(docker-machine env --swarm mhs-demo0)
+   ```
+
+   Using the `--swarm` flag with `docker-machine` restricts the `docker`
+   commands to swarm information alone.
+
+2. Use the `docker info` command to view the swarm.
+
+   ```bash
+   $ docker info
+
+   Containers: 3
+   Images: 2
+   Role: primary
+   Strategy: spread
+   Filters: affinity, health, constraint, port, dependency
+   Nodes: 2
+    mhs-demo0: 192.168.99.104:2376
+    └ Containers: 2
+    └ Reserved CPUs: 0 / 1
+    └ Reserved Memory: 0 B / 1.021 GiB
+    └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
+    mhs-demo1: 192.168.99.105:2376
+    └ Containers: 1
+    └ Reserved CPUs: 0 / 1
+    └ Reserved Memory: 0 B / 1.021 GiB
+    └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
+   CPUs: 2
+   Total Memory: 2.043 GiB
+   Name: 30438ece0915
+   ```
+
+   This output shows that you are running three containers and two images on
+   the manager.
+
+3. Create your `overlay` network.
+
+   ```bash
+   $ docker network create --driver overlay --subnet=10.0.9.0/24 my-net
+   ```
+
+   You only need to create the network on a single host in the cluster. In this
+   case, you used the swarm manager but you could easily have run it on any
+   host in the swarm.
+
+   > **Note**: It is highly recommended to use the `--subnet` option when creating
+   > a network. If the `--subnet` is not specified, Docker automatically
+   > chooses and assigns a subnet for the network and it could overlap with another subnet
+   > in your infrastructure that is not managed by Docker. Such overlaps can cause
+   > connectivity issues or failures when containers are connected to that network.
+
+4. Check that the network exists:
+
+   ```bash
+   $ docker network ls
+
+   NETWORK ID          NAME                DRIVER
+   412c2496d0eb        mhs-demo1/host      host
+   dd51763e6dd2        mhs-demo0/bridge    bridge
+   6b07d0be843f        my-net              overlay
+   b4234109bd9b        mhs-demo0/none      null
+   1aeead6dd890        mhs-demo0/host      host
+   d0bb78cbe7bd        mhs-demo1/bridge    bridge
+   1c0eb8f69ebb        mhs-demo1/none      null
+   ```
+
+   Since you are in the swarm manager environment, you see all the networks on all
+   the swarm participants: the default networks on each Docker daemon and the single overlay
+   network. Each network has a unique ID and a namespaced name.
+
+5. Switch to each swarm agent in turn and list the networks.
+
+   ```bash
+   $ eval $(docker-machine env mhs-demo0)
+
+   $ docker network ls
+
+   NETWORK ID          NAME                DRIVER
+   6b07d0be843f        my-net              overlay
+   dd51763e6dd2        bridge              bridge
+   b4234109bd9b        none                null
+   1aeead6dd890        host                host
+
+   $ eval $(docker-machine env mhs-demo1)
+
+   $ docker network ls
+
+   NETWORK ID          NAME                DRIVER
+   d0bb78cbe7bd        bridge              bridge
+   1c0eb8f69ebb        none                null
+   412c2496d0eb        host                host
+   6b07d0be843f        my-net              overlay
+   ```
+
+Both agents report they have the `my-net` network with the `6b07d0be843f` ID.
+You now have a multi-host container network running!
+
+### Run an application on your network
+
+Once your network is created, you can start a container on any of the hosts and
+it automatically is part of the network.
+
+1. Set your environment to the swarm manager.
+
+   ```bash
+   $ eval $(docker-machine env --swarm mhs-demo0)
+   ```
+
+2. Start an Nginx web server on the `mhs-demo0` instance.
+
+   ```bash
+   $ docker run -itd \
+       --name=web \
+       --network=my-net \
+       --env="constraint:node==mhs-demo0" \
+       nginx
+   ```
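+   Optionally, confirm where the swarm scheduled the container. The following
+   check is a sketch, not part of the original walkthrough: it relies on the
+   standalone swarm convention of prefixing container names with the name of
+   the node that runs them.
+
+   ```bash
+   # List the container's swarm-qualified name; the node prefix shows
+   # that the scheduling constraint placed it on mhs-demo0.
+   $ docker ps --filter name=web --format '{{.Names}}'
+
+   mhs-demo0/web
+   ```
+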
+ + ```none + $ docker run -it --rm \ + --network=my-net \ + --env="constraint:node==mhs-demo1" \ + busybox wget -O- http://web + + Unable to find image 'busybox:latest' locally + latest: Pulling from library/busybox + ab2b8a86ca6c: Pull complete + 2c5ac3f849df: Pull complete + Digest: sha256:5551dbdfc48d66734d0f01cafee0952cb6e8eeecd1e2492240bf2fd9640c2279 + Status: Downloaded newer image for busybox:latest + Connecting to web (10.0.0.2:80) + + + + Welcome to nginx! + + + +

+   <h1>Welcome to nginx!</h1>
+   <p>If you see this page, the nginx web server is successfully installed and
+   working. Further configuration is required.</p>
+
+   <p>For online documentation and support please refer to
+   <a href="http://nginx.org/">nginx.org</a>.<br/>
+   Commercial support is available at
+   <a href="http://nginx.com/">nginx.com</a>.</p>
+
+   <p><em>Thank you for using nginx.</em></p>
+   </body>
+   </html>
+   -                    100% |*******************************|   612   0:00:00 ETA
+   ```
+
+### Check external connectivity
+
+As you've seen, Docker's built-in overlay network driver provides out-of-the-box
+connectivity between the containers on multiple hosts within the same network.
+Additionally, containers connected to the multi-host network are automatically
+connected to the `docker_gwbridge` network. This network allows the containers
+to have external connectivity outside of their cluster.
+
+1. Change your environment to the swarm agent.
+
+   ```bash
+   $ eval $(docker-machine env mhs-demo1)
+   ```
+
+2. View the `docker_gwbridge` network by listing the networks.
+
+   ```bash
+   $ docker network ls
+
+   NETWORK ID          NAME                DRIVER
+   6b07d0be843f        my-net              overlay
+   dd51763e6dd2        bridge              bridge
+   b4234109bd9b        none                null
+   1aeead6dd890        host                host
+   e1dbd5dff8be        docker_gwbridge     bridge
+   ```
+
+3. Repeat steps 1 and 2 on the swarm manager.
+
+   ```bash
+   $ eval $(docker-machine env mhs-demo0)
+
+   $ docker network ls
+
+   NETWORK ID          NAME                DRIVER
+   6b07d0be843f        my-net              overlay
+   d0bb78cbe7bd        bridge              bridge
+   1c0eb8f69ebb        none                null
+   412c2496d0eb        host                host
+   97102a22e8d2        docker_gwbridge     bridge
+   ```
+
+4. Check the Nginx container's network interfaces.
+
+   ```bash
+   $ docker exec web ip addr
+
+   1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
+       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+       inet 127.0.0.1/8 scope host lo
+          valid_lft forever preferred_lft forever
+       inet6 ::1/128 scope host
+          valid_lft forever preferred_lft forever
+   22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
+       link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
+       inet 10.0.9.3/24 scope global eth0
+          valid_lft forever preferred_lft forever
+       inet6 fe80::42:aff:fe00:903/64 scope link
+          valid_lft forever preferred_lft forever
+   24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
+       link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
+       inet 172.18.0.2/16 scope global eth1
+          valid_lft forever preferred_lft forever
+       inet6 fe80::42:acff:fe12:2/64 scope link
+          valid_lft forever preferred_lft forever
+   ```
+
+   The `eth0` interface represents the container interface that is connected to
+   the `my-net` overlay network, while the `eth1` interface represents the
+   container interface that is connected to the `docker_gwbridge` network.
+
+### Extra credit with Docker Compose
+
+Refer to the Networking feature introduced in
+[Compose V2 format](/compose/networking/)
+and execute the multi-host networking scenario in the swarm cluster used above.
+A minimal starting point is sketched after the related links below.
+
+## Related information
+
+* [Understand Docker container networks](index.md)
+* [Work with network commands](work-with-networks.md)
+* [Docker Swarm overview](/swarm)
+* [Docker Machine overview](/machine)
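+
+The following is a minimal, hedged starting point for the Compose extra credit
+above. It is a sketch rather than part of the original walkthrough: it assumes
+`docker-compose` is installed locally and that the standalone swarm and Consul
+store from this page are still running. In the Compose V2 format, overriding
+the `default` network with the `overlay` driver attaches every service in the
+file to the multi-host network.
+
+```bash
+# Write a minimal Compose V2 file; the service name is illustrative.
+$ cat > docker-compose.yml <<'EOF'
+version: "2"
+services:
+  web:
+    image: nginx
+networks:
+  default:
+    driver: overlay
+EOF
+
+# Point Compose at the standalone swarm manager and start the service.
+$ eval $(docker-machine env --swarm mhs-demo0)
+$ docker-compose up -d
+```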