Restructure and rewrite network content

@@ -243,39 +243,45 @@ guides:
         title: Overview
       - path: /develop/sdk/examples/
         title: SDK and API examples

   - sectiontitle: Configure networking
     section:
-    - path: /engine/userguide/networking/
+    - path: /network/
-      title: Docker container networking
+      title: Networking overview
-    - path: /engine/userguide/networking/work-with-networks/
+    - path: /network/bridge/
-      title: Work with network commands
+      title: Use bridge networks
-    - path: /engine/swarm/networking/
+    - path: /network/overlay/
-      title: Manage swarm service networks
+      title: Use overlay networks
-    - path: /engine/userguide/networking/overlay-standalone-swarm/
+    - path: /network/host/
-      title: Multi-host networking with standalone swarms
+      title: Use host networking
-    - path: /engine/userguide/networking/get-started-macvlan/
+    - path: /network/macvlan/
-      title: Get started with macvlan network driver
+      title: Use Macvlan networks
-    - path: /engine/userguide/networking/overlay-security-model/
+    - path: /network/none/
-      title: Swarm mode overlay network security model
+      title: Disable networking for a container
-    - path: /engine/userguide/networking/configure-dns/
+    - sectiontitle: Networking tutorials
-      title: Configure container DNS in user-defined networks
+      section:
-    - sectiontitle: Default bridge network
+      - path: /network/network-tutorial-standalone/
-      section:
+        title: Bridge network tutorial
-      - path: /engine/userguide/networking/default_network/dockerlinks/
+      - path: /network/network-tutorial-host/
-        title: Legacy container links
+        title: Host networking tutorial
-      - path: /engine/userguide/networking/default_network/binding/
+      - path: /network/network-tutorial-overlay/
-        title: Bind container ports to the host
+        title: Overlay networking tutorial
-      - path: /engine/userguide/networking/default_network/build-bridges/
+      - path: /network/network-tutorial-macvlan/
-        title: Build your own bridge
+        title: Macvlan network tutorial
-      - path: /engine/userguide/networking/default_network/configure-dns/
+    - sectiontitle: Configure the daemon and containers
-        title: Configure container DNS
+      section:
-      - path: /engine/userguide/networking/default_network/custom-docker0/
+      - path: /config/daemon/ipv6/
-        title: Customize the docker0 bridge
+        title: Configure the daemon for IPv6
-      - path: /engine/userguide/networking/default_network/container-communication/
+      - path: /network/iptables/
-        title: Understand container communication
+        title: Docker and iptables
-      - path: /engine/userguide/networking/default_network/ipv6/
+      - path: /config/containers/container-networking/
-        title: IPv6 with Docker
+        title: Container networking
+    - sectiontitle: Legacy networking content
+      section:
+      - path: /network/links/
+        title: (Legacy) Container links
+      - path: /network/overlay-standalone.swarm/
+        title: Overlay networks for Swarm Classic

   - sectiontitle: Manage application data
     section:
     - path: /storage/
@@ -0,0 +1,65 @@
---
title: Container networking
description: How networking works from the container's point of view
keywords: networking, container, standalone
redirect_from:
- /engine/userguide/networking/configure-dns/
- /engine/userguide/networking/default_network/binding/
---

The type of network a container uses, whether it is a [bridge](bridge.md), an
[overlay](overlay.md), a [macvlan network](macvlan.md), or a custom network
plugin, is transparent from within the container. From the container's point of
view, it has a network interface with an IP address, a gateway, a routing table,
DNS services, and other networking details (assuming the container is not using
the `none` network driver). This topic is about networking concerns from the
point of view of the container.

## Published ports

By default, when you create a container, it does not publish any of its ports
to the outside world. To make a port available to services outside of Docker, or
to Docker containers which are not connected to the container's network, use the
`--publish` or `-p` flag. This creates a firewall rule which maps a container
port to a port on the Docker host. Here are some examples.

| Flag value                      | Description                                                                                                                                     |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| `-p 8080:80`                    | Map TCP port 80 in the container to port 8080 on the Docker host.                                                                                |
| `-p 8080:80/udp`                | Map UDP port 80 in the container to port 8080 on the Docker host.                                                                                |
| `-p 8080:80/tcp -p 8080:80/udp` | Map TCP port 80 in the container to TCP port 8080 on the Docker host, and map UDP port 80 in the container to UDP port 8080 on the Docker host. |
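For example, a minimal sketch (assuming the official `nginx` image) that publishes container port 80 on host port 8080:

```bash
$ docker run -d --name web -p 8080:80 nginx

# The Docker host now listens on port 8080 and forwards to port 80 in the container:
$ curl http://localhost:8080/
```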
## IP address and hostname

By default, the container is assigned an IP address for every Docker network it
connects to. The IP address is assigned from the pool assigned to
the network, so the Docker daemon effectively acts as a DHCP server for each
container. Each network also has a default subnet mask and gateway.

When the container starts, it can only be connected to a single network, using
`--network`. However, you can connect a running container to multiple
networks using `docker network connect`. When you start a container using the
`--network` flag, you can specify the IP address assigned to the container on
that network using the `--ip` or `--ip6` flags.

When you connect an existing container to a different network using
`docker network connect`, you can use the `--ip` or `--ip6` flags on that
command to specify the container's IP address on the additional network.

In the same way, a container's hostname defaults to be the container's name in
Docker. You can override the hostname using `--hostname`. When connecting to an
existing network using `docker network connect`, you can use the `--alias`
flag to specify an additional network alias for the container on that network.
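A short sketch of these flags working together; the network names, subnet, and image here are illustrative:

```bash
$ docker network create --subnet 172.25.0.0/16 my-net

# Start a container with a fixed IP address and an explicit hostname:
$ docker run -d --name web --network my-net --ip 172.25.3.3 \
    --hostname web.internal nginx

# Attach the running container to a second (pre-existing) network with an alias:
$ docker network connect --alias frontend my-other-net web
```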
## DNS services

By default, a container inherits the DNS settings of the Docker daemon,
including the `/etc/hosts` and `/etc/resolv.conf`. You can override these
settings on a per-container basis.

| Flag           | Description                                                                                                                                                                                                                                                         |
|----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--dns`        | The IP address of a DNS server. To specify multiple DNS servers, use multiple `--dns` flags. If the container cannot reach any of the IP addresses you specify, Google's public DNS server `8.8.8.8` is added, so that your container can resolve internet domains. |
| `--dns-search` | A DNS search domain to search non-fully-qualified hostnames. To specify multiple DNS search prefixes, use multiple `--dns-search` flags.                                                                                                                             |
| `--dns-opt`    | A key-value pair representing a DNS option and its value. See your operating system's documentation for `resolv.conf` for valid options.                                                                                                                            |
| `--hostname`   | The hostname a container uses for itself. Defaults to the container's name if not specified.                                                                                                                                                                         |
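A hedged example combining these flags; the DNS server address and search domain are placeholders:

```bash
$ docker run --rm \
    --dns 10.0.0.2 \
    --dns-search example.com \
    --dns-opt ndots:2 \
    alpine cat /etc/resolv.conf
```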
@@ -0,0 +1,38 @@
---
title: Enable IPv6 support
description: How to enable IPv6 support in the Docker daemon
keywords: daemon, network, networking, ipv6
redirect_from:
- /engine/userguide/networking/default_network/ipv6/
---

Before you can use IPv6 in Docker containers or swarm services, you need to
enable IPv6 support in the Docker daemon. Afterward, you can choose to use
either IPv4 or IPv6 (or both) with any container, service, or network.

> **Note**: IPv6 networking is only supported on Docker daemons running on Linux
> hosts.

1.  Edit `/etc/docker/daemon.json` and set the `ipv6` key to `true`.

    ```json
    {
      "ipv6": true
    }
    ```

    Save the file.

2.  Reload the Docker configuration file.

    ```bash
    $ systemctl reload docker
    ```

You can now create networks with the `--ipv6` flag and assign containers IPv6
addresses using the `--ip6` flag.
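For example, a sketch of creating an IPv6-enabled network and checking a container's address on it (the subnet is a documentation prefix, and the network name is arbitrary):

```bash
$ docker network create --ipv6 --subnet 2001:db8:1::/64 ip6net

$ docker run --rm --network ip6net alpine ip -6 addr show dev eth0
```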
## Next steps

- [Networking overview](/network/index.md)
- [Container networking](/config/containers/container-networking.md)
@@ -1,135 +0,0 @@
---
description: Learn how to configure DNS in user-defined networks
keywords: docker, DNS, network
title: Embedded DNS server in user-defined networks
---

The information in this section covers the embedded DNS server operation for
containers in user-defined networks. DNS lookup for containers connected to
user-defined networks works differently compared to containers connected
to the `default bridge` network.

> **Note**: To maintain backward compatibility, the DNS configuration
> in the `default bridge` network is retained with no behavioral change.
> Refer to [DNS in the default bridge network](default_network/configure-dns.md)
> for more information on DNS configuration in the `default bridge` network.

As of Docker 1.10, the Docker daemon implements an embedded DNS server which
provides built-in service discovery for any container created with a valid
`name` or `net-alias`, or aliased by `link`. The exact details of how Docker
manages the DNS configurations inside the container can change from one Docker
version to the next, so you should not assume anything about how files such as
`/etc/hosts` and `/etc/resolv.conf` are managed inside the containers. Leave
those files alone and use the following Docker options instead.

The following container options affect container domain name services.

<table>
  <tr>
    <td>
      <p>
        <code>--name=CONTAINER-NAME</code>
      </p>
    </td>
    <td>
      <p>
        The container name configured using <code>--name</code> is used to discover a container within
        a user-defined Docker network. The embedded DNS server maintains the mapping between
        the container name and its IP address (on the network the container is connected to).
      </p>
    </td>
  </tr>
  <tr>
    <td>
      <p>
        <code>--network-alias=ALIAS</code>
      </p>
    </td>
    <td>
      <p>
        In addition to <code>--name</code> as described above, a container is discovered by one or more
        of its configured <code>--network-alias</code> values (or <code>--alias</code> in the <code>docker network connect</code> command)
        within the user-defined network. The embedded DNS server maintains the mapping between
        all of the container's aliases and its IP address on a specific user-defined network.
        A container can have different aliases in different networks by using the <code>--alias</code>
        option in the <code>docker network connect</code> command.
      </p>
    </td>
  </tr>
  <tr>
    <td>
      <p>
        <code>--link=CONTAINER_NAME:ALIAS</code>
      </p>
    </td>
    <td>
      <p>
        Using this option as you <code>run</code> a container gives the embedded DNS
        an extra entry named <code>ALIAS</code> that points to the IP address
        of the container identified by <code>CONTAINER_NAME</code>. When using <code>--link</code>,
        the embedded DNS guarantees that the lookup succeeds only in the
        container where the <code>--link</code> is used. This lets processes inside the new container
        connect to the linked container without having to know its name or IP address.
      </p>
    </td>
  </tr>
  <tr>
    <td><p>
      <code>--dns=[IP_ADDRESS...]</code>
    </p></td>
    <td><p>
      The IP addresses passed via the <code>--dns</code> option are used by the embedded DNS
      server to forward a DNS query if the embedded DNS server can't resolve a name
      resolution request from the containers.
      These <code>--dns</code> IP addresses are managed by the embedded DNS server and
      are not updated in the container's <code>/etc/resolv.conf</code> file.
    </p></td>
  </tr>
  <tr>
    <td><p>
      <code>--dns-search=DOMAIN...</code>
    </p></td>
    <td><p>
      Sets the domain names that are searched when a bare unqualified hostname is
      used inside of the container. These <code>--dns-search</code> options are managed by the
      embedded DNS server and are not updated in the container's <code>/etc/resolv.conf</code> file.
      When a container process attempts to access <code>host</code> and the search
      domain <code>example.com</code> is set, for instance, the DNS logic looks up
      both <code>host</code> and <code>host.example.com</code>.
    </p></td>
  </tr>
  <tr>
    <td><p>
      <code>--dns-opt=OPTION...</code>
    </p></td>
    <td><p>
      Sets the options used by DNS resolvers. These options are managed by the embedded
      DNS server and are not updated in the container's <code>/etc/resolv.conf</code> file.
    </p>
    <p>
      See documentation for <code>resolv.conf</code> for a list of valid options.
    </p></td>
  </tr>
</table>
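A quick sketch of name-based discovery on a user-defined network; the container, alias, and network names are illustrative:

```bash
$ docker network create app-net

$ docker run -d --name web --network app-net --network-alias httpd nginx

# Another container on the same network can resolve both the name and the alias:
$ docker run --rm --network app-net alpine nslookup web
$ docker run --rm --network app-net alpine nslookup httpd
```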
In the absence of the `--dns=IP_ADDRESS...`, `--dns-search=DOMAIN...`, or
`--dns-opt=OPTION...` options, Docker uses the `/etc/resolv.conf` of the
host machine (where the `docker` daemon runs). While doing so, the daemon
filters out all localhost IP address `nameserver` entries from the host's
original file.

Filtering is necessary because all localhost addresses on the host are
unreachable from the container's network. After this filtering, if there are
no more `nameserver` entries left in the container's `/etc/resolv.conf` file,
the daemon adds public Google DNS nameservers (8.8.8.8 and 8.8.4.4) to the
container's DNS configuration. If IPv6 is enabled on the daemon, the public
IPv6 Google DNS nameservers are also added (2001:4860:4860::8888 and
2001:4860:4860::8844).

> **Note**: If you need access to a host's localhost resolver, you must modify
> your DNS service on the host to listen on a non-localhost address that is
> reachable from within the container.

> **Note**: The DNS server is always at `127.0.0.11`.
@@ -1,99 +0,0 @@
---
description: expose, port, docker, bind publish
keywords: Examples, Usage, network, docker, documentation, user guide, multihost, cluster
title: Bind container ports to the host
---

The information in this section explains binding container ports within the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.

> **Note**: The [Docker networks feature](../index.md) allows you to
> create user-defined networks in addition to the default bridge network.

By default Docker containers can make connections to the outside world, but the
outside world cannot connect to containers. Each outgoing connection
appears to originate from one of the host machine's own IP addresses, thanks to an
`iptables` masquerading rule on the host machine that the Docker server creates
when it starts:

```
$ sudo iptables -t nat -L -n

...
Chain POSTROUTING (policy ACCEPT)
target      prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
...
```

The Docker server creates a masquerade rule that lets containers connect to IP
addresses in the outside world.

If you want containers to accept incoming connections, you need to provide
special options when invoking `docker run`. There are two approaches.

First, you can supply `-P` or `--publish-all=true|false` to `docker run`. This
is a blanket operation that identifies every port with an `EXPOSE` line in the
image's `Dockerfile` or an `--expose <port>` commandline flag, and maps each one to a host
port somewhere within an _ephemeral port range_. You then need to use the
`docker port` command to inspect the created mapping. The _ephemeral port range_ is
configured by the `/proc/sys/net/ipv4/ip_local_port_range` kernel parameter,
and typically ranges from 32768 to 61000.
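A brief sketch of `-P` in action, assuming the official `nginx` image (whose `Dockerfile` exposes port 80); the chosen host port varies:

```bash
$ docker run -d --name web -P nginx

# Inspect the mapping Docker created:
$ docker port web

80/tcp -> 0.0.0.0:32768
```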
Alternatively, the mapping can be specified explicitly using the `-p SPEC` or
`--publish=SPEC` option. This lets you choose which port on the Docker host
(which can be any port at all, not just one within the _ephemeral port range_)
maps to which port in the container.

Either way, you can peek at what Docker has accomplished in your
network stack by examining your NAT tables.

```
# What your NAT rules might look like when Docker
# is finished setting up a -P forward:

$ iptables -t nat -L -n

...
Chain DOCKER (2 references)
target     prot opt source               destination
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:49153 to:172.17.0.2:80

# What your NAT rules might look like when Docker
# is finished setting up a -p 80:80 forward:

Chain DOCKER (2 references)
target     prot opt source               destination
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80
```

You can see that Docker has exposed these container ports on `0.0.0.0`, the
wildcard IP address that matches any possible incoming port on the host
machine. If you want to be more restrictive and only allow container services to
be contacted through a specific external interface on the host machine, you have
two choices. When you invoke `docker run` you can use either `-p
IP:host_port:container_port` or `-p IP::port` to specify the external interface
for one particular binding.
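For instance, a sketch that binds a container port to the loopback interface only, so the service is reachable from the host itself but not from other machines (the image name is illustrative):

```bash
$ docker run -d --name local-web -p 127.0.0.1:8080:80 nginx

$ curl http://127.0.0.1:8080/
```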
Or if you always want Docker port forwards to bind to one specific IP address,
you can edit your system-wide Docker server settings and add the option
`--ip=IP_ADDRESS`. Remember to restart your Docker server after editing this
setting.

> **Note**: With hairpin NAT enabled (`--userland-proxy=false`), container port
> exposure is achieved purely through iptables rules, and no attempt to bind the
> exposed port is ever made. This means that nothing prevents shadowing a
> previously listening service outside of Docker through exposing the same port
> for a container. In such a conflict, the iptables rules created by Docker
> take precedence and route traffic to the container.

The `--userland-proxy` parameter, true by default, provides a userland
implementation for inter-container and outside-to-container communication. When
it is disabled, Docker uses both an additional `MASQUERADE` iptables rule and the
`net.ipv4.route_localnet` kernel parameter, which allow the host machine to
connect to a local container's exposed port through the commonly used loopback
address. This alternative is preferred for performance reasons.

## Related information

- [Understand Docker container networks](../index.md)
- [Work with network commands](../work-with-networks.md)
- [Legacy container links](dockerlinks.md)
@@ -1,85 +0,0 @@
---
description: Learn how to build your own bridge interface
keywords: docker, bridge, docker0, network
title: Build your own bridge
---

This section explains how to build your own bridge to replace the Docker default
bridge. This is a `bridge` network named `bridge` created automatically when you
install Docker.

> **Note**: The [Docker networks feature](../index.md) allows you to
> create user-defined networks in addition to the default bridge network.

You can set up your own bridge before starting Docker and configure Docker to
use your bridge instead of the default `docker0` bridge.

> **Note**: These instructions use the `ip` command, which is available on
> all modern Linux distributions. If you do not have the `ip` command, you may
> need to use the `brctl` command. Instructions for that command are out of
> scope for this topic.

1.  Create the new bridge, configure it to use the IP address pool
    `192.168.5.0 - 192.168.5.255`, and activate it.

    ```bash
    $ sudo ip link add name bridge0 type bridge

    $ sudo ip addr add 192.168.5.1/24 dev bridge0

    $ sudo ip link set dev bridge0 up
    ```

    Display the new bridge's settings.

    ```bash
    $ ip addr show bridge0

    4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
        link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
        inet 192.168.5.1/24 scope global bridge0
           valid_lft forever preferred_lft forever
    ```

2.  Configure Docker to use the new bridge by setting the option in the
    `daemon.json` file, which is located in `/etc/docker/` on
    Linux or `C:\ProgramData\docker\config\` on Windows Server. On Docker for
    Mac or Docker for Windows, click the Docker icon, choose **Preferences**,
    and go to **Daemon**.

    If the `daemon.json` file does not exist, create it. Assuming there
    are no other settings in the file, it should have the following contents:

    ```json
    {
      "bridge": "bridge0"
    }
    ```

    Restart Docker for the changes to take effect.

3.  Confirm that the new outgoing NAT masquerade is set up.

    ```bash
    $ sudo iptables -t nat -L -n

    Chain POSTROUTING (policy ACCEPT)
    target      prot opt source               destination
    MASQUERADE  all  --  192.168.5.0/24       0.0.0.0/0
    ```

4.  Remove the now-unused `docker0` bridge and flush the `POSTROUTING` table.

    ```bash
    $ sudo ip link set dev docker0 down

    $ sudo ip link del name docker0

    $ sudo iptables -t nat -F POSTROUTING
    ```

5.  Create a new container, and verify that it uses an IP address from the new
    address range.
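    For instance, a quick check using the `alpine` image as a stand-in:

    ```bash
    $ docker run --rm alpine ip addr show eth0
    ```

    The `inet` line should show an address in `192.168.5.0/24`, such as
    `inet 192.168.5.2/24 scope global eth0`.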
    When you add and remove interfaces from the bridge by starting and stopping
    containers, you can run `ip addr` and `ip route` inside a container to confirm
    that it has an address in the bridge's IP address range and uses the Docker
    host's IP address on the bridge as its default gateway to the rest of the
    Internet.
@@ -1,127 +0,0 @@
---
description: Learn how to configure DNS in Docker
keywords: docker, bridge, docker0, network
title: Configure container DNS
---

The information in this section explains configuring container DNS within
the Docker default bridge. This is a `bridge` network named `bridge` created
automatically when you install Docker.

> **Note**: The [Docker networks feature](../index.md) allows you to create user-defined networks in addition to the default bridge network. Refer to the [Docker Embedded DNS](../configure-dns.md) section for more information on DNS configurations in user-defined networks.

How can Docker supply each container with a hostname and DNS configuration, without having to build a custom image with the hostname written inside? Its trick is to overlay three crucial `/etc` files inside the container with virtual files where it can write fresh information. You can see this by running `mount` inside a container:

```
root@f38c87f2a42d:/# mount

...
/dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
/dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
/dev/disk/by-uuid/1fec...ebdf on /etc/resolv.conf type ext4 ...
...
```

This arrangement allows Docker to do clever things like keep `resolv.conf` up to date across all containers when the host machine receives new configuration over DHCP later. The exact details of how Docker maintains these files inside the container can change from one Docker version to the next, so you should leave the files themselves alone and use the following Docker options instead.

Four different options affect container domain name services.

<table>
  <tr>
    <td>
      <p>
        <code>-h HOSTNAME</code> or <code>--hostname=HOSTNAME</code>
      </p>
    </td>
    <td>
      <p>
        Sets the hostname by which the container knows itself. This is written
        into <code>/etc/hostname</code>, into <code>/etc/hosts</code> as the name
        of the container's host-facing IP address, and is the name that
        <code>/bin/bash</code> inside the container displays inside its
        prompt. But the hostname is not easy to see from outside the container.
        It does not appear in <code>docker ps</code> nor in the
        <code>/etc/hosts</code> file of any other container.
      </p>
    </td>
  </tr>
  <tr>
    <td>
      <p>
        <code>--link=CONTAINER_NAME_or_ID:ALIAS</code>
      </p>
    </td>
    <td>
      <p>
        Using this option as you <code>run</code> a container gives the new
        container's <code>/etc/hosts</code> an extra entry named
        <code>ALIAS</code> that points to the IP address of the container
        identified by <code>CONTAINER_NAME_or_ID</code>. This lets processes
        inside the new container connect to the hostname <code>ALIAS</code>
        without having to know its IP. The <code>--link=</code> option is
        discussed in more detail below. Because Docker may assign a different IP
        address to the linked containers on restart, Docker updates the
        <code>ALIAS</code> entry in the <code>/etc/hosts</code> file of the
        recipient containers.
      </p>
    </td>
  </tr>
  <tr>
    <td><p>
      <code>--dns=IP_ADDRESS...</code>
    </p></td>
    <td><p>
      Sets the IP addresses added as <code>nameserver</code> lines to the container's
      <code>/etc/resolv.conf</code> file. Processes in the container, when
      confronted with a hostname not in <code>/etc/hosts</code>, connect to
      these IP addresses on port 53 looking for name resolution services.
    </p></td>
  </tr>
  <tr>
    <td><p>
      <code>--dns-search=DOMAIN...</code>
    </p></td>
    <td><p>
      Sets the domain names that are searched when a bare unqualified hostname is
      used inside of the container, by writing <code>search</code> lines into the
      container's <code>/etc/resolv.conf</code>. When a container process attempts
      to access <code>host</code> and the search domain <code>example.com</code>
      is set, for instance, the DNS logic not only looks up <code>host</code>
      but also <code>host.example.com</code>.
    </p>
    <p>
      Use <code>--dns-search=.</code> if you don't wish to set the search domain.
    </p></td>
  </tr>
  <tr>
    <td><p>
      <code>--dns-opt=OPTION...</code>
    </p></td>
    <td><p>
      Sets the options used by DNS resolvers by writing an <code>options</code>
      line into the container's <code>/etc/resolv.conf</code>.
    </p>
    <p>
      See documentation for <code>resolv.conf</code> for a list of valid options.
    </p></td>
  </tr>
</table>
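As an illustration, a sketch of `-h` writing the hostname into those overlaid files; the hostname is a placeholder:

```bash
$ docker run --rm -h myhost.example.com alpine \
    sh -c "cat /etc/hostname; grep myhost /etc/hosts"
```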
Regarding DNS settings, in the absence of the `--dns=IP_ADDRESS...`, `--dns-search=DOMAIN...`, or `--dns-opt=OPTION...` options, Docker makes each container's `/etc/resolv.conf` look like the `/etc/resolv.conf` of the host machine (where the `docker` daemon runs). When creating the container's `/etc/resolv.conf`, the daemon filters out all localhost IP address `nameserver` entries from the host's original file.

Filtering is necessary because all localhost addresses on the host are unreachable from the container's network. After this filtering, if there are no more `nameserver` entries left in the container's `/etc/resolv.conf` file, the daemon adds public Google DNS nameservers (8.8.8.8 and 8.8.4.4) to the container's DNS configuration. If IPv6 is enabled on the daemon, the public IPv6 Google DNS nameservers are also added (2001:4860:4860::8888 and 2001:4860:4860::8844).

> **Note**: If you need access to a host's localhost resolver, you must modify your DNS service on the host to listen on a non-localhost address that is reachable from within the container.

You might wonder what happens when the host machine's `/etc/resolv.conf` file changes. The `docker` daemon has a file change notifier active which watches for changes to the host DNS configuration.

> **Note**: The file change notifier relies on the Linux kernel's inotify feature. Because this feature is currently incompatible with the overlay filesystem driver, a Docker daemon using "overlay" cannot take advantage of the `/etc/resolv.conf` auto-update feature.

When the host file changes, all stopped containers which have a `resolv.conf` matching the host's are updated immediately to the newest host configuration. Containers which are running when the host configuration changes need to be stopped and started to pick up the host changes, because there is no facility to ensure atomic writes of the `resolv.conf` file while the container is running. If the container's `resolv.conf` has been edited since it was started with the default configuration, no replacement is attempted, as it would overwrite the changes performed by the container. If the options (`--dns`, `--dns-search`, or `--dns-opt`) have been used to modify the default host configuration, then the replacement with an updated host's `/etc/resolv.conf` does not happen.

> **Note**: For containers which were created prior to the implementation of the `/etc/resolv.conf` update feature in Docker 1.5.0: those containers do **not** receive updates when the host `resolv.conf` file changes. Only containers created with Docker 1.5.0 and above utilize this auto-update feature.
@@ -1,165 +0,0 @@
---
description: Understand container communication
keywords: docker, container, communication, network
title: Understand container communication
---

The information in this section explains container communication within the
Docker default bridge. This is a `bridge` network named `bridge` created
automatically when you install Docker.

> **Note**: The [Docker networks feature](../index.md) allows you to create user-defined networks in addition to the default bridge network.

## Communicating to the outside world

Whether a container can talk to the world is governed by two factors. The first
factor is whether the host machine is forwarding its IP packets. The second is
whether the host's `iptables` allow this particular connection.

IP packet forwarding is governed by the `ip_forward` system parameter. Packets
can only pass between containers if this parameter is `1`. Usually, the default
setting of `--ip-forward=true` is correct, and causes
Docker to set `ip_forward` to `1` for you when the server starts up. If you
set `--ip-forward=false` and your system's kernel has it enabled, the
`--ip-forward=false` option has no effect. To check the setting on your kernel
or to turn it on manually:

```
$ sysctl net.ipv4.conf.all.forwarding

net.ipv4.conf.all.forwarding = 0

$ sysctl net.ipv4.conf.all.forwarding=1

$ sysctl net.ipv4.conf.all.forwarding

net.ipv4.conf.all.forwarding = 1
```

> **Note**: This setting does not affect containers that use the host
> network stack (`--network=host`).

Many users of Docker need `ip_forward` to be on, to at least make
communication _possible_ between containers and the wider world. It may also be
needed for inter-container communication if you are in a multiple bridge setup.

Docker never makes changes to your system `iptables` rules if you set
`--iptables=false` when the daemon starts. Otherwise the Docker server
appends forwarding rules to the `DOCKER` filter chain.

Docker flushes any pre-existing rules from the `DOCKER` and `DOCKER-ISOLATION`
filter chains, if they exist. For this reason, any rules needed to further
restrict access to containers need to be added after Docker has started.

Docker's forward rules permit all external source IPs by default. To allow only
a specific IP or network to access the containers, insert a negated rule at the
top of the `DOCKER` filter chain. For example, to restrict external access such
that _only_ source IP 8.8.8.8 can access the containers, the following rule
could be added:

```
$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
```

where *ext_if* is the name of the interface providing external connectivity to the host.

## Communication between containers

Whether two containers can communicate is governed, at the operating system level, by two factors.

- Does the network topology even connect the containers' network interfaces? By default, Docker attaches all containers to a single `docker0` bridge, providing a path for packets to travel between them. See the later sections of this document for other possible topologies.

- Do your `iptables` allow this particular connection? Docker never makes changes to your system `iptables` rules if you set `--iptables=false` when the daemon starts. Otherwise the Docker server adds a default rule to the `FORWARD` chain with a blanket `ACCEPT` policy if you retain the default `--icc=true`, or else sets the policy to `DROP` if `--icc=false`.

It is a strategic question whether to leave `--icc=true` or change it to
`--icc=false` so that `iptables` can protect other containers, and the Docker
host, from having arbitrary ports probed or accessed by a container that gets
compromised.

If you choose the most secure setting of `--icc=false`, then how can containers
communicate in those cases where you _want_ them to provide each other services?
The answer is the `--link=CONTAINER_NAME_or_ID:ALIAS` option, which was
mentioned in the previous section because of its effect upon name services. If
the Docker daemon is running with both `--icc=false` and `--iptables=true`
then, when it sees `docker run` invoked with the `--link=` option, the Docker
server inserts a pair of `iptables` `ACCEPT` rules so that the new
container can connect to the ports exposed by the other container, that is, the
ports that it mentioned in the `EXPOSE` lines of its `Dockerfile`.

> **Note**: The value `CONTAINER_NAME` in `--link=` must either be an
> auto-assigned Docker name like `stupefied_pare` or the name you assigned
> with `--name=` when you ran `docker run`. It cannot be a hostname, which Docker
> does not recognize in the context of the `--link=` option.

You can run the `iptables` command on your Docker host to see whether the `FORWARD` chain has a default policy of `ACCEPT` or `DROP`:

```
# When --icc=false, you should see a DROP rule:

$ sudo iptables -L -n

...
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
...

# When a --link= has been created under --icc=false,
# you should see port-specific ACCEPT rules overriding
# the subsequent DROP policy for all other packets:

$ sudo iptables -L -n

...
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80
```

> **Note**: Docker is careful that its host-wide `iptables` rules fully expose
> containers to each other's raw IP addresses, so connections from one container
> to another should always appear to be originating from the first container's own
> IP address.

## Container communication between hosts

For security reasons, Docker configures the `iptables` rules to prevent containers
from forwarding traffic from outside the host machine, on Linux hosts. Docker sets
the default policy of the `FORWARD` chain to `DROP`.

To override this default behavior you can manually change the default policy:

```bash
$ sudo iptables -P FORWARD ACCEPT
```

The `iptables` settings are lost when the system reboots. If you want
the change to be permanent, refer to your Linux distribution's documentation.

> **Note**: In Docker 1.12 and earlier, the default `FORWARD` chain policy was
> `ACCEPT`. When you upgrade to Docker 1.13 or higher, this default is
> automatically changed for you.
>
> If you had a previously working configuration with multiple containers
> spanned over multiple hosts, this change may cause the existing setup
> to stop working if you do not intervene.

### Why would you need to change the default `DROP` to `ACCEPT`?

Suppose you have two hosts and each has the following configuration:

```none
host1: eth0/192.168.7.1, docker0/172.17.0.0/16
host2: eth0/192.168.8.1, docker0/172.18.0.0/16
```

If the container running on `host1` needs the ability to communicate directly
with a container on `host2`, you need a route from `host1` to `host2`. After
the route exists, `host2` needs the ability to accept packets destined for its
running container, and forward them along. Setting the policy to `ACCEPT`
accomplishes this.
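A minimal sketch of those two steps, using the hypothetical addresses from the example above:

```bash
# On host1: route host2's container subnet via host2's eth0 address.
$ sudo ip route add 172.18.0.0/16 via 192.168.8.1

# On host2: allow forwarded packets to reach the local containers.
$ sudo iptables -P FORWARD ACCEPT
```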
@@ -1,123 +0,0 @@
---
description: Customizing docker0
keywords: docker, bridge, docker0, network
title: Customize the docker0 bridge
---

The information in this section explains how to customize the Docker default
bridge. This is a `bridge` network named `bridge` created automatically when you
install Docker.

> **Note**: The [Docker networks feature](/engine/userguide/networking/index.md)
> allows you to create user-defined networks in addition to the default bridge network.

By default, the Docker server creates and configures a network interface on the
host system called `docker0`, which is an Ethernet bridge device. If you
don't specify a different network when starting a container, the container is
connected to the bridge and all traffic coming from and going to the container
flows over the bridge to the Docker daemon, which handles routing on behalf of
the container.

Docker configures `docker0` with an IP address, netmask, and IP allocation range.
Containers which are connected to the default bridge are allocated IP addresses
within this range. Certain default settings apply to the default bridge unless
you specify otherwise. For instance, the default maximum transmission unit (MTU),
or the largest packet length that the container allows, defaults to 1500
bytes.

You can configure the default bridge network's settings using flags to the
`dockerd` command. However, the recommended way to configure the Docker daemon
is to use the `daemon.json` file, which is located in `/etc/docker/` on Linux.
If the file does not exist, create it. You can specify one or more of the
following settings to configure the default bridge network:

```json
{
  "bip": "192.168.1.5/24",
  "fixed-cidr": "192.168.1.5/25",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "10.20.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["10.20.1.2","10.20.1.3"]
}
```

Restart Docker after making changes to the `daemon.json` file.
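On a systemd-based Linux host, that typically means:

```bash
$ sudo systemctl restart docker
```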
The same options are presented as flags to `dockerd`, with an explanation for
each:

- `--bip=CIDR`: supply a specific IP address and netmask for the `docker0`
  bridge, using standard CIDR notation. For example: `192.168.1.5/24`.

- `--fixed-cidr=CIDR` and `--fixed-cidr-v6=CIDRv6`: restrict the IP range from
  the `docker0` subnet, using standard CIDR notation. For example:
  `172.16.1.0/28`. This range must be an IPv4 range for fixed IPs, and must
  be a subset of the bridge IP range (`docker0` or the bridge set
  using `--bridge` or the `bip` key in the `daemon.json` file). For example,
  with `--fixed-cidr=192.168.1.0/25`, IPs for your containers are chosen from
  the first half of addresses included in the 192.168.1.0/24 subnet.

- `--mtu=BYTES`: override the maximum packet length on `docker0`.

- `--default-gateway=IPV4_ADDRESS` and `--default-gateway-v6=IPV6_ADDRESS`:
  designate the default gateway for containers connected to the `docker0`
  bridge, which controls where they route traffic by default. Applicable for
  addresses set with the `--bip` and `--fixed-cidr` flags. For instance, you can
  configure `--fixed-cidr=172.17.2.0/24` and `--default-gateway=172.17.1.1`.

- `--dns=[]`: the DNS servers to use. For example: `--dns=172.17.2.10`.

Once you have one or more containers up and running, you can confirm that Docker
has properly connected them to the `docker0` bridge by running the `brctl`
command on the host machine and looking at the `interfaces` column of the
output. This example shows a `docker0` bridge with two containers connected:

```bash
$ sudo brctl show

bridge name     bridge id               STP enabled     interfaces
docker0         8000.3a1d7362b4ee       no              veth65f9
                                                        vethdda6
```

If the `brctl` command is not installed on your Docker host, run
`sudo apt-get install bridge-utils` (on Ubuntu hosts) to install it. For other
operating systems, consult the OS documentation.

Finally, the `docker0` Ethernet bridge settings are used every time you create a
new container. Docker selects a free IP address from the range available on the
bridge each time you `docker run` a new container, and configures the
container's `eth0` interface with that IP address and the bridge's netmask. The
Docker host's own IP address on the bridge is used as the default gateway by
which each container reaches the rest of the Internet.

```bash
# The network, as seen from a container

$ docker run --rm -it alpine /bin/ash

root@f38c87f2a42d:/# ip addr show eth0

24: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::306f:e0ff:fe35:5791/64 scope link
       valid_lft forever preferred_lft forever

root@f38c87f2a42d:/# ip route

default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3

root@f38c87f2a42d:/# exit
```

The Docker host does not forward container packets
out to the outside world unless its `ip_forward` system setting is `1`. See the
section on
[Communicating to the outside world](container-communication.md#communicating-to-the-outside-world)
for details.
@@ -1,18 +0,0 @@
---
description: Docker networking
keywords: network, networking, bridge, docker, documentation
title: Default bridge network
---

With the introduction of the Docker networks feature, you can create your own
user-defined networks. The Docker default bridge is created when you install
Docker Engine. It is a `bridge` network and is also named `bridge`. The topics
in this section are related to interacting with that default bridge network.

- [Understand container communication](container-communication.md)
- [Legacy container links](dockerlinks.md)
- [Binding container ports to the host](binding.md)
- [Build your own bridge](build-bridges.md)
- [Configure container DNS](configure-dns.md)
- [Customize the docker0 bridge](custom-docker0.md)
- [IPv6 with Docker](ipv6.md)
@ -1,274 +0,0 @@
|
||||||
---
|
|
||||||
description: How do we connect docker containers within and across hosts ?
|
|
||||||
keywords: docker, network, IPv6
|
|
||||||
title: IPv6 with Docker
|
|
||||||
---
|
|
||||||
|
|
||||||
The information in this section explains IPv6 with the Docker default bridge.
|
|
||||||
This is a `bridge` network named `bridge` created automatically when you install
|
|
||||||
Docker.
|
|
||||||
|
|
||||||
As we are [running out of IPv4
|
|
||||||
addresses](http://en.wikipedia.org/wiki/IPv4_address_exhaustion) the IETF has
|
|
||||||
standardized an IPv4 successor, [Internet Protocol Version
|
|
||||||
6](http://en.wikipedia.org/wiki/IPv6) , in [RFC
|
|
||||||
2460](https://www.ietf.org/rfc/rfc2460.txt). Both protocols, IPv4 and IPv6,
|
|
||||||
reside on layer 3 of the [OSI model](http://en.wikipedia.org/wiki/OSI_model).
|
|
||||||
|
|
||||||
## How IPv6 works on Docker
|
|
||||||
|
|
||||||
By default, the Docker daemon configures the container network for IPv4 only.
|
|
||||||
You can enable IPv4/IPv6 dualstack support by running the Docker daemon with the
|
|
||||||
`--ipv6` flag. Docker sets up the bridge `docker0` with the IPv6 [link-local
|
|
||||||
address](http://en.wikipedia.org/wiki/Link-local_address) `fe80::1`.
|
|
||||||
|
|
||||||
By default, containers that are created only get a link-local IPv6 address.
|
|
||||||
To assign globally routable IPv6 addresses to your containers you need to
|
|
||||||
specify an IPv6 subnet to pick the addresses from. Set the IPv6 subnet via the
|
|
||||||
`--fixed-cidr-v6` parameter when starting Docker daemon:
|
|
||||||
|
|
||||||
You can run `dockerd` with these flags directly, but it is recommended that you
|
|
||||||
set them in the
|
|
||||||
[`daemon.json`](/engine/reference/commandline/dockerd.md#daemon-configuration-file)
|
|
||||||
configuration file instead. The following example `daemon.json` enables IPv6 and
|
|
||||||
sets the IPv6 subnet to `2001:db8:1::/64`.
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"ipv6": true,
|
|
||||||
"fixed-cidr-v6": "2001:db8:1::/64"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
The subnet for Docker containers should at least have a size of `/80`, so that
|
|
||||||
an IPv6 address can end with the container's MAC address and you prevent NDP
|
|
||||||
neighbor cache invalidation issues in the Docker layer.
|
|
||||||
|
|
||||||
By default, `--fixed-cidr-v6` parameter causes Docker to add a new route to the
|
|
||||||
routing table, by basically running the three commands below on your behalf. To
|
|
||||||
prevent the automatic routing, set `ip-forward` to `false` in the `daemon.json`
|
|
||||||
file or start the Docker daemon with the `--ip-forward=false` flag. Then, to get
|
|
||||||
the same routing table that Docker would create automatically for you, issue the
|
|
||||||
following commands:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ ip -6 route add 2001:db8:1::/64 dev docker0
|
|
||||||
|
|
||||||
$ sysctl net.ipv6.conf.default.forwarding=1
|
|
||||||
|
|
||||||
$ sysctl net.ipv6.conf.all.forwarding=1
|
|
||||||
```
|
|
||||||
|
|
||||||
All traffic to the subnet `2001:db8:1::/64` is routed via the `docker0`
|
|
||||||
interface.
|
|
||||||
|
|
||||||
> **Note**: IPv6 forwarding may interfere with your existing IPv6
|
|
||||||
> configuration: If you are using Router Advertisements to get IPv6 settings for
|
|
||||||
> your host's interfaces, set `accept_ra` to `2` using the following command.
|
|
||||||
> Otherwise IPv6 enabled forwarding results in rejecting Router Advertisements.
|
|
||||||
>
|
|
||||||
> $ sysctl net.ipv6.conf.eth0.accept_ra=2
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
Each new container gets an IPv6 address from the defined subnet, and a
|
|
||||||
default route is added on `eth0` in the container via the address specified
|
|
||||||
by the daemon option `--default-gateway-v6` (or `default-gateway-v6` in
|
|
||||||
`daemon.json`) if present. The default gateway defaults to `fe80::1`.
|
|
||||||
|
|
||||||
This example provides a way to examine the IPv6 network settings within a
|
|
||||||
running container.
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker run -it alpine ash -c "ip -6 addr show dev eth0; ip -6 route show"
|
|
||||||
|
|
||||||
15: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500
|
|
||||||
inet6 2001:db8:1:0:0:242:ac11:3/64 scope global
|
|
||||||
valid_lft forever preferred_lft forever
|
|
||||||
inet6 fe80::42:acff:fe11:3/64 scope link
|
|
||||||
valid_lft forever preferred_lft forever
|
|
||||||
|
|
||||||
2001:db8:1::/64 dev eth0 proto kernel metric 256
|
|
||||||
fe80::/64 dev eth0 proto kernel metric 256
|
|
||||||
default via fe80::1 dev eth0 metric 1024
|
|
||||||
```
|
|
||||||
|
|
||||||
In this example, the container is assigned a link-local address with the subnet
|
|
||||||
`/64` (`fe80::42:acff:fe11:3/64`) and a globally routable IPv6 address
|
|
||||||
(`2001:db8:1:0:0:242:ac11:3/64`). The container creates connections to
|
|
||||||
addresses outside of the `2001:db8:1::/64` network via the link-local gateway at
|
|
||||||
`fe80::1` on `eth0`.
|
|
||||||
|
|
||||||
If your server or virtual machine has a `/64` IPv6 subnet assigned to it, such
|
|
||||||
as `2001:db8:23:42::/64`, you can split it up further and provide
|
|
||||||
Docker a `/80` subnet while using a separate `/80` subnet for other applications
|
|
||||||
on the host:
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
In this setup the subnet `2001:db8:23:42::/64` with a range from
|
|
||||||
`2001:db8:23:42:0:0:0:0` to `2001:db8:23:42:ffff:ffff:ffff:ffff` is attached to
|
|
||||||
`eth0`, with the host listening at `2001:db8:23:42::1`. The subnet
|
|
||||||
`2001:db8:23:42:1::/80` with an address range from `2001:db8:23:42:1:0:0:0` to
|
|
||||||
`2001:db8:23:42:1:ffff:ffff:ffff` is attached to `docker0` and is used by
|
|
||||||
containers.

### Using NDP proxying

If your Docker host is the only part of an IPv6 subnet but does not have an IPv6
subnet assigned, you can use NDP proxying to connect your containers to the
internet via IPv6. If the host with IPv6 address `2001:db8::c001` is part of
the subnet `2001:db8::/64` and your IaaS provider allows you to
configure the IPv6 addresses `2001:db8::c000` to `2001:db8::c00f`, your network
configuration may look like the following:

```bash
$ ip -6 addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 2001:db8::c001/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::601:3fff:fea1:9c01/64 scope link
       valid_lft forever preferred_lft forever
```

To split up the configurable address range into two subnets
`2001:db8::c000/125` and `2001:db8::c008/125`, use the following `daemon.json`
settings. The first subnet is used by non-Docker processes on the host, and
the second is used by Docker.

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8::c008/125"
}
```

The Docker subnet is within the subnet managed by your router and connected to
`eth0`. All containers with addresses assigned by Docker are expected to be
found within the router subnet, and the router can communicate with these
containers directly.



When the router wants to send an IPv6 packet to the first container, it
transmits a _neighbor solicitation request_, asking "Who has `2001:db8::c009`?"
However, no host on the subnet has the address; the container with the address
is hidden behind the Docker host. The Docker host therefore must listen for
neighbor solicitation requests and respond that it is the device with the
address. This functionality is called an _NDP proxy_ and is handled by the kernel
on the host machine. To enable the NDP proxy, execute the following command:

```bash
$ sysctl net.ipv6.conf.eth0.proxy_ndp=1
```

Next, add the container's IPv6 address to the NDP proxy table:

```bash
$ ip -6 neigh add proxy 2001:db8::c009 dev eth0
```

From now on, the kernel answers neighbor solicitation requests for this address
on the device `eth0`. All traffic to this IPv6 address is routed through the
Docker host, which forwards it to the container's network according to its
routing table via the `docker0` device:

```bash
$ ip -6 route show

2001:db8::c008/125 dev docker0  metric 1
2001:db8::/64 dev eth0  proto kernel  metric 256
```

Execute the `ip -6 neigh add proxy ...` command for every IPv6
address in your Docker subnet. Unfortunately there is no way to
add a whole subnet with a single command. An alternative approach is
to use an NDP proxy daemon such as
[ndppd](https://github.com/DanielAdolfsson/ndppd).
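
As a stopgap before reaching for a daemon, a short shell loop can register the handful of addresses in this example's `/125` range; adapt the list to your own subnet:

```bash
# Sketch: add an NDP proxy entry for each container address in
# 2001:db8::c008/125. The address list matches the example range above.
for addr in 2001:db8::c009 2001:db8::c00a 2001:db8::c00b \
            2001:db8::c00c 2001:db8::c00d 2001:db8::c00e 2001:db8::c00f; do
    ip -6 neigh add proxy "$addr" dev eth0
done
```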

## Docker IPv6 cluster

### Switched network environment

Using routable IPv6 addresses allows you to enable communication between
containers on different hosts. Let's have a look at a simple Docker IPv6 cluster
example:



The Docker hosts are in the `2001:db8:0::/64` subnet. Host1 is configured to
provide addresses from the `2001:db8:1::/64` subnet to its containers. It has
three routes configured:

- Route all traffic to `2001:db8:0::/64` via `eth0`
- Route all traffic to `2001:db8:1::/64` via `docker0`
- Route all traffic to `2001:db8:2::/64` via Host2 with IP `2001:db8::2`

Host1 also acts as a router on OSI layer 3. When one of the network clients
tries to contact a target that is specified in Host1's routing table, Host1
forwards the traffic accordingly. It acts as a router for all networks it knows:
`2001:db8::/64`, `2001:db8:1::/64`, and `2001:db8:2::/64`.

On Host2 we have nearly the same configuration. Host2's containers get IPv6
addresses from `2001:db8:2::/64`. Host2 has three routes configured:

- Route all traffic to `2001:db8:0::/64` via `eth0`
- Route all traffic to `2001:db8:2::/64` via `docker0`
- Route all traffic to `2001:db8:1::/64` via Host1 with IP `2001:db8:0::1`

The difference from Host1 is that the network `2001:db8:2::/64` is directly
attached to Host2 via its `docker0` interface, whereas Host2 reaches
`2001:db8:1::/64` via Host1's IPv6 address `2001:db8::1`.

This way every container can contact every other container. The
containers `Container1-*` share the same subnet and contact each other directly.
The traffic between `Container1-*` and `Container2-*` is routed via Host1
and Host2 because those containers do not share the same subnet.

In a switched environment every host needs to know all routes to every subnet.
You always need to update the hosts' routing tables when you add a host to, or
remove one from, the cluster.
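
For example, the static routes from the diagram could be added with commands like the following; the addresses are the illustrative ones used above:

```bash
# Sketch, run on Host1: reach Host2's container subnet via Host2.
$ ip -6 route add 2001:db8:2::/64 via 2001:db8::2

# Sketch, run on Host2: the mirror-image route for Host1's containers.
$ ip -6 route add 2001:db8:1::/64 via 2001:db8::1
```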

Every configuration in the diagram that is shown below the dashed line is
handled by Docker: the `docker0` bridge IP address configuration, the route to
the Docker subnet on the host, the container IP addresses, and the routes on the
containers. The configuration above the line is up to the user and can be
adapted to the individual environment.

### Routed network environment

In a routed network environment you replace the layer 2 switch with a layer 3
router. Now the hosts just need to know their default gateway (the router) and
the route to their own containers (managed by Docker). The router holds all
routing information about the Docker subnets. When you add a host to or remove
one from this environment, just update the routing table in the router, rather
than on every host.



In this scenario containers on the same host can communicate directly with each
other. The traffic between containers on different hosts is routed via
their hosts and the router. For example, packets from `Container1-1` to
`Container2-1` are routed through `Host1`, `Router`, and `Host2` until they
arrive at `Container2-1`.

To keep the IPv6 addresses short in this example, a `/48` network is assigned to
every host. Each host uses a `/64` subnet of this for its own services and one
for Docker. When adding a third host, you would add a route for the subnet
`2001:db8:3::/48` in the router and configure Docker on Host3 with
`--fixed-cidr-v6=2001:db8:3:1::/64`.
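
Assuming the router is itself a Linux machine, the change might look like this sketch; Host3's address `2001:db8::3` is a hypothetical value chosen for illustration:

```bash
# Sketch, run on the router: send Host3's whole /48 towards Host3.
$ ip -6 route add 2001:db8:3::/48 via 2001:db8::3

# Sketch, run on Host3: let Docker allocate container addresses from its /64.
$ dockerd --ipv6 --fixed-cidr-v6=2001:db8:3:1::/64
```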

Remember that the subnet for Docker containers should be at least a `/80`.
This way an IPv6 address can end with the container's MAC address and you
prevent NDP neighbor cache invalidation issues in the Docker layer. So if you
have a `/64` for your whole environment, use `/76` subnets for the hosts and
`/80` for the containers. This way you can use 4096 hosts with 16 `/80` subnets
each.

Every configuration in the diagram that is visualized below the dashed line is
handled by Docker: the `docker0` bridge IP address configuration, the route to
the Docker subnet on the host, the container IP addresses, and the routes on the
containers. The configuration above the line is up to the user and can be
adapted to the individual environment.

@ -1,283 +0,0 @@
---
description: Use macvlan for container networking
keywords: Examples, Usage, network, docker, documentation, user guide, macvlan, cluster
title: Get started with Macvlan network driver
---

Libnetwork gives users total control over both IPv4 and IPv6 addressing. The VLAN drivers build on top of that, giving operators complete control of layer 2 VLAN tagging for users interested in underlay network integration. For overlay deployments that abstract away physical constraints, see the [multi-host overlay](/engine/userguide/networking/get-started-overlay/) driver.

Macvlan is a new twist on the tried and true network virtualization technique. The Linux implementations are extremely lightweight because rather than using the traditional Linux bridge for isolation, they are simply associated with a Linux Ethernet interface or sub-interface to enforce separation between networks and connectivity to the physical network.

Macvlan offers a number of unique features and plenty of room for further innovations with the various modes. Two high-level advantages of these approaches are the positive performance implications of bypassing the Linux bridge and the simplicity of having fewer moving parts. Removing the bridge that traditionally resides between the Docker host NIC and container interface leaves a very simple setup consisting of container interfaces attached directly to the Docker host interface. The result is easy access for external-facing services, as there are no port mappings in these scenarios.

## Pre-Requisites

- The examples on this page are all single-host and set up using Docker 1.12.0+.

- All of the examples can be performed on a single host running Docker. Any examples using a sub-interface like `eth0.10` can be replaced with `eth0` or any other valid parent interface on the Docker host. Sub-interfaces with a `.` are created on the fly. `-o parent` interfaces can also be left out of the `docker network create` altogether and the driver creates a `dummy` interface that enables local host connectivity to perform the examples.

- Kernel requirements:

    - To check your current kernel version, use `uname -r`
    - Macvlan Linux kernel v3.9–3.19 and 4.0+

## Macvlan Bridge Mode example usage

Macvlan Bridge mode has a unique MAC address per container, which the Docker host uses to track MAC-to-port mappings.

- Macvlan driver networks are attached to a parent Docker host interface. Examples are a physical interface such as `eth0`, a sub-interface for 802.1q VLAN tagging like `eth0.10` (`.10` representing VLAN `10`), or even bonded host adaptors which bundle two Ethernet interfaces into a single logical interface.

- The specified gateway is external to the host, provided by the network infrastructure.

- Each Macvlan Bridge mode Docker network is isolated from the others, and only one network can be attached to a parent interface at a time. There is a theoretical limit of 4,094 sub-interfaces per host adaptor that a Docker network could be attached to.

- Any container inside the same subnet can talk to any other container in the same network without a gateway in `macvlan bridge` mode.

- The same `docker network` commands apply to the vlan drivers.

- In Macvlan mode, containers on separate networks cannot reach one another without an external process routing between the two networks/subnets. This also applies to multiple subnets within the same Docker network.

In the following example, `eth0` on the Docker host has an IP on the `172.16.86.0/24` network and a default gateway of `172.16.86.1`. The gateway is an external router with an address of `172.16.86.1`. An IP address is not required on the Docker host interface `eth0` in `bridge` mode; it merely needs to be on the proper upstream network to get forwarded by a network switch or network router.



> **Note**: For Macvlan bridge mode the subnet values need to match those of the Docker host's NIC interface. For example, use the same subnet and gateway of the Docker host Ethernet interface that is specified by the `-o parent=` option.

- The parent interface used in this example is `eth0` and it is on the subnet `172.16.86.0/24`. The containers in the `docker network` also need to be on this same subnet as the parent `-o parent=`. The gateway is an external router on the network, not any IP masquerading or any other local proxy.

- The driver is specified with the `-d driver_name` option. In this case `-d macvlan`.

- The parent interface `-o parent=eth0` is configured as follows:

```
ip addr show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 172.16.86.250/24 brd 172.16.86.255 scope global eth0
```

Create the macvlan network and run a couple of containers attached to it:

```
# Macvlan  (-o macvlan_mode= Defaults to Bridge mode if not specified)
docker network create -d macvlan \
    --subnet=172.16.86.0/24 \
    --gateway=172.16.86.1 \
    -o parent=eth0 pub_net

# Run a container on the new network specifying the --ip address.
docker run --net=pub_net --ip=172.16.86.10 -itd alpine /bin/sh

# Start a second container and ping the first
docker run --net=pub_net -it --rm alpine /bin/sh
ping -c 4 172.16.86.10
```

Take a look at the container's IP and routing table:

```
ip a show eth0
    eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 46:b2:6b:26:2f:69 brd ff:ff:ff:ff:ff:ff
    inet 172.16.86.2/24 scope global eth0

ip route
default via 172.16.86.1 dev eth0
172.16.86.0/24 dev eth0  src 172.16.86.2

# NOTE: the containers can NOT ping the underlying host interfaces as
# they are intentionally filtered by Linux for additional isolation.
# In this case the containers cannot ping the -o parent=172.16.86.250
```

You can explicitly specify the `bridge` mode option with `-o macvlan_mode=bridge`, but since it is the default, the network is in `bridge` mode either way.

While the `eth0` interface does not need to have an IP address in Macvlan Bridge mode, it is not uncommon to have an IP address on the interface. Addresses can be excluded from the default built-in IPAM by using the `--aux-address=x.x.x.x` flag. This excludes the specified address from being handed out to containers. The following is the same network example as above, but it blocks the `-o parent=eth0` address from being handed out to a container.

```
docker network create -d macvlan \
    --subnet=172.16.86.0/24 \
    --gateway=172.16.86.1 \
    --aux-address="exclude_host=172.16.86.250" \
    -o parent=eth0 pub_net
```

Another option for subpool IP address selection in a network provided by the default Docker IPAM driver is to use `--ip-range=`. This instructs the driver to allocate container addresses from this pool rather than from the broader range of the `--subnet=` argument, as seen in the following example, which allocates addresses beginning at `192.168.32.128` and increments upwards from there.

```
docker network create -d macvlan \
    --subnet=192.168.32.0/24 \
    --ip-range=192.168.32.128/25 \
    --gateway=192.168.32.254 \
    -o parent=eth0 macnet32

# Start a container and verify the address is 192.168.32.128
docker run --net=macnet32 -it --rm alpine /bin/sh
```

The network can then be deleted with:

```
docker network rm <network_name or id>
```

> Communication with the Docker host over macvlan
>
> - When using macvlan, you cannot ping or communicate with the default namespace IP address.
>   For example, if you create a container and try to ping the Docker host's `eth0`, it does
>   **not** work. That traffic is explicitly filtered by the kernel modules themselves to
>   offer additional provider isolation and security.
>
> - A macvlan subinterface can be added to the Docker host, to allow traffic between the Docker
>   host and containers. The IP address needs to be set on this subinterface and removed from
>   the parent interface.

```
ip link add mac0 link $PARENTDEV type macvlan mode bridge
```
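
Continuing that sketch, moving the host's address from the parent interface onto the new subinterface might look like the following; `$PARENTDEV` and the address are placeholders:

```bash
# Sketch: move the host IP to the macvlan subinterface so the host can
# reach the containers. The address is a placeholder from the example above.
ip addr del 172.16.86.250/24 dev $PARENTDEV
ip addr add 172.16.86.250/24 dev mac0
ip link set mac0 up
```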

On Debian or Ubuntu, adding the following to `/etc/network/interfaces` makes this persistent.
Consult your operating system documentation for more details.

```none
auto eno1
iface eno1 inet manual

auto mac0
iface mac0 inet dhcp
        pre-up ip link add mac0 link eno1 type macvlan mode bridge
        post-down ip link del mac0 link eno1 type macvlan mode bridge
```

For more on Docker networking commands, see
[Working with Docker network commands](/engine/userguide/networking/work-with-networks/).

## Macvlan 802.1q Trunk Bridge Mode example usage

VLANs (Virtual Local Area Networks) have long been a primary means of virtualizing data center networks and are still in virtually all existing networks today. VLANs work by tagging a Layer-2 isolation domain with a 12-bit identifier ranging from 1 to 4094 that is inserted into a packet header, enabling a logical grouping of one or more subnets of both IPv4 and IPv6. It is very common for network operators to separate traffic using VLANs based on a subnet's function or security profile such as `web`, `db` or any other isolation needs.

It is very common for a compute host to be required to run multiple virtual networks concurrently. Linux networking has long supported VLAN tagging, also known by its standard 802.1q, for maintaining datapath isolation between networks. The Ethernet link connected to a Docker host can be configured to support the 802.1q VLAN IDs by creating Linux sub-interfaces, each one dedicated to a unique VLAN ID.



Trunking 802.1q to a Linux host is notoriously painful for many in operations. It requires configuration file changes to be persistent through a reboot. If a bridge is involved, a physical NIC needs to be moved into the bridge and the bridge then gets the IP address. This has led to many a stranded server, since the risk of cutting off access during that convoluted process is high.

Like all of the Docker network drivers, the overarching goal is to alleviate the operational pains of managing network resources. To that end, when a network receives a sub-interface as the parent that does not exist, the driver creates the VLAN-tagged interface while creating the network.

In the case of a host reboot, instead of needing to modify often complex network configuration files, the driver recreates all network links when the Docker daemon restarts. The driver tracks whether it originally created the VLAN-tagged sub-interface with `docker network create`, and **only** recreates the sub-interface after a restart, or deletes the link on `docker network rm`, if it created it in the first place.

If you don't want Docker to modify the `-o parent` sub-interface, pass a link that already exists as the parent interface. Parent interfaces such as `eth0` are not deleted, only sub-interfaces that are not master links.

For the driver to add/delete the vlan sub-interfaces, the format needs to be `interface_name.vlan_tag`.

For example: `eth0.50` denotes a parent interface of `eth0` with a slave of `eth0.50` tagged with vlan id `50`. The equivalent `ip link` command would be `ip link add link eth0 name eth0.50 type vlan id 50`.

**Vlan ID 50**

In the first network, tagged and isolated by the Docker host, `eth0.50` is the parent interface tagged with vlan id `50`, specified with `-o parent=eth0.50`. Other naming formats can be used, but the links need to be added and deleted manually using `ip link` or Linux configuration files. As long as the `-o parent` exists, anything can be used if it is compliant with Linux netlink.

```
# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged
docker network create -d macvlan \
    --subnet=192.168.50.0/24 \
    --gateway=192.168.50.1 \
    -o parent=eth0.50 macvlan50

# In two separate terminals, start a Docker container and the containers can now ping one another.
docker run --net=macvlan50 -it --name macvlan_test5 --rm alpine /bin/sh
docker run --net=macvlan50 -it --name macvlan_test6 --rm alpine /bin/sh
```

**Vlan ID 60**

In the second network, tagged and isolated by the Docker host, `eth0.60` is the parent interface tagged with vlan id `60`, specified with `-o parent=eth0.60`. The `macvlan_mode=` defaults to `macvlan_mode=bridge`. It can also be explicitly set with the same result, as shown in the next example.

```
# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged.
docker network create -d macvlan \
    --subnet=192.168.60.0/24 \
    --gateway=192.168.60.1 \
    -o parent=eth0.60 \
    -o macvlan_mode=bridge macvlan60

# In two separate terminals, start a Docker container and the containers can now ping one another.
docker run --net=macvlan60 -it --name macvlan_test7 --rm alpine /bin/sh
docker run --net=macvlan60 -it --name macvlan_test8 --rm alpine /bin/sh
```

**Example:** Multi-Subnet Macvlan 802.1q Trunking

The same as the example before, except there is an additional subnet bound to the network that the user can choose to provision containers on. In Macvlan bridge mode, containers can only ping one another if they are on the same subnet/broadcast domain, unless there is an external router that routes the traffic (answers ARP etc.) between the two subnets.

```
### Create multiple L2 subnets
docker network create -d macvlan \
    --subnet=192.168.210.0/24 \
    --subnet=192.168.212.0/24 \
    --gateway=192.168.210.254 \
    --gateway=192.168.212.254 \
    -o macvlan_mode=bridge macvlan210

# Test 192.168.210.0/24 connectivity between containers
docker run --net=macvlan210 --ip=192.168.210.10 -itd alpine /bin/sh
docker run --net=macvlan210 --ip=192.168.210.9 -it --rm alpine ping -c 2 192.168.210.10

# Test 192.168.212.0/24 connectivity between containers
docker run --net=macvlan210 --ip=192.168.212.10 -itd alpine /bin/sh
docker run --net=macvlan210 --ip=192.168.212.9 -it --rm alpine ping -c 2 192.168.212.10
```

## Dual Stack IPv4 IPv6 Macvlan Bridge Mode

**Example:** Macvlan Bridge mode, 802.1q trunk, VLAN ID: 218, Multi-Subnet, Dual Stack

```
# Create multiple bridge subnets with a gateway of x.x.x.1:
docker network create -d macvlan \
    --subnet=192.168.216.0/24 --subnet=192.168.218.0/24 \
    --gateway=192.168.216.1 --gateway=192.168.218.1 \
    --subnet=2001:db8:abc8::/64 --gateway=2001:db8:abc8::10 \
    -o parent=eth0.218 \
    -o macvlan_mode=bridge macvlan216

# Start a container on the first subnet 192.168.216.0/24
docker run --net=macvlan216 --name=macnet216_test --ip=192.168.216.10 -itd alpine /bin/sh

# Start a container on the second subnet 192.168.218.0/24
docker run --net=macvlan216 --name=macnet218_test --ip=192.168.218.10 -itd alpine /bin/sh

# Ping the first container started on the 192.168.216.0/24 subnet
docker run --net=macvlan216 --ip=192.168.216.11 -it --rm alpine /bin/sh
ping 192.168.216.10

# Ping the first container started on the 192.168.218.0/24 subnet
docker run --net=macvlan216 --ip=192.168.218.11 -it --rm alpine /bin/sh
ping 192.168.218.10
```

View the details of one of the containers:

```
docker run --net=macvlan216 --ip=192.168.216.11 -it --rm alpine /bin/sh

root@526f3060d759:/# ip a show eth0
    eth0@if92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 8e:9a:99:25:b6:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.216.11/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc4::8c9a:99ff:fe25:b616/64 scope link tentative
       valid_lft forever preferred_lft forever
    inet6 2001:db8:abc8::2/64 scope link nodad
       valid_lft forever preferred_lft forever

# Specified v4 gateway of 192.168.216.1
root@526f3060d759:/# ip route
default via 192.168.216.1 dev eth0
192.168.216.0/24 dev eth0  proto kernel  scope link  src 192.168.216.11

# Specified v6 gateway of 2001:db8:abc8::10
root@526f3060d759:/# ip -6 route
2001:db8:abc4::/64 dev eth0  proto kernel  metric 256
2001:db8:abc8::/64 dev eth0  proto kernel  metric 256
default via 2001:db8:abc8::10 dev eth0  metric 1024
```

@ -1,676 +0,0 @@
---
description: How do we connect docker containers within and across hosts?
keywords: network, networking, iptables, user-defined networks, bridge, firewall, ports
redirect_from:
- /engine/userguide/networking/dockernetworks/
- /articles/networking/
title: Docker container networking
---

This section provides an overview of Docker's default networking behavior,
including the type of networks created by default and how to create your own
user-defined networks. It also describes the resources required to create
networks on a single host or across a cluster of hosts.

For details about how Docker interacts with `iptables` on Linux hosts, see
[Docker and `iptables`](#docker-and-iptables).

## Default networks

When you install Docker, it creates three networks automatically. You can list
these networks using the `docker network ls` command:

```
$ docker network ls

NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host
```

These three networks are built into Docker. When
you run a container, you can use the `--network` flag to specify which networks
your container should connect to.

The `bridge` network represents the `docker0` network present in all Docker
installations. Unless you specify otherwise with the `docker run
--network=<NETWORK>` option, the Docker daemon connects containers to this
network by default. You can see this bridge as part of a host's network stack by
using the `ip addr show` command (or short form, `ip a`) on the host. (The
`ifconfig` command is deprecated. It may also work or give you a `command not
found` error, depending on your system.)

```bash
$ ip addr show

docker0   Link encap:Ethernet  HWaddr 02:42:47:bc:3a:eb
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1100 (1.1 KB)  TX bytes:648 (648.0 B)
```

> Running on Docker for Mac or Docker for Windows?
>
> If you are using Docker for Mac (or running Linux containers on Docker for Windows), the
> `docker network ls` command works as described above, but the
> `ip addr show` and `ifconfig` commands may be present, but give you information about
> the IP addresses for your local host, not Docker container networks.
> This is because Docker uses network interfaces running inside a thin VM,
> instead of on the host machine itself.
>
> To use the `ip addr show` or `ifconfig` commands to browse Docker
> networks, log on to a [Docker machine](/machine/overview.md) such as a
> local VM or on a cloud provider like a
> [Docker machine on AWS](/machine/examples/aws.md) or a
> [Docker machine on Digital Ocean](/machine/examples/ocean.md).
> You can use `docker-machine ssh <machine-name>` to log on to your
> local or cloud hosted machines, or a direct `ssh` as described
> on the cloud provider site.

The `none` network adds a container to a container-specific network stack. That
container lacks a network interface. Attaching to such a container and looking
at its stack you see this:

```bash
$ docker container attach nonenetcontainer

root@0cb243cd1293:/# cat /etc/hosts
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters

root@0cb243cd1293:/# ip -4 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

root@0cb243cd1293:/#
```

> **Note**: You can detach from the container and leave it running with `CTRL-p CTRL-q`.

The `host` network adds a container on the host's network stack. As far as the
network is concerned, there is no isolation between the host machine and the
container. For instance, if you run a container that runs a web server on port
80 using host networking, the web server is available on port 80 of the host
machine.

The `none` and `host` networks are not directly configurable in Docker.
However, you can configure the default `bridge` network, as well as your own
user-defined bridge networks.
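
For example, one common customization of the default `bridge` network is changing its address range through `daemon.json`; the `bip` value below is purely illustrative:

```bash
# Sketch: re-address the default docker0 bridge. Choose a range that does
# not collide with your LAN, then restart the daemon.
$ cat /etc/docker/daemon.json
{
  "bip": "192.168.99.1/24"
}

$ sudo systemctl restart docker
```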

### The default bridge network

The default `bridge` network is present on all Docker hosts. If you do not
specify a different network, new containers are automatically connected to the
default `bridge` network.

The `docker network inspect` command returns information about a network:

```none
$ docker network inspect bridge

[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        },
        "Labels": {}
    }
]
```

Run the following two commands to start two `busybox` containers, which are each
connected to the default `bridge` network.

```bash
$ docker run -itd --name=container1 busybox

3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c

$ docker run -itd --name=container2 busybox

94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
```

Inspect the `bridge` network again after starting two containers. Both of the
`busybox` containers are connected to the network. Make note of their IP
addresses, which will be different on your host machine than in the example
below.

```none
$ docker network inspect bridge

[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
                "EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
                "EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        },
        "Labels": {}
    }
]
```

Containers connected to the default `bridge` network can communicate with each
other by IP address. **Docker does not support automatic service discovery on the
default bridge network. If you want containers to resolve IP addresses
by container name, you should use _user-defined networks_ instead**. You can link
two containers together using the legacy `docker run --link` option, but this
is not recommended in most cases.
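
As a quick sketch of the difference, name resolution works as soon as containers share a user-defined network; the network and container names below are illustrative:

```bash
# Sketch: on a user-defined bridge network, containers resolve each other
# by name via Docker's embedded DNS server.
$ docker network create my_bridge
$ docker run -itd --name=web --network=my_bridge busybox
$ docker run --rm --network=my_bridge busybox ping -c 2 web
```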

You can `attach` to a running container to see how the network looks from
inside the container. You are connected as `root`, so your command prompt is
a `#` character.

```none
$ docker container attach container1

root@3386a527aa08:/# ip -4 addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
633: eth0@if634: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
```

From inside the container, use the `ping` command to test the network connection
to the IP address of the other container.

```none
root@3386a527aa08:/# ping -w3 172.17.0.3

PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.074 ms

--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.083/0.096 ms
```

Use the `cat` command to view the `/etc/hosts` file on the container. This shows
the hostnames and IP addresses the container recognizes.

```
root@3386a527aa08:/# cat /etc/hosts

172.17.0.2  3386a527aa08
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters
```

To detach from the `container1` container and leave it running, use the keyboard
sequence **CTRL-p CTRL-q**. If you wish, attach to `container2` and repeat the
commands above.

The default `docker0` bridge network supports the use of port mapping and
`docker run --link` to allow communications among containers in the `docker0`
network. This approach is not recommended. Where possible, you should use
[user-defined bridge networks](#user-defined-networks) instead.

#### Disable the default bridge network

If you do not want the default bridge network to be created at all, add the
following to the `daemon.json` file. This only applies when the Docker daemon
runs on a Linux host.

```json
{
  "bridge": "none",
  "iptables": false
}
```

Restart Docker for the changes to take effect.

You can also manually start `dockerd` with the flags `--bridge=none
--iptables=false`. However, this may not start the daemon with the same
environment as the system init scripts, so other behaviors may be changed.

Disabling the default bridge network is an advanced option that most users do
not need.

## User-defined networks

It is recommended to use user-defined bridge networks to control which
containers can communicate with each other, and also to enable automatic DNS
resolution of container names to IP addresses. Docker provides default **network
drivers** for creating these networks. You can create a new **bridge network**,
**overlay network** or **MACVLAN network**. You can also create a **network
plugin** or **remote network** for complete customization and control.

You can create as many networks as you need, and you can connect a container to
zero or more of these networks at any given time. In addition, you can connect
and disconnect running containers from networks without restarting the
container. When a container is connected to multiple networks, its external
connectivity is provided via the first non-internal network, in lexical order.
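
A brief sketch of connecting and disconnecting a running container, reusing the `isolated_nw` network and `container2` from the examples on this page:

```bash
# Sketch: attach a running container to a second network, then detach it.
# Neither command requires restarting the container.
$ docker network connect isolated_nw container2
$ docker network disconnect isolated_nw container2
```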

The next few sections describe each of Docker's built-in network drivers in
greater detail.

### Bridge networks

A `bridge` network is the most common type of network used in Docker. Bridge
networks are similar to the default `bridge` network, but add some new features
and remove some old abilities. The following examples create some bridge
networks and perform some experiments on containers on these networks.

```none
$ docker network create --driver bridge isolated_nw

1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b

$ docker network inspect isolated_nw

[
    {
        "Name": "isolated_nw",
        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

$ docker network ls

NETWORK ID          NAME                DRIVER
9f904ee27bf5        none                null
cf03ee007fb4        host                host
7fca4eb8c647        bridge              bridge
c5ee82f76de3        isolated_nw         bridge
```

After you create the network, you can launch containers on it using the
`docker run --network=<NETWORK>` option.

```none
$ docker run --network=isolated_nw -itd --name=container3 busybox

8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c

$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c": {
                "EndpointID": "93b2db4a9b9a997beb912d28bcfc117f7b0eb924ff91d48cfa251d473e6a9b08",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```

The containers you launch into this network must reside on the same Docker host.
Each container in the network can immediately communicate with other containers
in the network. The network itself, however, isolates the containers from external
networks.



Within a user-defined bridge network, linking is not supported. You can
[expose and publish container ports](#exposing-and-publishing-ports) on
containers in this network. This is useful if you want to make a portion of the
`bridge` network available to an outside network.



A bridge network is useful in cases where you want to run a relatively small
network on a single host. You can, however, create significantly larger networks
by creating an `overlay` network.

### The `docker_gwbridge` network

The `docker_gwbridge` is a local bridge network which is automatically created by Docker
in two different circumstances:

- When you initialize or join a swarm, Docker creates the `docker_gwbridge` network and
  uses it for communication among swarm nodes on different hosts.

- When none of a container's networks can provide external connectivity, Docker connects
  the container to the `docker_gwbridge` network in addition to the container's other
  networks, so that the container can connect to external networks or other swarm nodes.

You can create the `docker_gwbridge` network ahead of time if you need a custom configuration,
but otherwise Docker creates it on demand. The following example creates the `docker_gwbridge`
network with some custom options.

```bash
$ docker network create --subnet 172.30.0.0/16 \
                        --opt com.docker.network.bridge.name=docker_gwbridge \
                        --opt com.docker.network.bridge.enable_icc=false \
                        docker_gwbridge
```

The `docker_gwbridge` network is always present when you use `overlay` networks.

### Overlay networks in swarm mode

You can create an overlay network on a manager node running in swarm mode
without an external key-value store. The swarm makes the overlay network
available only to nodes in the swarm that require it for a service. When you
create a service that uses the overlay network, the manager node automatically
extends the overlay network to nodes that run service tasks.

To learn more about running Docker Engine in swarm mode, refer to the
[Swarm mode overview](../../swarm/index.md).

The example below shows how to create a network and use it for a service from a
manager node in the swarm:

```bash
$ docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  my-multi-host-network

400g6bwzd68jizzdx5pgyoe95

$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx

716thylsndqma81j6kkkb5aus
```

Only swarm services can connect to overlay networks, not standalone containers.
For more information about swarms, see
[Docker swarm mode overlay network security model](overlay-security-model.md) and
[Attach services to an overlay network](../../swarm/networking.md).

### An overlay network without swarm mode

If you are not using Docker Engine in swarm mode, the `overlay` network requires
a valid key-value store service. Supported key-value stores include Consul,
Etcd, and ZooKeeper (Distributed store). Before creating a network in this way,
you must install and configure your chosen key-value store service. The Docker
hosts that you intend to network and the service must be able to communicate.

> **Note**: Docker Engine running in swarm mode is not compatible with networking
> with an external key-value store.

This way of using overlay networks is not recommended for most Docker users. It
can be used with standalone swarms and may be useful to system developers
building solutions on top of Docker. It may be deprecated in the future. If you
think you may need to use overlay networks in this way, see
[this guide](get-started-overlay.md).

### Custom network plugins

If your needs are not addressed by any of the above network mechanisms, you can
write your own network driver plugin, using Docker's plugin infrastructure.
The plugin runs as a separate process on the host which runs the Docker
daemon. Using network plugins is an advanced topic.

Network plugins follow the same restrictions and installation rules as other
plugins. All plugins use the plugin API, and have a lifecycle that encompasses
installation, starting, stopping, and activation.

Once you have created and installed a custom network driver, you can create
a network which uses that driver with the `--driver` flag.

```bash
$ docker network create --driver weave mynet
```

You can inspect the network, connect and disconnect containers from it, and
remove it. A specific plugin may have specific requirements. Check that plugin's
documentation for specific information. For more information on writing plugins,
see [Extending Docker](../../extend/legacy_plugins.md) and
[Writing a network driver plugin](../../extend/plugins_network.md).

### Embedded DNS server

The Docker daemon runs an embedded DNS server which provides DNS resolution among
containers connected to the same user-defined network, so that these containers
can resolve container names to IP addresses. If the embedded DNS server is
unable to resolve the request, it is forwarded to any external DNS servers
configured for the container. To facilitate this, when the container is created,
only the embedded DNS server, reachable at `127.0.0.11`, is listed in the
container's `resolv.conf` file. For more information on the embedded DNS server on
user-defined networks, see
[embedded DNS server in user-defined networks](configure-dns.md).
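
You can verify this from inside any container on a user-defined network; the network name below is illustrative, and the exact contents of `resolv.conf` may vary:

```bash
# Sketch: a container on a user-defined network lists only the embedded
# DNS server in its resolver configuration.
$ docker run --rm --network=my_bridge busybox cat /etc/resolv.conf

nameserver 127.0.0.11
```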

## Exposing and publishing ports

In Docker networking, there are two different mechanisms that directly involve
network ports: exposing and publishing ports. This applies to the default bridge
network and user-defined bridge networks.

- You expose ports using the `EXPOSE` keyword in the Dockerfile or the
  `--expose` flag to `docker run`. Exposing ports is a way of documenting which
  ports are used, but **does not actually map or open any ports**. Exposing ports
  is optional.

- You publish ports using the `--publish` or `--publish-all` flag to `docker run`.
  This tells Docker which ports to open on the container's network interface.
  When a port is published, it is mapped to an
  available high-order port (higher than `30000`) on the host machine, unless
  you specify the port to map to on the host machine at runtime. You cannot
  specify the port to map to on the host machine when you build the image (in the
  Dockerfile), because there is no way to guarantee that the port is available
  on the host machine where you run the image.

This example publishes port 80 in the container to a random high
port (in this case, `32768`) on the host machine. The `-d` flag causes the
container to run in the background so you can issue the `docker ps`
command.

```bash
$ docker run -it -d -p 80 nginx

$ docker ps

64879472feea        nginx               "nginx -g 'daemon ..."   43 hours ago        Up About a minute   443/tcp, 0.0.0.0:32768->80/tcp   blissful_mclean
```

The next example specifies that port 80 should be mapped to port 8080 on the
host machine. It fails if port 8080 is not available.

```bash
$ docker run -it -d -p 8080:80 nginx

$ docker ps

b9788c7adca3        nginx               "nginx -g 'daemon ..."   43 hours ago        Up 3 seconds        80/tcp, 443/tcp, 0.0.0.0:8080->80/tcp   goofy_brahmagupta
```

## Use a proxy server with containers

If your container needs to use an HTTP, HTTPS, or FTP proxy server, you can
configure it in different ways:

- In Docker 17.07 and higher, you can configure the Docker client to pass
  proxy information to containers automatically.

- In Docker 17.06 and lower, you must set appropriate environment variables
  within the container. You can do this when you build the image (which makes
  the image less portable) or when you create or run the container.

### Configure the Docker client

1.  On the Docker client, create or edit the file `~/.docker/config.json` in the
    home directory of the user which starts containers. Add JSON such as the
    following, substituting the type of proxy with `httpsProxy` or `ftpProxy` if
    necessary, and substituting the address and port of the proxy server. You
    can configure multiple proxy servers at the same time.

    You can optionally exclude hosts or ranges from going through the proxy
    server by setting a `noProxy` key to one or more comma-separated IP
    addresses or hosts. Using the `*` character as a wildcard is supported, as
    shown in this example.

    ```json
    {
      "proxies":
      {
        "default":
        {
          "httpProxy": "http://127.0.0.1:3001",
          "noProxy": "*.test.example.com,.example2.com"
        }
      }
    }
    ```

    Save the file.

2.  When you create or start new containers, the environment variables are
    set automatically within the container.
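
A quick sanity check, sketched under the assumption that the client configuration above is in place; the exact set of variables shown is illustrative:

```bash
# Sketch: the proxy variables should show up in a new container's environment.
$ docker run --rm busybox env | grep -i proxy

HTTP_PROXY=http://127.0.0.1:3001
NO_PROXY=*.test.example.com,.example2.com
```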

### Set the environment variables manually

When you build the image, or using the `--env` flag when you create or run the
container, you can set one or more of the following variables to the appropriate
value. This method makes the image less portable, so if you have Docker 17.07
or higher, you should [configure the Docker client](#configure-the-docker-client)
instead.

| Variable      | Dockerfile example                                 | `docker run` example                                 |
|:--------------|:---------------------------------------------------|:-----------------------------------------------------|
| `HTTP_PROXY`  | `ENV HTTP_PROXY "http://127.0.0.1:3001"`           | `--env HTTP_PROXY="http://127.0.0.1:3001"`           |
| `HTTPS_PROXY` | `ENV HTTPS_PROXY "https://127.0.0.1:3001"`         | `--env HTTPS_PROXY="https://127.0.0.1:3001"`         |
| `FTP_PROXY`   | `ENV FTP_PROXY "ftp://127.0.0.1:3001"`             | `--env FTP_PROXY="ftp://127.0.0.1:3001"`             |
| `NO_PROXY`    | `ENV NO_PROXY "*.test.example.com,.example2.com"`  | `--env NO_PROXY="*.test.example.com,.example2.com"`  |

## Links

Before Docker included user-defined networks, you could use the Docker `--link`
feature to allow a container to resolve another container's name to an IP
address, and also give it access to the linked container's environment variables.
Where possible, you should avoid using the legacy `--link` flag.

When you create links, they behave differently when you use the default `bridge`
network or when you use user-defined bridge networks. For more information, see
[Legacy Links](default_network/dockerlinks.md) for the link feature
in the default `bridge` network, and
[linking containers in user-defined networks](work-with-networks.md#linking-containers-in-user-defined-networks)
for links functionality in user-defined networks.
|
|
||||||
|
|
||||||

## Docker and iptables

Linux hosts use a kernel module called `iptables` to manage access to network
devices, including routing, port forwarding, network address translation (NAT),
and other concerns. Docker modifies `iptables` rules when you start or stop
containers which publish ports, when you create or modify networks or attach
containers to them, or for other network-related operations.

Full discussion of `iptables` is out of scope for this topic. To see which
`iptables` rules are in effect at any time, you can use `iptables -L`. Multiple
tables exist, and you can list a specific table, such as `nat` or `mangle`,
using a command such as `iptables -t nat -L`. For full
documentation about `iptables`, see
[netfilter/iptables](https://netfilter.org/documentation/){: target="_blank" class="_" }.

Typically, `iptables` rules are created by an initialization script or a daemon
process such as `firewalld`. The rules do not persist across a system reboot, so
the script or utility must run when the system boots, typically at run-level 3
or directly after the network is initialized. Consult the networking
documentation for your Linux distribution for suggestions about the appropriate
way to make `iptables` rules persistent.

Docker dynamically manages `iptables` rules for the daemon, as well as your
containers, services, and networks. In Docker 17.06 and higher, you can add
rules to a new chain called `DOCKER-USER`, and these rules are loaded before
any rules Docker creates automatically. This can be useful if you need to
pre-populate `iptables` rules that need to be in place before Docker runs.
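
For example, a minimal sketch of a `DOCKER-USER` rule that blocks a single
external address from reaching any published container port (the interface
name `ext_if` and the address `203.0.113.51` are placeholders for your
environment):

```bash
# Rules inserted into DOCKER-USER are evaluated before the rules
# Docker manages itself.
$ sudo iptables -I DOCKER-USER -i ext_if -s 203.0.113.51 -j DROP
```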

## Related information

- [Work with network commands](work-with-networks.md)
- [Get started with multi-host networking](get-started-overlay.md)
- [Managing Data in Containers](../../tutorials/dockervolumes.md)
- [Docker Machine overview](/machine)
- [Docker Swarm overview](/swarm)
- [Investigate the LibNetwork project](https://github.com/docker/libnetwork)

@@ -1,46 +0,0 @@

---
description: Docker swarm mode overlay network security model
keywords: network, docker, documentation, user guide, multihost, swarm mode, overlay
title: Docker swarm mode overlay network security model
---

Overlay networking for Docker Engine swarm mode comes secure out of the box. The
swarm nodes exchange overlay network information using a gossip protocol. By
default the nodes encrypt and authenticate information they exchange via gossip
using the [AES algorithm](https://en.wikipedia.org/wiki/Galois/Counter_Mode) in
GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data
every 12 hours.

You can also encrypt data exchanged between containers on different nodes on the
overlay network. To enable encryption, when you create an overlay network pass
the `--opt encrypted` flag:

```bash
$ docker network create --opt encrypted --driver overlay my-multi-host-network

dt0zvqn0saezzinc8a5g4worx
```

When you enable overlay encryption, Docker creates IPSEC tunnels between all the
nodes where tasks are scheduled for services attached to the overlay network.
These tunnels also use the AES algorithm in GCM mode and manager nodes
automatically rotate the keys every 12 hours.

> **Do not attach Windows nodes to encrypted overlay networks.**
>
> Overlay network encryption is not supported on Windows. If a Windows node
> attempts to connect to an encrypted overlay network, no error is detected but
> the node cannot communicate.
{: .warning }

## Swarm mode overlay networks and unmanaged containers

You can combine `--opt encrypted` with `--attachable`, and attach unmanaged
containers to that network:

```bash
$ docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network

9s1p1sfaqtvaibq6yp7e6jsrt
```

Just like services attached to an encrypted network, unmanaged containers
attached to a network created this way also benefit from encrypted traffic.

@@ -0,0 +1,258 @@

---
title: Use bridge networks
description: All about using user-defined bridge networks and the default bridge
keywords: network, bridge, user-defined, standalone
redirect_from:
- /engine/userguide/networking/default_network/custom-docker0/
- /engine/userguide/networking/default_network/dockerlinks/
- /engine/userguide/networking/default_network/build-bridges/
- /engine/userguide/networking/work-with-networks/
---

In terms of networking, a bridge network is a Link Layer device
which forwards traffic between network segments. A bridge can be a hardware
device or a software device running within a host machine's kernel.

In terms of Docker, a bridge network uses a software bridge which allows
containers connected to the same bridge network to communicate, while providing
isolation from containers which are not connected to that bridge network. The
Docker bridge driver automatically installs rules in the host machine so that
containers on different bridge networks cannot communicate directly with each
other.

Bridge networks apply to containers running on the **same** Docker daemon host.
For communication among containers running on different Docker daemon hosts, you
can either manage routing at the OS level, or you can use an
[overlay network](overlay.md).

When you start Docker, a [default bridge network](#use-the-default-bridge-network) (also
called `bridge`) is created automatically, and newly-started containers connect
to it unless otherwise specified. You can also create user-defined custom bridge
networks. **User-defined bridge networks are superior to the default `bridge`
network.**

## Differences between user-defined bridges and the default bridge

- **User-defined bridges provide better isolation and interoperability between containerized applications**.

  Containers connected to the same user-defined bridge network automatically
  expose **all ports** to each other, and **no ports** to the outside world. This allows
  containerized applications to communicate with each other easily, without
  accidentally opening access to the outside world.

  Imagine an application with a web front-end and a database back-end. The
  outside world needs access to the web front-end (perhaps on port 80), but only
  the front-end itself needs access to the database host and port. Using a
  user-defined bridge, only the web port needs to be opened, and the database
  application doesn't need any ports open, since the web front-end can reach it
  over the user-defined bridge.

  If you run the same application stack on the default bridge network, you need
  to open both the web port and the database port, using the `-p` or `--publish`
  flag for each. This means the Docker host needs to block access to the
  database port by other means.

- **User-defined bridges provide automatic DNS resolution between containers**.

  Containers on the default bridge network can only access each other by IP
  addresses, unless you use the [`--link` option](/network/links/), which is
  considered legacy. On a user-defined bridge network, containers can resolve
  each other by name or alias.

  Imagine the same application as in the previous point, with a web front-end
  and a database back-end. If you call your containers `web` and `db`, the web
  container can connect to the db container at `db`, no matter which Docker host
  the application stack is running on.

  If you run the same application stack on the default bridge network, you need
  to manually create links between the containers (using the legacy `--link`
  flag). These links need to be created in both directions, so you can see this
  gets complex with more than two containers which need to communicate.
  Alternatively, you can manipulate the `/etc/hosts` files within the containers,
  but this creates problems that are difficult to debug.

- **Containers can be attached and detached from user-defined networks on the fly**.

  During a container's lifetime, you can connect or disconnect it from
  user-defined networks on the fly. To remove a container from the default
  bridge network, you need to stop the container and recreate it with different
  network options.

- **Each user-defined network creates a configurable bridge**.

  If your containers use the default bridge network, you can configure it, but
  all the containers use the same settings, such as MTU and `iptables` rules.
  In addition, configuring the default bridge network happens outside of Docker
  itself, and requires a restart of Docker.

  User-defined bridge networks are created and configured using
  `docker network create`. If different groups of applications have different
  network requirements, you can configure each user-defined bridge separately,
  as you create it.

- **Linked containers on the default bridge network share environment variables**.

  Originally, the only way to share environment variables between two containers
  was to link them using the [`--link` flag](/network/links/). This type of
  variable sharing is not possible with user-defined networks. However, there
  are superior ways to share environment variables. A few ideas:

  - Multiple containers can mount a file or directory containing the shared
    information, using a Docker volume (see the sketch after this list).

  - Multiple containers can be started together using `docker-compose` and the
    compose file can define the shared variables.

  - You can use swarm services instead of standalone containers, and take
    advantage of shared [secrets](/engine/swarm/secrets.md) and
    [configs](/engine/swarm/configs.md).
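
For example, a minimal sketch of the volume-based approach (the volume and
file names are illustrative):

```bash
# Create a volume, write a shared settings file into it from one container,
# then read the same file from a second container.
$ docker volume create shared-config

$ docker run --rm -v shared-config:/config alpine \
    sh -c 'echo "DB_HOST=db" > /config/app.env'

$ docker run --rm -v shared-config:/config alpine cat /config/app.env
```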

Containers connected to the same user-defined bridge network effectively expose all ports
to each other. For a port to be accessible to containers or non-Docker hosts on
different networks, that port must be _published_ using the `-p` or `--publish`
flag.

## Manage a user-defined bridge

Use the `docker network create` command to create a user-defined bridge
network.

```bash
$ docker network create my-net
```

You can specify the subnet, the IP address range, the gateway, and other
options. See the
[docker network create](/engine/reference/commandline/network_create/#specify-advanced-options)
reference or the output of `docker network create --help` for details.
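
For example, a sketch that sets some of these options explicitly (the network
name and address ranges here are placeholders; pick values that fit your
environment):

```bash
# Create a bridge network with an explicit subnet, a narrower range
# for container addresses, and a fixed gateway address.
$ docker network create \
    --driver bridge \
    --subnet=172.28.0.0/16 \
    --ip-range=172.28.5.0/24 \
    --gateway=172.28.5.254 \
    my-custom-net
```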

Use the `docker network rm` command to remove a user-defined bridge
network. If containers are currently connected to the network,
[disconnect them](#disconnect-a-container-from-a-user-defined-bridge)
first.

```bash
$ docker network rm my-net
```

> **What's really happening?**
>
> When you create or remove a user-defined bridge or connect or disconnect a
> container from a user-defined bridge, Docker uses tools specific to the
> operating system to manage the underlying network infrastructure (such as adding
> or removing bridge devices or configuring `iptables` rules on Linux). These
> details should be considered implementation details. Let Docker manage your
> user-defined networks for you.

## Connect a container to a user-defined bridge

When you create a new container, you can specify one or more `--network` flags.
This example connects an Nginx container to the `my-net` network. It also
publishes port 80 in the container to port 8080 on the Docker host, so external
clients can access that port. Any other container connected to the `my-net`
network has access to all ports on the `my-nginx` container, and vice versa.

```bash
$ docker create --name my-nginx \
  --network my-net \
  --publish 8080:80 \
  nginx:latest
```

To connect a **running** container to an existing user-defined bridge, use the
`docker network connect` command. The following command connects an already-running
`my-nginx` container to an already-existing `my-net` network:

```bash
$ docker network connect my-net my-nginx
```

## Disconnect a container from a user-defined bridge

To disconnect a running container from a user-defined bridge, use the
`docker network disconnect` command. The following command disconnects the
`my-nginx` container from the `my-net` network.

```bash
$ docker network disconnect my-net my-nginx
```

## Use IPv6

If you need IPv6 support for Docker containers, you need to
[enable the option](/config/daemon/ipv6.md) on the Docker daemon and reload its
configuration, before creating any IPv6 networks or assigning containers IPv6
addresses.

When you create your network, you can specify the `--ipv6` flag to enable
IPv6. You can't selectively disable IPv6 support on the default `bridge` network.
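
For example, a minimal sketch of creating an IPv6-enabled user-defined bridge,
assuming the daemon option above is already enabled (the subnet is a
placeholder from the IPv6 documentation range):

```bash
$ docker network create --ipv6 \
    --subnet=2001:db8:1::/64 \
    my-ipv6-net
```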

## Enable forwarding from Docker containers to the outside world

By default, traffic from containers connected to the default bridge network is
**not** forwarded to the outside world. To enable forwarding, you need to change
two settings. These are not Docker commands and they affect the Docker host's
kernel.

1. Configure the Linux kernel to allow IP forwarding.

   ```bash
   $ sysctl net.ipv4.conf.all.forwarding=1
   ```

2. Change the policy for the `iptables` `FORWARD` chain from `DROP` to
   `ACCEPT`.

   ```bash
   $ sudo iptables -P FORWARD ACCEPT
   ```

These settings do not persist across a reboot, so you may need to add them to a
start-up script.

## Use the default bridge network

The default `bridge` network is considered a legacy detail of Docker and is not
recommended for production use. Configuring it is a manual operation, and it has
[technical shortcomings](#differences-between-user-defined-bridges-and-the-default-bridge).

### Connect a container to the default bridge network

If you do not specify a different network using the `--network` flag, your
container is connected to the default `bridge` network. Containers connected
to the default `bridge` network can communicate, but only by IP address, unless
they are linked using the [legacy `--link` flag](/network/links/).

### Configure the default bridge network

To configure the default `bridge` network, you specify options in `daemon.json`.
Here is an example `daemon.json` with several options specified. Only specify
the settings you need to customize.

```json
{
  "bip": "192.168.1.5/24",
  "fixed-cidr": "192.168.1.5/25",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "10.20.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["10.20.1.2","10.20.1.3"]
}
```

Restart Docker for the changes to take effect.

### Use IPv6 with the default bridge network

If you configure Docker for IPv6 support (see [Use IPv6](#use-ipv6)), the
default bridge network is also configured for IPv6 automatically. Unlike
user-defined bridges, you can't selectively disable IPv6 on the default bridge.

## Next steps

- Go through the [standalone networking tutorial](/network/network-tutorial-standalone.md)
- Learn about [networking from the container's point of view](/config/containers/container-networking.md)
- Learn about [overlay networks](/network/overlay.md)
- Learn about [Macvlan networks](/network/macvlan.md)

@@ -0,0 +1,28 @@

---
title: Use host networking
description: All about exposing containers on the Docker host's network
keywords: network, host, standalone
---

If you use the `host` network driver for a container, that container's network
stack is not isolated from the Docker host. For instance, if you run a container
which binds to port 80 and you use `host` networking, the container's
application will be available on port 80 on the host's IP address.
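
For example, a minimal sketch of running Nginx directly on the host's network
stack:

```bash
# With --network host, the container's port 80 is the host's port 80,
# so no -p/--publish flag is needed.
$ docker run --rm -d --network host --name my_nginx nginx
```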

In Docker 17.06 and higher, you can also use a `host` network for a swarm
service, by passing `--network host` to the `docker service create` command.
In this case, control traffic (traffic related to managing the swarm and the
service) is still sent across an overlay network, but the individual swarm
service containers send data using the Docker daemon's host network and ports.
This creates some extra limitations. For instance, if a service container binds
to port 80, only one service container can run on a given swarm node.

If your container or service publishes no ports, host networking has no effect.

## Next steps

- Go through the [host networking tutorial](/network/network-tutorial-host.md)
- Learn about [networking from the container's point of view](/config/containers/container-networking.md)
- Learn about [bridge networks](/network/bridge.md)
- Learn about [overlay networks](/network/overlay.md)
- Learn about [Macvlan networks](/network/macvlan.md)

@@ -0,0 +1,120 @@

---
title: Overview
description: Overview of Docker networks and networking concepts
keywords: networking, bridge, routing, routing mesh, overlay, ports
redirect_from:
- /engine/userguide/networking/
- /engine/userguide/networking/dockernetworks/
- /articles/networking/
---

One of the reasons Docker containers and services are so powerful is that
you can connect them together, or connect them to non-Docker workloads. Docker
containers and services do not even need to be aware that they are deployed on
Docker, or whether their peers are also Docker workloads or not. Whether your
Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to
manage them in a platform-agnostic way.

This topic defines some basic Docker networking concepts and prepares you to
design and deploy your applications to take full advantage of these
capabilities.

Most of this content applies to all Docker installations. However,
[a few advanced features](#docker-ee-networking-features) are only available to
Docker EE customers.

## Scope of this topic

This topic does **not** go into OS-specific details about how Docker networks
work, so you will not find information about how Docker manipulates `iptables`
rules on Linux or how it manipulates routing rules on Windows servers, and you
will not find detailed information about how Docker forms and encapsulates
packets or handles encryption. See
[Docker Reference Architecture: Designing Scalable, Portable Docker Container Networks](https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Designing_Scalable%2C_Portable_Docker_Container_Networks)
for a much greater depth of technical detail.

In addition, this topic does not provide any tutorials for how to create,
manage, and use Docker networks. Each section includes links to relevant
tutorials and command references.

## Network drivers

Docker's networking subsystem is pluggable, using drivers. Several drivers
exist by default, and provide core networking functionality:

- `bridge`: The default network driver. If you don't specify a driver, this is
  the type of network you are creating. **Bridge networks are usually used when
  your applications run in standalone containers that need to communicate.** See
  [bridge networks](bridge.md).

- `host`: For standalone containers, remove network isolation between the
  container and the Docker host, and use the host's networking directly. For
  swarm services, `host` is only available on Docker 17.06 and higher. See
  [use the host network](host.md).

- `overlay`: Overlay networks connect multiple Docker daemons together and
  enable swarm services to communicate with each other. You can also use overlay
  networks to facilitate communication between a swarm service and a standalone
  container, or between two standalone containers on different Docker daemons.
  This strategy removes the need to do OS-level routing between these
  containers. See [overlay networks](overlay.md).

- `macvlan`: Macvlan networks allow you to assign a MAC address to a container,
  making it appear as a physical device on your network. The Docker daemon
  routes traffic to containers by their MAC addresses. Using the `macvlan`
  driver is sometimes the best choice when dealing with legacy applications that
  expect to be directly connected to the physical network, rather than routed
  through the Docker host's network stack. See
  [Macvlan networks](macvlan.md).

- `none`: For this container, disable all networking. Usually used in
  conjunction with a custom network driver. `none` is not available for swarm
  services. See [disable container networking](none.md).

- [Network plugins](/engine/extend/plugins_services/): You can install and use
  third-party network plugins with Docker. These plugins are available from
  [Docker Store](https://store.docker.com/search?category=network&q=&type=plugin)
  or from third-party vendors. See the vendor's documentation for installing and
  using a given network plugin.
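
A quick way to see these drivers in use on your own installation is to list
your networks and inspect one of them (a minimal sketch; the exact networks
listed depend on your setup):

```bash
# List the networks Docker created by default, along with their drivers.
$ docker network ls

# Show the full configuration of the default bridge network.
$ docker network inspect bridge
```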

### Network driver summary

- **User-defined bridge networks** are best when you need multiple containers to
  communicate on the same Docker host.
- **Host networks** are best when the network stack should not be isolated from
  the Docker host, but you want other aspects of the container to be isolated.
- **Overlay networks** are best when you need containers running on different
  Docker hosts to communicate, or when multiple applications work together using
  swarm services.
- **Macvlan networks** are best when you are migrating from a VM setup or
  need your containers to look like physical hosts on your network, each with a
  unique MAC address.
- **Third-party network plugins** allow you to integrate Docker with specialized
  network stacks.

## Docker EE networking features

The following two features are only possible when using Docker EE and managing
your Docker services using Universal Control Plane (UCP):

- The [HTTP routing mesh](/datacenter/ucp/2.2/guides/admin/configure/use-domain-names-to-access-services/)
  allows you to share the same network IP address and port among multiple
  services. UCP routes the traffic to the appropriate service using the
  combination of hostname and port, as requested from the client.

- [Session stickiness](/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/#sticky-sessions)
  allows you to specify information in the HTTP header
  which UCP uses to route subsequent requests to the same service task, for
  applications which require stateful sessions.

## Networking tutorials

Now that you understand the basics about Docker networks, deepen your
understanding using the following tutorials:

- [Standalone networking tutorial](network-tutorial-standalone.md)
- [Host networking tutorial](network-tutorial-host.md)
- [Overlay networking tutorial](network-tutorial-overlay.md)
- [Macvlan networking tutorial](network-tutorial-macvlan.md)

@@ -3,10 +3,26 @@ description: Learn how to connect Docker containers together.
keywords: Examples, Usage, user guide, links, linking, docker, documentation, examples, names, name, container naming, port, map, network port, network
redirect_from:
- /userguide/dockerlinks/
- /engine/userguide/networking/default_network/dockerlinks/
title: Legacy container links
---

>**Warning**:
>The `--link` flag is a legacy feature of Docker. It may eventually
>be removed. Unless you absolutely need to continue using it, we recommend that you use
>user-defined networks to facilitate communication between two containers instead of using
>`--link`. One feature that user-defined networks do not support that you can do
>with `--link` is sharing environment variables between containers. However,
>you can use other mechanisms such as volumes to share environment variables
>between containers in a more controlled way.
>
> See [Differences between user-defined bridges and the default bridge](bridge.md#differences-between-user-defined-bridges-and-the-default-bridge)
> for some alternatives to using `--link`.
{:.warning}

The information in this section explains legacy container links within the
Docker default `bridge` network which is created automatically when you install
Docker.

Before the [Docker networks feature](/engine/userguide/networking/index.md), you could use the
Docker link feature to allow containers to discover each other and securely

@@ -18,16 +34,6 @@ behave differently between default `bridge` network and
This section briefly discusses connecting via a network port and then goes into
detail on container linking in default `bridge` network.

## Connect using network port mapping

Let's say you used this command to run a simple Python Flask application:

@@ -372,4 +378,3 @@ allowing linked communication to continue.
. . .
172.17.0.9 db

@@ -0,0 +1,113 @@

---
title: Use Macvlan networks
description: All about using macvlan to make your containers appear like physical machines on the network
keywords: network, macvlan, standalone
redirect_from:
- /engine/userguide/networking/get-started-macvlan/
---

Some applications, especially legacy applications or applications which monitor
network traffic, expect to be directly connected to the physical network. In
this type of situation, you can use the `macvlan` network driver to assign a MAC
address to each container's virtual network interface, making it appear to be
a physical network interface directly connected to the physical network. In this
case, you need to designate a physical interface on your Docker host to use for
the Macvlan, as well as the subnet and gateway of the Macvlan. You can even
isolate your Macvlan networks using different physical network interfaces.
Keep the following things in mind:

- It is very easy to unintentionally damage your network due to IP address
  exhaustion or to "VLAN spread", which is a situation in which you have an
  inappropriately large number of unique MAC addresses in your network.

- Your networking equipment needs to be able to handle "promiscuous mode",
  where one physical interface can be assigned multiple MAC addresses.

- If your application can work using a bridge (on a single Docker host) or
  overlay (to communicate across multiple Docker hosts), these solutions may be
  better in the long term.

## Create a macvlan network

When you create a Macvlan network, it can either be in bridge mode or 802.1q
trunk bridge mode.

- In bridge mode, Macvlan traffic goes through a physical device on the host.

- In 802.1q trunk bridge mode, traffic goes through an 802.1q sub-interface
  which Docker creates on the fly. This allows you to control routing and
  filtering at a more granular level.

### Bridge mode

To create a Macvlan network which bridges with a given physical network
interface, use `--driver macvlan` with the `docker network create` command. You
also need to specify the `parent`, which is the interface the traffic will
physically go through on the Docker host.

```bash
$ docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  -o parent=eth0 pub_net
```

If you need to exclude IP addresses from being used in the Macvlan network, such
as when a given IP address is already in use, use `--aux-address`:

```bash
$ docker network create -d macvlan \
  --subnet=192.168.32.0/24 \
  --ip-range=192.168.32.128/25 \
  --gateway=192.168.32.254 \
  --aux-address="my-router=192.168.32.129" \
  -o parent=eth0 macnet32
```

### 802.1q trunk bridge mode

If you specify a `parent` interface name with a dot included, such as `eth0.50`,
Docker interprets that as a sub-interface of `eth0` and creates the sub-interface
automatically.

```bash
$ docker network create -d macvlan \
  --subnet=192.168.50.0/24 \
  --gateway=192.168.50.1 \
  -o parent=eth0.50 macvlan50
```

### Use an ipvlan instead of macvlan

In the above example, you are still using an L3 bridge. You can use `ipvlan`
instead, and get an L2 bridge. Specify `-o ipvlan_mode=l2`.

```bash
$ docker network create -d ipvlan \
  --subnet=192.168.210.0/24 \
  --subnet=192.168.212.0/24 \
  --gateway=192.168.210.254 \
  --gateway=192.168.212.254 \
  -o ipvlan_mode=l2 ipvlan210
```

## Use IPv6

If you have [configured the Docker daemon to allow IPv6](/config/daemon/ipv6.md),
you can use dual-stack IPv4/IPv6 Macvlan networks.

```bash
$ docker network create -d macvlan \
  --subnet=192.168.216.0/24 --subnet=192.168.218.0/24 \
  --gateway=192.168.216.1 --gateway=192.168.218.1 \
  --subnet=2001:db8:abc8::/64 --gateway=2001:db8:abc8::10 \
  -o parent=eth0.218 \
  -o macvlan_mode=bridge macvlan216
```

## Next steps

- Go through the [macvlan networking tutorial](/network/network-tutorial-macvlan.md)
- Learn about [networking from the container's point of view](/config/containers/container-networking.md)
- Learn about [bridge networks](/network/bridge.md)
- Learn about [overlay networks](/network/overlay.md)
- Learn about [host networking](/network/host.md)
- Learn about [Macvlan networks](/network/macvlan.md)

@@ -0,0 +1,71 @@

---
title: Networking using the host network
description: Tutorials for networking using the host network, disabling network isolation
keywords: networking, host, standalone
---

This series of tutorials deals with networking standalone containers which bind
directly to the Docker host's network, with no network isolation. For other
networking topics, see the [overview](index.md).

## Goal

The goal of this tutorial is to start an `nginx` container which binds directly
to port 80 on the Docker host. From a networking point of view, this is the
same level of isolation as if the `nginx` process were running directly on the
Docker host and not in a container. However, in all other ways, such as storage,
process namespace, and user namespace, the `nginx` process is isolated from the
host.

## Prerequisites

- This procedure requires port 80 to be available on the Docker host. To make
  Nginx listen on a different port, see the
  [documentation for the `nginx` image](https://hub.docker.com/_/nginx/).

- The `host` networking driver only works on Linux hosts, and is not supported
  on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.

## Procedure

1. Create and start the container as a detached process.

   ```bash
   docker run --rm -itd --network host --name my_nginx nginx
   ```

2. Access Nginx by browsing to
   [http://localhost:80/](http://localhost:80/).

3. Examine your network stack using the following commands:

   - Examine all network interfaces and verify that a new one was not created.

     ```bash
     ip addr show
     ```

   - Verify which process is bound to port 80, using the `netstat` command. You
     need to use `sudo` because the process is owned by the Docker daemon user
     and you otherwise won't be able to see its name or PID.

     ```bash
     sudo netstat -tulpn | grep :80
     ```

4. Stop the container. It is removed automatically when it stops, because it
   was started with the `--rm` flag.

   ```bash
   docker container stop my_nginx
   ```

## Other networking tutorials

Now that you have completed the networking tutorials for standalone containers,
you might want to run through these other networking tutorials:

- [Standalone networking tutorial](network-tutorial-standalone.md)
- [Overlay networking tutorial](network-tutorial-overlay.md)
- [Macvlan networking tutorial](network-tutorial-macvlan.md)

@@ -0,0 +1,224 @@

---
title: Networking using a macvlan network
description: Tutorials for networking using a macvlan bridge network and 802.1q trunk bridge network
keywords: networking, macvlan, 802.1q, standalone
---

This series of tutorials deals with networking standalone containers which
connect to `macvlan` networks. In this type of network, the Docker host accepts
requests for multiple MAC addresses at its IP address, and routes those requests
to the appropriate container. For other networking topics, see the
[overview](index.md).

## Goal

The goal of these tutorials is to set up a bridged `macvlan` network and attach
a container to it, then set up an 802.1q trunked `macvlan` network and attach a
container to it.

## Prerequisites

- Most cloud providers block `macvlan` networking. You may need physical access
  to your networking equipment.

- The `macvlan` networking driver only works on Linux hosts, and is not supported
  on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.

- You need at least version 3.9 of the Linux kernel, and version 4.0 or higher
  is recommended.

- The examples assume your ethernet interface is `eth0`. If your device has a
  different name, use that instead.

## Bridge example

In the simple bridge example, your traffic flows through `eth0` and Docker
routes traffic to your container using its MAC address. To network devices
on your network, your container appears to be physically attached to the network.

1. Create a `macvlan` network called `my-macvlan-net`. Modify the `subnet`, `gateway`,
   and `parent` values to values that make sense in your environment.

   ```bash
   $ docker network create -d macvlan \
     --subnet=172.16.86.0/24 \
     --gateway=172.16.86.1 \
     -o parent=eth0 \
     my-macvlan-net
   ```

   You can use the `docker network ls` and `docker network inspect my-macvlan-net`
   commands to verify that the network exists and is a `macvlan` network.

2. Start an `alpine` container and attach it to the `my-macvlan-net` network. The
   `-dit` flags start the container in the background but allow you to attach
   to it. The `--rm` flag means the container is removed when it is stopped.

   ```bash
   $ docker run --rm -itd \
     --network my-macvlan-net \
     --name my-macvlan-alpine \
     alpine:latest \
     ash
   ```

3. Inspect the `my-macvlan-alpine` container and notice the `MacAddress` key
   within the `Networks` key:

   ```none
   $ docker container inspect my-macvlan-alpine

   ...truncated...
   "Networks": {
     "my-macvlan-net": {
         "IPAMConfig": null,
         "Links": null,
         "Aliases": [
             "bec64291cd4c"
         ],
         "NetworkID": "5e3ec79625d388dbcc03dcf4a6dc4548644eb99d58864cf8eee2252dcfc0cc9f",
         "EndpointID": "8caf93c862b22f379b60515975acf96f7b54b7cf0ba0fb4a33cf18ae9e5c1d89",
         "Gateway": "172.16.86.1",
         "IPAddress": "172.16.86.2",
         "IPPrefixLen": 24,
         "IPv6Gateway": "",
         "GlobalIPv6Address": "",
         "GlobalIPv6PrefixLen": 0,
         "MacAddress": "02:42:ac:10:56:02",
         "DriverOpts": null
     }
   }
   ...truncated
   ```

4. Check out how the container sees its own network interfaces by running a
   couple of `docker exec` commands.

   ```bash
   $ docker exec my-macvlan-alpine ip addr show eth0

   9: eth0@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
       link/ether 02:42:ac:10:56:02 brd ff:ff:ff:ff:ff:ff
       inet 172.16.86.2/24 brd 172.16.86.255 scope global eth0
          valid_lft forever preferred_lft forever
   ```

   ```bash
   $ docker exec my-macvlan-alpine ip route

   default via 172.16.86.1 dev eth0
   172.16.86.0/24 dev eth0 scope link  src 172.16.86.2
   ```

5. Stop the container (Docker removes it because of the `--rm` flag), and remove
   the network.

   ```bash
   $ docker container stop my-macvlan-alpine

   $ docker network rm my-macvlan-net
   ```

## 802.1q trunked bridge example

In the 802.1q trunked bridge example, your traffic flows through a sub-interface
of `eth0` (called `eth0.10`) and Docker routes traffic to your container using
its MAC address. To network devices on your network, your container appears to
be physically attached to the network.

1. Create a `macvlan` network called `my-8021q-macvlan-net`. Modify the
   `subnet`, `gateway`, and `parent` values to values that make sense in your
   environment.

   ```bash
   $ docker network create -d macvlan \
     --subnet=172.16.86.0/24 \
     --gateway=172.16.86.1 \
     -o parent=eth0.10 \
     my-8021q-macvlan-net
   ```

   You can use the `docker network ls` and `docker network inspect my-8021q-macvlan-net`
   commands to verify that the network exists, is a `macvlan` network, and
   has parent `eth0.10`. You can use `ip addr show` on the Docker host to
   verify that the interface `eth0.10` exists and has a separate IP address.

2. Start an `alpine` container and attach it to the `my-8021q-macvlan-net`
   network. The `-dit` flags start the container in the background but allow
   you to attach to it. The `--rm` flag means the container is removed when it
   is stopped.

   ```bash
   $ docker run --rm -itd \
     --network my-8021q-macvlan-net \
     --name my-second-macvlan-alpine \
     alpine:latest \
     ash
   ```

3. Inspect the `my-second-macvlan-alpine` container and notice the `MacAddress`
   key within the `Networks` key:

   ```none
   $ docker container inspect my-second-macvlan-alpine

   ...truncated...
   "Networks": {
     "my-8021q-macvlan-net": {
         "IPAMConfig": null,
         "Links": null,
         "Aliases": [
             "12f5c3c9ba5c"
         ],
         "NetworkID": "c6203997842e654dd5086abb1133b7e6df627784fec063afcbee5893b2bb64db",
         "EndpointID": "aa08d9aa2353c68e8d2ae0bf0e11ed426ea31ed0dd71c868d22ed0dcf9fc8ae6",
         "Gateway": "172.16.86.1",
         "IPAddress": "172.16.86.2",
         "IPPrefixLen": 24,
         "IPv6Gateway": "",
         "GlobalIPv6Address": "",
         "GlobalIPv6PrefixLen": 0,
         "MacAddress": "02:42:ac:10:56:02",
         "DriverOpts": null
     }
   }
   ...truncated
   ```

4. Check out how the container sees its own network interfaces by running a
   couple of `docker exec` commands.

   ```bash
   $ docker exec my-second-macvlan-alpine ip addr show eth0

   11: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
       link/ether 02:42:ac:10:56:02 brd ff:ff:ff:ff:ff:ff
       inet 172.16.86.2/24 brd 172.16.86.255 scope global eth0
          valid_lft forever preferred_lft forever
   ```

   ```bash
   $ docker exec my-second-macvlan-alpine ip route

   default via 172.16.86.1 dev eth0
   172.16.86.0/24 dev eth0 scope link  src 172.16.86.2
   ```

5. Stop the container (Docker removes it because of the `--rm` flag), and remove
   the network.

   ```bash
   $ docker container stop my-second-macvlan-alpine

   $ docker network rm my-8021q-macvlan-net
   ```

## Other networking tutorials

Now that you have completed the networking tutorial for `macvlan` networks,
you might want to run through these other networking tutorials:

- [Standalone networking tutorial](network-tutorial-standalone.md)
- [Overlay networking tutorial](network-tutorial-overlay.md)
- [Host networking tutorial](network-tutorial-host.md)

@@ -0,0 +1,662 @@

---
title: Networking with overlay networks
description: Tutorials for networking with swarm services and standalone containers on multiple Docker daemons
keywords: networking, bridge, routing, ports, swarm, overlay
---

This series of tutorials deals with networking for swarm services.
For networking with standalone containers, see
[Networking with standalone containers](network-tutorial-standalone.md). If you need to
learn more about Docker networking in general, see the [overview](index.md).

This topic includes four different tutorials. You can run each of them on
Linux, Windows, or a Mac, but for the last two, you need a second Docker
host running elsewhere.

- [Use the default overlay network](#use-the-default-overlay-network) demonstrates
  how to use the default overlay network that Docker sets up for you
  automatically when you initialize or join a swarm. This network is not the
  best choice for production systems.

- [Use user-defined overlay networks](#use-a-user-defined-overlay-network) shows
  how to create and use your own custom overlay networks, to connect services.
  This is recommended for services running in production.

- [Use an overlay network for standalone containers](#use-an-overlay-network-for-standalone-containers)
  shows how to communicate between standalone containers on different Docker
  daemons using an overlay network.

- [Communicate between a container and a swarm service](#communicate-between-a-container-and-a-swarm-service)
  sets up communication between a standalone container and a swarm service,
  using an attachable overlay network. This is supported in Docker 17.06 and
  higher.

## Prerequisites

These tutorials require you to have at least a single-node swarm, which means that
you have started Docker and run `docker swarm init` on the host. You can run
the examples on a multi-node swarm as well.

The last example requires Docker 17.06 or higher.

## Use the default overlay network

In this example, you start an `alpine` service and examine the characteristics
of the network from the point of view of the individual service containers.

This tutorial does not go into operating-system-specific details about how
overlay networks are implemented, but focuses on how the overlay functions from
the point of view of a service.

### Prerequisites

This tutorial requires three physical or virtual Docker hosts which can all
communicate with one another, all running new installations of Docker 17.03 or
higher. This tutorial assumes that the three hosts are running on the same
network with no firewall involved.

These hosts will be referred to as `manager`, `worker-1`, and `worker-2`. The
`manager` host will function as both a manager and a worker, which means it can
both run service tasks and manage the swarm. `worker-1` and `worker-2` will
function as workers only.

If you don't have three hosts handy, an easy solution is to set up three
Ubuntu hosts on a cloud provider such as Amazon EC2, all on the same network
with all communications allowed to all hosts on that network (using a mechanism
such as EC2 security groups), and then to follow the
[installation instructions for Docker CE on Ubuntu](/engine/installation/linux/docker-ce/ubuntu.md).

### Walkthrough

#### Create the swarm

At the end of this procedure, all three Docker hosts will be joined to the swarm
and will be connected together using an overlay network called `ingress`.

1. On `manager`, initialize the swarm. If the host only has one network
   interface, the `--advertise-addr` flag is optional.

   ```bash
   $ docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER>
   ```

   Make a note of the text that is printed, as this contains the token that
   you will use to join `worker-1` and `worker-2` to the swarm. It is a good
   idea to store the token in a password manager.

2. On `worker-1`, join the swarm. If the host only has one network interface,
   the `--advertise-addr` flag is optional.

   ```bash
   $ docker swarm join --token <TOKEN> \
     --advertise-addr <IP-ADDRESS-OF-WORKER-1> \
     <IP-ADDRESS-OF-MANAGER>:2377
   ```

3. On `worker-2`, join the swarm. If the host only has one network interface,
   the `--advertise-addr` flag is optional.

   ```bash
   $ docker swarm join --token <TOKEN> \
     --advertise-addr <IP-ADDRESS-OF-WORKER-2> \
     <IP-ADDRESS-OF-MANAGER>:2377
   ```

4. On `manager`, list all the nodes. This command can only be run from a
   manager.

   ```bash
   $ docker node ls

   ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS
   d68ace5iraw6whp7llvgjpu48 *   ip-172-31-34-146    Ready     Active         Leader
   nvp5rwavvb8lhdggo8fcf7plg     ip-172-31-35-151    Ready     Active
   ouvx2l7qfcxisoyms8mtkgahw     ip-172-31-36-89     Ready     Active
   ```

   You can also use the `--filter` flag to filter by role:

   ```bash
   $ docker node ls --filter role=manager

   ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS
   d68ace5iraw6whp7llvgjpu48 *   ip-172-31-34-146    Ready     Active         Leader

   $ docker node ls --filter role=worker

   ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS
   nvp5rwavvb8lhdggo8fcf7plg     ip-172-31-35-151    Ready     Active
   ouvx2l7qfcxisoyms8mtkgahw     ip-172-31-36-89     Ready     Active
   ```

5. List the Docker networks on `manager`, `worker-1`, and `worker-2` and notice
   that each of them now has an overlay network called `ingress` and a bridge
   network called `docker_gwbridge`. Only the listing for `manager` is shown
   here:

   ```bash
   $ docker network ls

   NETWORK ID          NAME                DRIVER              SCOPE
   495c570066be        bridge              bridge              local
   961c6cae9945        docker_gwbridge     bridge              local
   ff35ceda3643        host                host                local
   trtnl4tqnc3n        ingress             overlay             swarm
   c8357deec9cb        none                null                local
   ```

The `docker_gwbridge` connects the `ingress` network to the Docker host's
network interface so that traffic can flow to and from swarm managers and
workers. If you create swarm services and do not specify a network, they are
connected to the `ingress` network. It is recommended that you use separate
overlay networks for each application or group of applications which will work
together. In the next procedure, you will create two overlay networks and
connect a service to each of them.
#### Create the services
|
||||||
|
|
||||||
|
1. On `manager`, create a new overlay network called `nginx-net`:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
$ docker network create -d overlay nginx-net
|
||||||
|
```
|
||||||
|
|
||||||
|
You don't need to create the overlay network on the other nodes, beacause it
|
||||||
|
will be automatically created when one of those nodes starts running a
|
||||||
|
service task which requires it.
|
||||||
|
|
||||||
|
2. On `manager`, create a 5-replica Nginx service connected to `nginx-net`. The
|
||||||
|
service will publish port 80 to the outside world. All of the service
|
||||||
|
task containers can communicate with each other without opening any ports.
|
||||||
|
|
||||||
|
> **Note**: Services can only be created on a manager.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
$ docker service create \
|
||||||
|
--name my-nginx \
|
||||||
|
--publish target=80,published=80 \
|
||||||
|
--replicas=5 \
|
||||||
|
--network nginx-net \
|
||||||
|
nginx
|
||||||
|
```
|
||||||
|
|
||||||
|
The default publish mode of `ingress`, which is used when you do not
|
||||||
|
specify a `mode` for the `--publish` flag, means that if you browse to
|
||||||
|
port 80 on `manager`, `worker-1`, or `worker-2`, you will be connected to
|
||||||
|
port 80 on one of the 5 service tasks, even if no tasks are currently
|
||||||
|
running on the node you browse to. If you want to publish the port using
|
||||||
|
`host` mode, you can add `mode=host` to the `--publish` output. However,
|
||||||
|
you should also use `--global` instead of `--replicas=5` in this case,
|
||||||
|
since only one service task can bind a given port on a given node.
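
    For illustration, here is a minimal sketch of that `host`-mode variant.
    The service name `my-nginx-host` is ours, not part of the original
    walkthrough:

    ```bash
    # Publish in host mode; one task per node binds port 80 directly.
    $ docker service create \
      --name my-nginx-host \
      --publish mode=host,target=80,published=80 \
      --mode global \
      --network nginx-net \
      nginx
    ```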

3.  Run `docker service ls` to monitor the progress of service bring-up, which
    may take a few seconds.

4.  Inspect the `nginx-net` network on `manager`, `worker-1`, and `worker-2`.
    Remember that you did not need to create it manually on `worker-1` and
    `worker-2` because Docker created it for you. The output will be long, but
    notice the `Containers` and `Peers` sections. `Containers` lists all
    service tasks (or standalone containers) connected to the overlay network
    from that host.

5.  From `manager`, inspect the service using `docker service inspect my-nginx`
    and notice the information about the ports and endpoints used by the
    service.
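
    If you only want the endpoint's port information, a Go template can narrow
    the output. This one-liner is a sketch, not part of the original
    walkthrough:

    ```bash
    # Print just the published ports of the service as JSON.
    $ docker service inspect --format='{{json .Endpoint.Ports}}' my-nginx
    ```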

6.  Create a new network `nginx-net-2`, then update the service to use this
    network instead of `nginx-net`:

    ```bash
    $ docker network create -d overlay nginx-net-2
    ```

    ```bash
    $ docker service update \
      --network-add nginx-net-2 \
      --network-rm nginx-net \
      my-nginx
    ```

7.  Run `docker service ls` to verify that the service has been updated and all
    tasks have been redeployed. Run `docker network inspect nginx-net` to verify
    that no containers are connected to it. Run the same command for
    `nginx-net-2` and notice that all the service task containers are connected
    to it.

    > **Note**: Even though overlay networks are automatically created on swarm
    > worker nodes as needed, they are not automatically removed.
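
    If a worker is left with an overlay network it no longer needs, you can
    remove it there by hand. The following sketch assumes you are logged in to
    that worker:

    ```bash
    # List overlay networks on this node, then remove the stale one.
    $ docker network ls --filter driver=overlay
    $ docker network rm nginx-net
    ```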

8.  Clean up the service and the networks. From `manager`, run the following
    commands. The manager will direct the workers to remove the networks
    automatically.

    ```bash
    $ docker service rm my-nginx
    $ docker network rm nginx-net nginx-net-2
    ```

## Use a user-defined overlay network

### Prerequisites

This tutorial assumes the swarm is already set up and you are on a manager.

### Walkthrough

1.  Create the user-defined overlay network.

    ```bash
    $ docker network create -d overlay my-overlay
    ```

2.  Start a service using the overlay network and publishing port 80 to port
    8080 on the Docker host.

    ```bash
    $ docker service create \
      --name my-nginx \
      --network my-overlay \
      --replicas 1 \
      --publish published=8080,target=80 \
      nginx:latest
    ```

3.  Run `docker network inspect my-overlay` and verify that the `my-nginx`
    service task is connected to it, by looking at the `Containers` section.
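
    As an optional shortcut, a Go template can pull out just the names of the
    connected containers (a sketch):

    ```bash
    # Print the name of each container attached to my-overlay.
    $ docker network inspect \
      --format '{{range .Containers}}{{.Name}} {{end}}' \
      my-overlay
    ```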

4.  Remove the service and the network.

    ```bash
    $ docker service rm my-nginx

    $ docker network rm my-overlay
    ```

## Use an overlay network for standalone containers

This example does the following:

- initializes a swarm on `host1`
- joins `host2` to the swarm
- creates an attachable overlay network
- creates an `alpine` container on each host, attached to the overlay network
- proves that the two standalone containers can communicate with each other
  across hosts, using automatic service discovery over the overlay network.

### Prerequisites

For this test, you need two different Docker hosts, which can communicate with
each other. Each host needs to be running Docker 17.06 or higher. The following
ports must be open between the two Docker hosts:

- TCP port 2377
- TCP and UDP port 7946
- UDP port 4789

One easy way to set this up is to have two VMs (either local or on a cloud
provider like AWS), each with Docker installed and running. If you're using AWS
or a similar cloud computing platform, the easiest configuration is to use a
security group which opens all incoming ports between the two hosts and the SSH
port from your client's IP address.

This example will refer to the hosts as `host1` and `host2`, and the command
prompts will be labelled accordingly.

The example uses Linux hosts, but the same commands work on Windows.

### Walkthrough

1.  Set up the swarm.

    1.  On `host1`, run `docker swarm init`, specifying the IP address for the
        interface which will communicate with the other host (for instance, the
        private IP address on AWS).

        ```bash
        (host1) $ docker swarm init --advertise-addr 192.0.2.1

        Swarm initialized: current node (l9ozqg3m6gysdnemmhoychk9p) is now a manager.

        To add a worker to this swarm, run the following command:

            docker swarm join \
            --token SWMTKN-1-3mtj3k6tkuts4cpecpgjdvgj1u5jre5zwgiapox0tcjs1trqim-bfwb0ve6kf42go1rznrn0lycx \
            192.0.2.1:2377

        To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
        ```

        The swarm is initialized and `host1` runs both manager and worker roles.

    2.  Copy the `docker swarm join` command. Open a new terminal, connect to
        `host2`, and execute the command. Add the `--advertise-addr` flag,
        specifying the IP address for the interface that will communicate with
        the other host (for instance, the private IP address on AWS). The
        last argument is the IP address of `host1`.

        ```bash
        (host2) $ docker swarm join \
          --token SWMTKN-1-3mtj3k6tkuts4cpecpgjdvgj1u5jre5zwgiapox0tcjs1trqim-bfwb0ve6kf42go1rznrn0lycx \
          --advertise-addr 192.0.2.2:2377 \
          192.0.2.1:2377
        ```

        If the command succeeds, the following message is shown:

        ```none
        This node joined a swarm as a worker.
        ```

        Otherwise, the `docker swarm join` command will time out. In this case,
        run `docker swarm leave --force` on `host2`, verify your network and
        firewall settings, and try again.

2.  Create an attachable overlay network called `test-net` on `host1`.

    ```bash
    $ docker network create --driver=overlay --attachable test-net
    ```

    You don't need to manually create the overlay on `host2` because it will
    be created when a container or service tries to connect to it from `host2`.

3.  On `host1`, start a container that connects to `test-net`:

    ```bash
    (host1) $ docker run -dit \
        --name alpine1 \
        --network test-net \
        alpine
    ```

4.  On `host2`, start a container that connects to `test-net`:

    ```bash
    (host2) $ docker run -dit \
        --name alpine2 \
        --network test-net \
        alpine
    ```

    > **Note**: There is nothing to prevent you from using the same container
    > name on multiple hosts, but automatic service discovery will not work if
    > you do, and you will need to refer to the containers by IP address.

    Verify that `test-net` was created on `host2`:

    ```bash
    (host2) $ docker network ls

    NETWORK ID          NAME                DRIVER              SCOPE
    6e327b25443d        bridge              bridge              local
    10eda0b42471        docker_gwbridge     bridge              local
    1b16b7e2a72c        host                host                local
    lgsov6d3c6hh        ingress             overlay             swarm
    6af747d9ae1e        none                null                local
    uw9etrdymism        test-net            overlay             swarm
    ```

5.  Remember that you created `alpine1` from `host1` and `alpine2` from `host2`.
    Now, attach to `alpine2` on `host2`:

    ```bash
    (host2) $ docker container attach alpine2

    #
    ```

    Within the attached session, try pinging `alpine1` from `alpine2`:

    ```bash
    # ping -c 2 alpine1

    PING alpine1 (10.0.0.2): 56 data bytes
    64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.523 ms
    64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.547 ms

    --- alpine1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.523/0.535/0.547 ms
    ```

    This proves that the two containers can communicate with each other using
    the overlay network which is connecting `host1` and `host2`, and that
    automatic service discovery resolved the name `alpine1` across hosts.

    Detach from `alpine2` using the `CTRL` + `P` `CTRL` + `Q` sequence.

6.  Stop the containers and remove `test-net` from each host. Because the Docker
    daemons are operating independently and these are standalone containers, you
    need to run the commands on the individual hosts.

    ```bash
    (host1) $ docker container stop alpine1
            $ docker container rm alpine1
            $ docker network rm test-net
    ```

    ```bash
    (host2) $ docker container stop alpine2
            $ docker container rm alpine2
            $ docker network rm test-net
    ```

## Other networking tutorials

Now that you have completed the networking tutorials for overlay networks,
you might want to run through these other networking tutorials:

- [Host networking tutorial](network-tutorial-host.md)
- [Standalone networking tutorial](network-tutorial-standalone.md)
- [Macvlan networking tutorial](network-tutorial-macvlan.md)

@ -0,0 +1,621 @@

---
title: Networking with standalone containers
description: Tutorials for networking with standalone containers
keywords: networking, bridge, routing, ports, overlay
---

This series of tutorials deals with networking for standalone Docker containers.
For networking with swarm services, see
[Networking with swarm services](network-tutorial-overlay.md). If you need to
learn more about Docker networking in general, see the [overview](index.md).

This topic includes two different tutorials. You can run both of them on
Linux, Windows, or a Mac, using a single Docker host.

- [Use the default bridge network](#use-the-default-bridge-network) demonstrates
  how to use the default `bridge` network that Docker sets up for you
  automatically. This network is not the best choice for production systems.

- [Use user-defined bridge networks](#use-user-defined-bridge-networks) shows
  how to create and use your own custom bridge networks, to connect containers
  running on the same Docker host. This is recommended for standalone containers
  running in production.

Although [overlay networks](overlay.md) are generally used for swarm services,
Docker 17.06 and higher allow you to use an overlay network for standalone
containers. That's covered as part of the
[tutorial on using overlay networks](network-tutorial-overlay.md#use-an-overlay-network-for-standalone-containers).

## Use the default bridge network

In this example, you start two different `alpine` containers on the same Docker
host and do some tests to understand how they communicate with each other. You
need to have Docker installed and running.

1.  Open a terminal window. List current networks before you do anything else.
    Here's what you should see if you've never added a network or initialized a
    swarm on this Docker daemon. You may see different networks, but you should
    at least see these (the network IDs will be different):

    ```bash
    $ docker network ls

    NETWORK ID          NAME                DRIVER              SCOPE
    17e324f45964        bridge              bridge              local
    6ed54d316334        host                host                local
    7092879f2cc8        none                null                local
    ```

    The default `bridge` network is listed, along with `host` and `none`. The
    latter two are not fully-fledged networks, but are used to start a container
    connected directly to the Docker daemon host's networking stack, or to start
    a container with no network devices. **This tutorial will connect two
    containers to the `bridge` network.**

2.  Start two `alpine` containers running `ash`, which is Alpine's default shell
    rather than `bash`. The `-dit` flags mean to start the container detached
    (in the background), interactive (with the ability to type into it), and
    with a TTY (so you can see the input and output). Since you are starting it
    detached, you won't be connected to the container right away. Instead, the
    container's ID will be printed. Because you have not specified any
    `--network` flags, the containers connect to the default `bridge` network.

    ```bash
    $ docker run -dit --name alpine1 alpine ash

    $ docker run -dit --name alpine2 alpine ash
    ```

    Check that both containers are actually started:

    ```bash
    $ docker container ls

    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    602dbf1edc81        alpine              "ash"               4 seconds ago       Up 3 seconds                            alpine2
    da33b7aa74b0        alpine              "ash"               17 seconds ago      Up 16 seconds                           alpine1
    ```

3.  Inspect the `bridge` network to see what containers are connected to it.

    ```bash
    $ docker network inspect bridge

    [
        {
            "Name": "bridge",
            "Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10",
            "Created": "2017-06-22T20:27:43.826654485Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "172.17.0.0/16",
                        "Gateway": "172.17.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Containers": {
                "602dbf1edc81813304b6cf0a647e65333dc6fe6ee6ed572dc0f686a3307c6a2c": {
                    "Name": "alpine2",
                    "EndpointID": "03b6aafb7ca4d7e531e292901b43719c0e34cc7eef565b38a6bf84acf50f38cd",
                    "MacAddress": "02:42:ac:11:00:03",
                    "IPv4Address": "172.17.0.3/16",
                    "IPv6Address": ""
                },
                "da33b7aa74b0bf3bda3ebd502d404320ca112a268aafe05b4851d1e3312ed168": {
                    "Name": "alpine1",
                    "EndpointID": "46c044a645d6afc42ddd7857d19e9dcfb89ad790afb5c239a35ac0af5e8a5bc5",
                    "MacAddress": "02:42:ac:11:00:02",
                    "IPv4Address": "172.17.0.2/16",
                    "IPv6Address": ""
                }
            },
            "Options": {
                "com.docker.network.bridge.default_bridge": "true",
                "com.docker.network.bridge.enable_icc": "true",
                "com.docker.network.bridge.enable_ip_masquerade": "true",
                "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
                "com.docker.network.bridge.name": "docker0",
                "com.docker.network.driver.mtu": "1500"
            },
            "Labels": {}
        }
    ]
    ```

    Near the top, information about the `bridge` network is listed, including
    the IP address of the gateway between the Docker host and the `bridge`
    network (`172.17.0.1`). Under the `Containers` key, each connected container
    is listed, along with information about its IP address (`172.17.0.2` for
    `alpine1` and `172.17.0.3` for `alpine2`).

4.  The containers are running in the background. Use the `docker attach`
    command to connect to `alpine1`.

    ```bash
    $ docker attach alpine1

    / #
    ```

    The prompt changes to `#` to indicate that you are the `root` user within
    the container. Use the `ip addr show` command to show the network interfaces
    for `alpine1` as they look from within the container:

    ```bash
    # ip addr show

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
        link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.2/16 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:acff:fe11:2/64 scope link
           valid_lft forever preferred_lft forever
    ```

    The first interface is the loopback device. Ignore it for now. Notice that
    the second interface has the IP address `172.17.0.2`, which is the same
    address shown for `alpine1` in the previous step.

5.  From within `alpine1`, make sure you can connect to the internet by
    pinging `google.com`. The `-c 2` flag limits the command to two `ping`
    attempts.

    ```bash
    # ping -c 2 google.com

    PING google.com (172.217.3.174): 56 data bytes
    64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.841 ms
    64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.897 ms

    --- google.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 9.841/9.869/9.897 ms
    ```

6.  Now try to ping the second container. First, ping it by its IP address,
    `172.17.0.3`:

    ```bash
    # ping -c 2 172.17.0.3

    PING 172.17.0.3 (172.17.0.3): 56 data bytes
    64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.086 ms
    64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.094 ms

    --- 172.17.0.3 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.086/0.090/0.094 ms
    ```

    This succeeds. Next, try pinging the `alpine2` container by container
    name. This will fail.

    ```bash
    # ping -c 2 alpine2

    ping: bad address 'alpine2'
    ```

7.  Detach from `alpine1` without stopping it by using the detach sequence,
    `CTRL` + `p` `CTRL` + `q` (hold down `CTRL` and type `p` followed by `q`).
    If you wish, attach to `alpine2` and repeat steps 4, 5, and 6 there,
    substituting `alpine1` for `alpine2`.

8.  Stop and remove both containers.

    ```bash
    $ docker container stop alpine1 alpine2
    $ docker container rm alpine1 alpine2
    ```

Remember, the default `bridge` network is not recommended for production. To
learn about user-defined bridge networks, continue to the
[next tutorial](#use-user-defined-bridge-networks).

## Use user-defined bridge networks

In this example, we again start two `alpine` containers, but attach them to a
user-defined network called `alpine-net`, which we create first. These
containers are not connected to the default `bridge` network at all. We then
start a third `alpine` container which is connected to the `bridge` network but
not connected to `alpine-net`, and a fourth `alpine` container which is
connected to both networks.

1.  Create the `alpine-net` network. You do not need the `--driver bridge` flag
    since it's the default, but this example shows how to specify it.

    ```bash
    $ docker network create --driver bridge alpine-net
    ```

2.  List Docker's networks:

    ```bash
    $ docker network ls

    NETWORK ID          NAME                DRIVER              SCOPE
    e9261a8c9a19        alpine-net          bridge              local
    17e324f45964        bridge              bridge              local
    6ed54d316334        host                host                local
    7092879f2cc8        none                null                local
    ```

    Inspect the `alpine-net` network. This shows you its IP address and the fact
    that no containers are connected to it:

    ```bash
    $ docker network inspect alpine-net

    [
        {
            "Name": "alpine-net",
            "Id": "e9261a8c9a19eabf2bf1488bf5f208b99b1608f330cff585c273d39481c9b0ec",
            "Created": "2017-09-25T21:38:12.620046142Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "172.18.0.0/16",
                        "Gateway": "172.18.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Containers": {},
            "Options": {},
            "Labels": {}
        }
    ]
    ```

    Notice that this network's gateway is `172.18.0.1`, as opposed to the
    default bridge network, whose gateway is `172.17.0.1`. The exact IP address
    may be different on your system.
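
    If you want to read the subnet without scanning the full JSON, a Go
    template works here too; this shortcut is a sketch, not part of the
    original walkthrough:

    ```bash
    # Print only the subnet of the first IPAM config entry.
    $ docker network inspect \
      -f '{{(index .IPAM.Config 0).Subnet}}' \
      alpine-net
    ```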

3.  Create your four containers. Notice the `--network` flags. You can only
    connect to one network during the `docker run` command, so you need to use
    `docker network connect` afterward to connect `alpine4` to the `bridge`
    network as well.

    ```bash
    $ docker run -dit --name alpine1 --network alpine-net alpine ash

    $ docker run -dit --name alpine2 --network alpine-net alpine ash

    $ docker run -dit --name alpine3 alpine ash

    $ docker run -dit --name alpine4 --network alpine-net alpine ash

    $ docker network connect bridge alpine4
    ```

    Verify that all containers are running:

    ```bash
    $ docker container ls

    CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
    156849ccd902        alpine              "ash"               41 seconds ago       Up 41 seconds                           alpine4
    fa1340b8d83e        alpine              "ash"               51 seconds ago       Up 51 seconds                           alpine3
    a535d969081e        alpine              "ash"               About a minute ago   Up About a minute                       alpine2
    0a02c449a6e9        alpine              "ash"               About a minute ago   Up About a minute                       alpine1
    ```

4.  Inspect the `bridge` network and the `alpine-net` network again:

    ```bash
    $ docker network inspect bridge

    [
        {
            "Name": "bridge",
            "Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10",
            "Created": "2017-06-22T20:27:43.826654485Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "172.17.0.0/16",
                        "Gateway": "172.17.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Containers": {
                "156849ccd902b812b7d17f05d2d81532ccebe5bf788c9a79de63e12bb92fc621": {
                    "Name": "alpine4",
                    "EndpointID": "7277c5183f0da5148b33d05f329371fce7befc5282d2619cfb23690b2adf467d",
                    "MacAddress": "02:42:ac:11:00:03",
                    "IPv4Address": "172.17.0.3/16",
                    "IPv6Address": ""
                },
                "fa1340b8d83eef5497166951184ad3691eb48678a3664608ec448a687b047c53": {
                    "Name": "alpine3",
                    "EndpointID": "5ae767367dcbebc712c02d49556285e888819d4da6b69d88cd1b0d52a83af95f",
                    "MacAddress": "02:42:ac:11:00:02",
                    "IPv4Address": "172.17.0.2/16",
                    "IPv6Address": ""
                }
            },
            "Options": {
                "com.docker.network.bridge.default_bridge": "true",
                "com.docker.network.bridge.enable_icc": "true",
                "com.docker.network.bridge.enable_ip_masquerade": "true",
                "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
                "com.docker.network.bridge.name": "docker0",
                "com.docker.network.driver.mtu": "1500"
            },
            "Labels": {}
        }
    ]
    ```

    Containers `alpine3` and `alpine4` are connected to the `bridge` network.

    ```bash
    $ docker network inspect alpine-net

    [
        {
            "Name": "alpine-net",
            "Id": "e9261a8c9a19eabf2bf1488bf5f208b99b1608f330cff585c273d39481c9b0ec",
            "Created": "2017-09-25T21:38:12.620046142Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "172.18.0.0/16",
                        "Gateway": "172.18.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Containers": {
                "0a02c449a6e9a15113c51ab2681d72749548fb9f78fae4493e3b2e4e74199c4a": {
                    "Name": "alpine1",
                    "EndpointID": "c83621678eff9628f4e2d52baf82c49f974c36c05cba152db4c131e8e7a64673",
                    "MacAddress": "02:42:ac:12:00:02",
                    "IPv4Address": "172.18.0.2/16",
                    "IPv6Address": ""
                },
                "156849ccd902b812b7d17f05d2d81532ccebe5bf788c9a79de63e12bb92fc621": {
                    "Name": "alpine4",
                    "EndpointID": "058bc6a5e9272b532ef9a6ea6d7f3db4c37527ae2625d1cd1421580fd0731954",
                    "MacAddress": "02:42:ac:12:00:04",
                    "IPv4Address": "172.18.0.4/16",
                    "IPv6Address": ""
                },
                "a535d969081e003a149be8917631215616d9401edcb4d35d53f00e75ea1db653": {
                    "Name": "alpine2",
                    "EndpointID": "198f3141ccf2e7dba67bce358d7b71a07c5488e3867d8b7ad55a4c695ebb8740",
                    "MacAddress": "02:42:ac:12:00:03",
                    "IPv4Address": "172.18.0.3/16",
                    "IPv6Address": ""
                }
            },
            "Options": {},
            "Labels": {}
        }
    ]
    ```

    Containers `alpine1`, `alpine2`, and `alpine4` are connected to the
    `alpine-net` network.

5.  On user-defined networks like `alpine-net`, containers can not only
    communicate by IP address, but can also resolve a container name to an IP
    address. This capability is called **automatic service discovery**. Let's
    connect to `alpine1` and test this out. `alpine1` should be able to resolve
    `alpine2` and `alpine4` (and `alpine1`, itself) to IP addresses.

    ```bash
    $ docker container attach alpine1

    # ping -c 2 alpine2

    PING alpine2 (172.18.0.3): 56 data bytes
    64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.085 ms
    64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.090 ms

    --- alpine2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.085/0.087/0.090 ms

    # ping -c 2 alpine4

    PING alpine4 (172.18.0.4): 56 data bytes
    64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.076 ms
    64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.091 ms

    --- alpine4 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.076/0.083/0.091 ms

    # ping -c 2 alpine1

    PING alpine1 (172.18.0.2): 56 data bytes
    64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.026 ms
    64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.054 ms

    --- alpine1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.026/0.040/0.054 ms
    ```

6.  From `alpine1`, you should not be able to connect to `alpine3` at all, since
    it is not on the `alpine-net` network.

    ```bash
    # ping -c 2 alpine3

    ping: bad address 'alpine3'
    ```

    Not only that, but you can't connect to `alpine3` from `alpine1` by its IP
    address either. Look back at the `docker network inspect` output for the
    `bridge` network and find `alpine3`'s IP address: `172.17.0.2`. Try to ping
    it.

    ```bash
    # ping -c 2 172.17.0.2

    PING 172.17.0.2 (172.17.0.2): 56 data bytes

    --- 172.17.0.2 ping statistics ---
    2 packets transmitted, 0 packets received, 100% packet loss
    ```

    Detach from `alpine1` using the detach sequence,
    `CTRL` + `p` `CTRL` + `q` (hold down `CTRL` and type `p` followed by `q`).

7.  Remember that `alpine4` is connected to both the default `bridge` network
    and `alpine-net`. It should be able to reach all of the other containers.
    However, you will need to address `alpine3` by its IP address. Attach to
    `alpine4` and run the tests.

    ```bash
    $ docker container attach alpine4

    # ping -c 2 alpine1

    PING alpine1 (172.18.0.2): 56 data bytes
    64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.074 ms
    64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.082 ms

    --- alpine1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.074/0.078/0.082 ms

    # ping -c 2 alpine2

    PING alpine2 (172.18.0.3): 56 data bytes
    64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.075 ms
    64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms

    --- alpine2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.075/0.077/0.080 ms

    # ping -c 2 alpine3
    ping: bad address 'alpine3'

    # ping -c 2 172.17.0.2

    PING 172.17.0.2 (172.17.0.2): 56 data bytes
    64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.089 ms
    64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms

    --- 172.17.0.2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.075/0.082/0.089 ms

    # ping -c 2 alpine4

    PING alpine4 (172.18.0.4): 56 data bytes
    64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.033 ms
    64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.064 ms

    --- alpine4 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.033/0.048/0.064 ms
    ```

8.  As a final test, make sure your containers can all connect to the internet
    by pinging `google.com`. You are already attached to `alpine4` so start by
    trying from there. Next, detach from `alpine4` and connect to `alpine3`
    (which is only attached to the `bridge` network) and try again. Finally,
    connect to `alpine1` (which is only connected to the `alpine-net` network)
    and try again.

    ```bash
    # ping -c 2 google.com

    PING google.com (172.217.3.174): 56 data bytes
    64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.778 ms
    64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.634 ms

    --- google.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 9.634/9.706/9.778 ms

    CTRL+p CTRL+q

    $ docker container attach alpine3

    # ping -c 2 google.com

    PING google.com (172.217.3.174): 56 data bytes
    64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.706 ms
    64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.851 ms

    --- google.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 9.706/9.778/9.851 ms

    CTRL+p CTRL+q

    $ docker container attach alpine1

    # ping -c 2 google.com

    PING google.com (172.217.3.174): 56 data bytes
    64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.606 ms
    64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.603 ms

    --- google.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 9.603/9.604/9.606 ms

    CTRL+p CTRL+q
    ```

9.  Stop and remove all containers and the `alpine-net` network.

    ```bash
    $ docker container stop alpine1 alpine2 alpine3 alpine4

    $ docker container rm alpine1 alpine2 alpine3 alpine4

    $ docker network rm alpine-net
    ```

## Other networking tutorials

Now that you have completed the networking tutorials for standalone containers,
you might want to run through these other networking tutorials:

- [Host networking tutorial](network-tutorial-host.md)
- [Overlay networking tutorial](network-tutorial-overlay.md)
- [Macvlan networking tutorial](network-tutorial-macvlan.md)

@ -0,0 +1,54 @@

---
title: Disable networking for a container
description: How to disable networking by using the none driver
keywords: network, none, standalone
---

If you want to completely disable the networking stack on a container, you can
use the `--network none` flag when starting the container. Within the container,
only the loopback device is created. The following example illustrates this.

1.  Create the container.

    ```bash
    $ docker run --rm -dit \
      --network none \
      --name no-net-alpine \
      alpine:latest \
      ash
    ```

2.  Check the container's network stack by executing some common networking
    commands within the container. Notice that no `eth0` was created.

    ```bash
    $ docker exec no-net-alpine ip link show

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1
        link/ipip 0.0.0.0 brd 0.0.0.0
    3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN qlen 1
        link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
    ```

    ```bash
    $ docker exec no-net-alpine ip route
    ```

    The second command returns no output because there is no routing table.
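
    For comparison, here is a sketch of the same check on a container that
    uses the default `bridge` network; the exact addresses vary by host:

    ```bash
    # A routing table exists when networking is enabled.
    $ docker run --rm alpine ip route

    default via 172.17.0.1 dev eth0
    172.17.0.0/16 dev eth0 scope link  src 172.17.0.2
    ```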

3.  Stop the container. It is removed automatically because it was created with
    the `--rm` flag.

    ```bash
    $ docker container stop no-net-alpine
    ```

## Next steps

- Go through the [host networking tutorial](/network/network-tutorial-host.md)
- Learn about [networking from the container's point of view](/config/containers/container-networking.md)
- Learn about [bridge networks](/network/bridge.md)
- Learn about [overlay networks](/network/overlay.md)
- Learn about [Macvlan networks](/network/macvlan.md)

@ -3,7 +3,8 @@ description: Use overlay for multi-host networking

keywords: Examples, Usage, network, docker, documentation, user guide, multihost, cluster
title: Multi-host networking with standalone swarms
redirect_from:
- /engine/userguide/networking/get-started-overlay/
- /engine/userguide/networking/overlay-standalone-swarm/
---

## Standalone swarm only!

@ -407,15 +408,13 @@ to have external connectivity outside of their cluster.

the `my-net` overlay network, while the `eth1` interface represents the
container interface that is connected to the `docker_gwbridge` network.

## Use Docker Compose with swarm classic

Refer to the Networking feature introduced in
[Compose V2 format](/compose/networking/)
and execute the multi-host networking scenario in the swarm cluster used above.

## Next steps

- [Networking overview](/network/index.md)
- [Overlay networks](/network/overlay.md)

@ -0,0 +1,289 @@

---
title: Use overlay networks
description: All about using overlay networks
keywords: network, overlay, user-defined, swarm, service
redirect_from:
- /engine/swarm/networking/
- /engine/userguide/networking/overlay-security-model/
---

The `overlay` network driver creates a distributed network among multiple
Docker daemon hosts. This network sits on top of (overlays) the host-specific
networks and allows containers connected to it (including swarm service
containers) to communicate securely. Docker transparently handles routing of
each packet to and from the correct Docker daemon host and the correct
destination container.

When you initialize a swarm or join a Docker host to an existing swarm, two
new networks are created on that Docker host:

- an overlay network called `ingress`, which handles control and data traffic
  related to swarm services. When you create a swarm service and do not
  connect it to a user-defined overlay network, it connects to the `ingress`
  network by default.
- a bridge network called `docker_gwbridge`, which connects the individual
  Docker daemon to the other daemons participating in the swarm.

You can create user-defined `overlay` networks using `docker network create`,
in the same way that you can create user-defined `bridge` networks. Services
or containers can be connected to more than one network at a time. Services or
containers can only communicate across networks they are each connected to.

Although you can connect both swarm services and standalone containers to an
overlay network, the default behaviors and configuration concerns are different.
For that reason, the rest of this topic is divided into operations that apply to
all overlay networks, those that apply to swarm service networks, and those that
apply to overlay networks used by standalone containers.

## Operations for all overlay networks

### Create an overlay network

> **Prerequisites**:
>
> - Firewall rules for Docker daemons using overlay networks.
>
>   You need the following ports open to traffic to and from each Docker host
>   participating on an overlay network:
>
>   - TCP port 2377 for cluster management communications
>   - TCP and UDP port 7946 for communication among nodes
>   - UDP port 4789 for overlay network traffic
>
> - Before you can create an overlay network, you need to either initialize your
>   Docker daemon as a swarm manager using `docker swarm init` or join it to an
>   existing swarm using `docker swarm join`. Either of these creates the default
>   `ingress` overlay network which is used by swarm services by default. You need
>   to do this even if you never plan to use swarm services. Afterward, you can
>   create additional user-defined overlay networks.

To create an overlay network for use with swarm services, use a command like
the following:

```bash
$ docker network create -d overlay my-overlay
```

To create an overlay network which can be used by swarm services **or**
standalone containers to communicate with other standalone containers running on
other Docker daemons, add the `--attachable` flag:

```bash
$ docker network create -d overlay --attachable my-attachable-overlay
```

You can specify the IP address range, subnet, gateway, and other options. See
`docker network create --help` for details.
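
For example, the following sketch picks an explicit subnet and gateway; the
values and the network name are illustrative:

```bash
# Create an overlay network with a fixed subnet and gateway.
$ docker network create -d overlay \
  --subnet=10.0.9.0/24 \
  --gateway=10.0.9.99 \
  my-custom-overlay
```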

### Encrypt traffic on an overlay network

All swarm service management traffic is encrypted by default, using the
[AES algorithm](https://en.wikipedia.org/wiki/Galois/Counter_Mode) in
GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data
every 12 hours.

To encrypt application data as well, add `--opt encrypted` when creating the
overlay network. This enables IPSEC encryption at the VXLAN level. This
encryption imposes a non-negligible performance penalty, so you should test this
option before using it in production.
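
A minimal sketch of creating such an encrypted network (the name is
illustrative):

```bash
# Application data on this overlay is encrypted with IPSEC.
$ docker network create \
  --opt encrypted \
  --driver overlay \
  my-encrypted-net
```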

When you enable overlay encryption, Docker creates IPsec tunnels between all the
nodes where tasks are scheduled for services attached to the overlay network.
These tunnels also use the AES algorithm in GCM mode and manager nodes
automatically rotate the keys every 12 hours.

> **Do not attach Windows nodes to encrypted overlay networks.**
>
> Overlay network encryption is not supported on Windows. If a Windows node
> attempts to connect to an encrypted overlay network, no error is detected but
> the node cannot communicate.
{: .warning }

#### Swarm mode overlay networks and standalone containers

You can combine `--opt encrypted` with `--attachable` and attach unmanaged
containers to the encrypted network:

```bash
$ docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network
```

### Customize the default ingress network

Most users never need to configure the `ingress` network, but Docker 17.05 and
higher allow you to do so. This can be useful if the automatically chosen subnet
conflicts with one that already exists on your network, or you need to customize
other low-level network settings such as the MTU.

Customizing the `ingress` network involves removing and recreating it. This is
usually done before you create any services in the swarm. If you have existing
services which publish ports, those services need to be removed before you can
remove the `ingress` network.

During the time that no `ingress` network exists, existing services which do not
publish ports continue to function but are not load-balanced. Services which
publish ports, such as a WordPress service which publishes port 80, must be
removed first and cannot run again until you recreate the `ingress` network.

1. Inspect the `ingress` network using `docker network inspect ingress`, and
   remove any services whose containers are connected to it. These are services
   that publish ports, such as a WordPress service which publishes port 80. If
   any such services are still running, the next step fails.

2. Remove the existing `ingress` network:

   ```bash
   $ docker network rm ingress

   WARNING! Before removing the routing-mesh network, make sure all the nodes
   in your swarm run the same docker engine version. Otherwise, removal may not
   be effective and functionality of newly created ingress networks will be
   impaired.
   Are you sure you want to continue? [y/N]
   ```

3. Create a new overlay network using the `--ingress` flag, along with the
   custom options you want to set. This example sets the MTU to 1200, sets
   the subnet to `10.11.0.0/16`, and sets the gateway to `10.11.0.2`.

   ```bash
   $ docker network create \
     --driver overlay \
     --ingress \
     --subnet=10.11.0.0/16 \
     --gateway=10.11.0.2 \
     --opt com.docker.network.mtu=1200 \
     my-ingress
   ```

   > **Note**: You can name your `ingress` network something other than
   > `ingress`, but you can only have one. An attempt to create a second one
   > fails.

4. Restart the services that you stopped in the first step, as shown in the
   sketch after this list.
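
For example, if the service you removed in step 1 was a hypothetical
WordPress-style service publishing port 80, you might recreate it along these
lines (the service name, network name, and image are placeholders):

```bash
$ docker service create \
  --name my-wordpress \
  --network my-overlay \
  --publish published=80,target=80 \
  wordpress
```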

### Customize the docker_gwbridge interface

The `docker_gwbridge` is a virtual bridge that connects the overlay networks
(including the `ingress` network) to an individual Docker daemon's physical
network. Docker creates it automatically when you initialize a swarm or join a
Docker host to a swarm, but it is not a Docker device. It exists in the kernel
of the Docker host. If you need to customize its settings, you must do so before
joining the Docker host to the swarm, or after temporarily removing the host
from the swarm.

1. Stop Docker.

2. Delete the existing `docker_gwbridge` interface.

   ```bash
   $ sudo ip link set docker_gwbridge down

   $ sudo ip link del docker_gwbridge
   ```

3. Start Docker. Do not join or initialize the swarm.

4. Create or re-create the `docker_gwbridge` bridge manually with your custom
   settings, using the `docker network create` command.
   This example uses the subnet `10.11.0.0/16`. For a full list of customizable
   options, see [Bridge driver options](/engine/reference/commandline/network_create.md#bridge-driver-options).

   ```bash
   $ docker network create \
     --subnet 10.11.0.0/16 \
     --opt com.docker.network.bridge.name=docker_gwbridge \
     --opt com.docker.network.bridge.enable_icc=false \
     --opt com.docker.network.bridge.enable_ip_masquerade=true \
     docker_gwbridge
   ```

5. Initialize or join the swarm. Since the bridge already exists, Docker does
   not create it with automatic settings.
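
For instance, re-initializing a single-node swarm might look like this (the
advertise address is a placeholder for one of the host's own IP addresses):

```bash
$ docker swarm init --advertise-addr 192.168.99.100
```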

## Operations for swarm services

### Publish ports on an overlay network

Swarm services connected to the same overlay network effectively expose all
ports to each other. For a port to be accessible outside of the service, that
port must be _published_ using the `-p` or `--publish` flag on `docker service
create` or `docker service update`. Both the legacy colon-separated syntax and
the newer comma-separated value syntax are supported. The longer syntax is
preferred because it is somewhat self-documenting.

<table>
<thead>
<tr>
<th>Flag value</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td><tt>-p 8080:80</tt> or<br /><tt>-p published=8080,target=80</tt></td>
<td>Map TCP port 80 on the service to port 8080 on the routing mesh.</td>
</tr>
<tr>
<td><tt>-p 8080:80/udp</tt> or<br /><tt>-p published=8080,target=80,protocol=udp</tt></td>
<td>Map UDP port 80 on the service to port 8080 on the routing mesh.</td>
</tr>
<tr>
<td><tt>-p 8080:80/tcp -p 8080:80/udp</tt> or <br /><tt>-p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp</tt></td>
<td>Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh.</td>
</tr>
</table>
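
Putting this together, a sketch that publishes container port 80 of a
hypothetical `nginx`-based service as port 8080 on the routing mesh, using the
longer syntax (the service and network names are placeholders):

```bash
$ docker service create \
  --name my-web \
  --network my-overlay \
  --publish published=8080,target=80 \
  nginx
```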

### Bypass the routing mesh for a swarm service

By default, swarm services which publish ports do so using the routing mesh.
When you connect to a published port on any swarm node (whether it is running a
given service or not), you are redirected to a worker which is running that
service, transparently. Effectively, Docker acts as a load balancer for your
swarm services. Services using the routing mesh are running in _virtual IP (VIP)
mode_. Even a service running on each node (by means of the `--mode global`
flag) uses the routing mesh. When using the routing mesh, there is no guarantee
about which Docker node services client requests.

To bypass the routing mesh, you can start a service using _DNS Round Robin
(DNSRR) mode_, by setting the `--endpoint-mode` flag to `dnsrr`. You must run
your own load balancer in front of the service. A DNS query for the service name
on the Docker host returns a list of IP addresses for the nodes running the
service. Configure your load balancer to consume this list and balance the
traffic across the nodes.
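
A minimal sketch of starting a service in DNSRR mode (the service and network
names are placeholders):

```bash
$ docker service create \
  --name my-dnsrr-service \
  --network my-overlay \
  --endpoint-mode dnsrr \
  nginx
```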

### Separate control and data traffic

By default, control traffic relating to swarm management and traffic to and from
your applications runs over the same network, though the swarm control traffic
is encrypted. You can configure Docker to use separate network interfaces for
handling the two different types of traffic. When you initialize or join the
swarm, specify `--advertise-addr` and `--data-path-addr` separately. You must do
this for each node joining the swarm.
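
For example, a sketch that initializes a swarm with control traffic on one
interface and application data traffic on another (both addresses are
illustrative and stand in for the host's own interface addresses):

```bash
$ docker swarm init \
  --advertise-addr 10.0.0.1 \
  --data-path-addr 192.168.1.1
```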

## Operations for standalone containers on overlay networks

### Attach a standalone container to an overlay network

The `ingress` network is created without the `--attachable` flag, which means
that only swarm services can use it, and not standalone containers. You can
connect standalone containers to user-defined overlay networks which are created
with the `--attachable` flag. This gives standalone containers running on
different Docker daemons the ability to communicate without the need to set up
routing on the individual Docker daemon hosts.
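
As a sketch, assuming two Docker hosts that are already joined to the same
swarm (the network and container names are placeholders):

```bash
# On the first host: create an attachable overlay network
# and start a container attached to it.
$ docker network create -d overlay --attachable test-net
$ docker run -dit --name alpine1 --network test-net alpine

# On the second host: start another container on the same network,
# then verify that it can reach the first container by name.
$ docker run -dit --name alpine2 --network test-net alpine
$ docker exec alpine2 ping -c 2 alpine1
```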

### Publish ports

| Flag value                      | Description                                                                                                                                              |
|---------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| `-p 8080:80`                    | Map TCP port 80 in the container to port 8080 on the overlay network.                                                                                    |
| `-p 8080:80/udp`                | Map UDP port 80 in the container to port 8080 on the overlay network.                                                                                    |
| `-p 8080:80/tcp -p 8080:80/udp` | Map TCP port 80 in the container to TCP port 8080 on the overlay network, and map UDP port 80 in the container to UDP port 8080 on the overlay network. |
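
For instance, a sketch that runs a standalone container on an attachable
overlay network and publishes a port (names are placeholders):

```bash
$ docker run -d \
  --name my-nginx \
  --network my-attachable-overlay \
  -p 8080:80 \
  nginx
```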

## Next steps

- Go through the [overlay networking tutorial](/network/network-tutorial-overlay.md)
- Learn about [networking from the container's point of view](/config/containers/container-networking.md)
- Learn about [standalone bridge networks](/network/bridge.md)
- Learn about [Macvlan networks](/network/macvlan.md)