Merge pull request #235 from mstanleyjones/96_working_with_networks

Rewrites to the 'Working with Networks' topic
Misty Stanley-Jones 2016-12-12 11:09:50 -08:00 committed by GitHub
commit efd865f13f
1 changed file with 768 additions and 610 deletions


@ -73,10 +73,6 @@ The `dockerd` options that support the `overlay` network are:
* `--cluster-store-opt`
* `--cluster-advertise`
When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
@ -103,12 +99,11 @@ $ docker network create -d overlay \
my-multihost-network
```
Be sure that your subnetworks do not overlap. If they do, network creation
fails and Engine returns an error.
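For instance, in this sketch (the network names are illustrative), the second
command fails because both networks request the same address range:

```bash
$ docker network create -d bridge --subnet 192.168.210.0/24 net-a
$ docker network create -d bridge --subnet 192.168.210.0/24 net-b  # fails: overlaps net-a
```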
When creating a custom network, you can pass additional options to the driver.
The `bridge` driver accepts the following options:
| Option | Equivalent | Description |
|--------------------------------------------------|-------------|-------------------------------------------------------|
@ -127,7 +122,9 @@ The following arguments can be passed to `docker network create` for any network
| `--internal` | - | Restricts external access to the network |
| `--ipv6` | `--ipv6` | Enable IPv6 networking |
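For example, here is a quick sketch (the network name `no-internet` is
illustrative) of using `--internal` to create a network with no external access:

```bash
$ docker network create --internal --subnet 172.29.0.0/16 no-internet
```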
The following example uses `-o` to bind published ports to a specific IP
address, then uses `docker network inspect` to inspect the network, and finally
attaches a new container to the new network.
```bash
$ docker network create -o "com.docker.network.bridge.host_binding_ipv4"="172.23.0.1" my-network
@ -171,15 +168,20 @@ bafb0c808c53 redis "/entrypoint.sh redis" 4 seconds ago
## Connect containers
You can connect an existing container to one or more networks. A container can
connect to networks which use different network drivers. Once connected, the
containers can communicate using another container's IP address or name.
For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.
This example uses seven containers, and directs you to create them as they are
needed.
### Basic container networking example
1. First, create and run two containers, `container1` and `container2`:
```bash
$ docker run -itd --name=container1 busybox
@ -191,7 +193,7 @@ $ docker run -itd --name=container2 busybox
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```
2. Create an isolated, `bridge` network to test with.
```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
@ -199,10 +201,10 @@ $ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```
3. Connect `container2` to the network and then `inspect` the network to verify
the connection:
```bash
$ docker network connect isolated_nw container2
$ docker network inspect isolated_nw
@ -236,10 +238,15 @@ $ docker network inspect isolated_nw
]
```
Notice that `container2` is assigned an IP address automatically. Because
you specified a `--subnet` when creating the network, the IP address was
chosen from that subnet.
As a reminder, `container1` is only connected to the default `bridge` network.
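If you want to confirm that, a quick check is to reuse the `docker inspect`
format string that appears later in this topic; only the `bridge` entry should
be listed for `container1`:

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container1
```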
4. Start a third container, but this time assign it an IP address using the
`--ip` flag and connect it to the `isolated_nw` network using the `docker run`
command's `--network` option:
```bash
$ docker run --network=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
@ -247,27 +254,47 @@ $ docker run --network=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybo
467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```
As long as the IP address you specify is part of the network's subnet, you can
assign an IPv4 or IPv6 address to a container when connecting it to a
user-defined network, by passing the `--ip` or `--ip6` flag to `docker run` or
`docker network connect`. The address you specify is preserved as part of the
container's configuration and is re-applied when the container is reloaded.
This feature is only available on user-defined networks, because there is no
guarantee that a container's subnet will stay the same when the Docker daemon
restarts unless you use a user-defined network.
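For example, here is a sketch of assigning a specific address with
`docker network connect` (the container name `web` is illustrative and assumes
a container that is not yet connected to `isolated_nw`):

```bash
# Hypothetical container name; the address must fall within 172.25.0.0/16
$ docker network connect --ip 172.25.3.20 isolated_nw web
```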
5. Inspect the network resources used by `container3`. The
output below is truncated for brevity.
```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container3
{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
{% endraw %}```
Repeat this command for `container2`. If you have Python installed, you can pretty print the output.
{"isolated_nw":
{"IPAMConfig":
{
"IPv4Address":"172.25.3.3"},
"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103",
"Gateway":"172.25.0.1",
"IPAddress":"172.25.3.3",
"IPPrefixLen":16,
"IPv6Gateway":"",
"GlobalIPv6Address":"",
"GlobalIPv6PrefixLen":0,
"MacAddress":"02:42:ac:19:03:03"}
}
}
}
```
Because you connected `container3` to the `isolated_nw` when you started it,
it is not connected to the default `bridge` network at all.
6. Inspect the network resources used by `container2`. If you have Python
installed, you can pretty print the output.
```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
@ -296,30 +323,32 @@ $ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | pyt
"MacAddress": "02:42:ac:19:00:02"
}
}
```
Notice that `container2` belongs to two networks. It joined the default `bridge`
network when you launched it and you connected it to the `isolated_nw` in
step 3.
![](images/working.png)
7. Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:
```bash
$ docker attach container2
```
Use the `ifconfig` command to examine the container's networking stack. You
should see two Ethernet interfaces, one for the default `bridge` network,
and the other for the `isolated_nw` network.
```bash
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
@ -348,9 +377,10 @@ lo Link encap:Local Loopback
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
```
8. The Docker embedded DNS server enables name resolution for containers
connected to a given network. This means that any connected container can
ping another container on the same network by its container name. From
within `container2`, you can ping `container3` by name.
```bash
/ # ping -w 4 container3
@ -365,17 +395,17 @@ PING container3 (172.25.3.3): 56 data bytes
round-trip min/avg/max = 0.070/0.081/0.097 ms
```
This functionality is not available for the default `bridge` network. Both
`container1` and `container2` are connected to the `bridge` network, but
you cannot ping `container1` from `container2` using the container name.
```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```
You can still ping the IP address directly:
```bash
/ # ping -w 4 172.17.0.2
@ -390,59 +420,65 @@ PING 172.17.0.2 (172.17.0.2): 56 data bytes
round-trip min/avg/max = 0.072/0.085/0.101 ms
```
Detach from `container2` and leave it running using `CTRL-p CTRL-q`.
9. Currently, `container2` is attached to both the `bridge` and `isolated_nw`
networks, so it can communicate with both `container1` and `container3`.
However, `container3` and `container1` do not have any networks in common,
so they cannot communicate. To verify this, attach to `container3` and try
to ping `container1` by IP address.
```bash
$ docker attach container3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
```
Detach from `container3` and leave it running using `CTRL-p CTRL-q`.
>You can connect a container to a network even if the container is not running.
However, `docker network inspect` only displays information on running containers.
### Linking containers without using user-defined networks
After you complete the steps in
[Basic container networking example](#basic-container-networking-example),
`container2` can resolve `container3`'s name automatically because both containers
are connected to the `isolated_nw` network. However, containers connected to the
default `bridge` network cannot resolve each other's container name. If you need
containers to be able to communicate with each other over the `bridge` network,
you need to use the legacy [link](default_network/dockerlinks.md) feature.
This is the only use case where using `--link` is recommended. You should
strongly consider using user-defined networks instead.
Using the legacy `--link` flag adds the following features for communication
between containers on the default `bridge` network:
* the ability to resolve container names to IP addresses
* the ability to define a network alias as an alternate way to refer to the linked container, using `--link=CONTAINER-NAME:ALIAS`
* secured container connectivity (in isolation via `--icc=false`)
* environment variable injection
To reiterate, all of these features except environment variable injection are
provided by default when you use a user-defined network, with no additional
configuration required. **Additionally, you get the ability to dynamically
attach to and detach from multiple networks.**
The following example briefly describes how to use `--link`.
1. Continuing with the above example, create a new container, `container4`, and
connect it to the network `isolated_nw`. In addition, link it to container
`container5` (which does not exist yet!) using the `--link` flag.
```bash
$ docker run --network=isolated_nw -itd --name=container4 --link container5:c5 busybox
@ -450,20 +486,36 @@ $ docker run --network=isolated_nw -itd --name=container4 --link container5:c5 b
01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```
This is a little tricky, because `container5` does not exist yet. When
`container5` is created, `container4` will be able to resolve the name `c5` to
`container5`'s IP address.
>**Note:** Any link between containers created with *legacy link* is static in
nature and hard-binds the container with the alias. It does not tolerate
linked container restarts. The new *link* functionality in user defined
networks supports dynamic links between containers, and tolerates restarts and
IP address changes in the linked container.
Since you have not yet created `container5`, trying to ping it results
in an error. Attach to `container4` and try to ping either `container5` or `c5`:
```bash
$ docker attach container4
/ # ping container5
ping: bad address 'container5'
/ # ping c5
ping: bad address 'c5'
```
Detach from `container4` and leave it running using `CTRL-p CTRL-q`.
2. Create another container named `container5`, and link it to `container4`
using the alias `c4`.
```bash
$ docker run --network=isolated_nw -itd --name=container5 --link container4:c4 busybox
@ -471,9 +523,7 @@ $ docker run --network=isolated_nw -itd --name=container5 --link container4:c4 b
72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```
Now attach to `container4` and try to ping `c5` and `container5`.
```bash
$ docker attach container4
@ -500,6 +550,9 @@ PING container5 (172.25.0.5): 56 data bytes
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```
Detach from `container4` and leave it running using `CTRL-p CTRL-q`.
3. Finally, attach to `container5` and verify that you can ping `container4`.
```bash
$ docker attach container5
@ -526,29 +579,40 @@ PING container4 (172.25.0.4): 56 data bytes
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms
```
Detach from `container5` and leave it running using `CTRL-p CTRL-q`.
### Network alias scoping example
When you link containers, whether using the legacy `link` method or using
user-defined networks, any aliases you specify only have meaning to the
container where they are specified, and won't work on other containers on the
default `bridge` network.
In addition, if a container belongs to multiple networks, a given linked alias
is scoped within a given network. Thus, a container can be linked to different
aliases in different networks, and the aliases will not work for containers which
are not on the same network.
The following example illustrates these points.
1. Create another network named `local_alias`
```bash
$ docker network create -d bridge --subnet 172.26.0.0/24 local_alias
76b7dc932e037589e6553f59f76008e5b76fa069638cd39776b890607f567aaa
```
2. Next, connect `container4` and `container5` to the new network `local_alias`
with the aliases `foo` and `bar`:
```bash
$ docker network connect --link container5:foo local_alias container4
$ docker network connect --link container4:bar local_alias container5
```
3. Attach to `container4` and try to ping `container5` using the alias `foo`, then
try pinging it again using the alias `c5`:
```bash
$ docker attach container4
@ -575,9 +639,13 @@ PING c5 (172.25.0.5): 56 data bytes
round-trip min/avg/max = 0.070/0.081/0.097 ms
```
Both pings succeed, but the subnets are different, which means that the
networks are different.
Detach from `container4` and leave it running using `CTRL-p CTRL-q`.
4. Disconnect `container5` from the `isolated_nw` network. Attach to `container4`
and try pinging `c5` and `foo`.
```bash
$ docker network disconnect isolated_nw container5
@ -600,28 +668,35 @@ round-trip min/avg/max = 0.070/0.081/0.097 ms
```
From `container4`, you can no longer reach `container5` using the alias `c5`,
because that link was scoped to the `isolated_nw` network. However, you can
still reach `container5` from `container4` using the alias `foo`, because both
containers are still connected to `local_alias`.
Detach from `container4` and leave it running using `CTRL-p CTRL-q`.
### Limitations of `docker network`
Although `docker network` is the recommended way to control the networks your
containers use, it does have some limitations.
#### Environment variable injection
Environment variable injection is static in nature, and environment variables
cannot be changed after a container is started. The legacy `--link` flag shares
all environment variables with the linked container, but the `docker network` command
has no equivalent. When you connect to a network using `docker network`, no
environment variables can be dynamically shared among containers.
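The following sketch illustrates the difference, using throwaway `busybox`
containers (the container and network names are illustrative):

```bash
# Legacy --link on the default bridge network injects the linked container's
# environment variables, prefixed with the alias, into the new container:
$ docker run -d --name legacy-db -e DB_SECRET=example busybox top
$ docker run --rm --link legacy-db:db busybox env | grep DB_

# On a user-defined network, containers can resolve each other by name,
# but no environment variables are shared between them:
$ docker network create env-demo
$ docker run -d --network=env-demo --name net-db -e DB_SECRET=example busybox top
$ docker run --rm --network=env-demo busybox env | grep DB_   # prints nothing
```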
#### Understanding network-scoped aliases
Legacy links provide outgoing name resolution that is isolated within the
container in which the alias is configured. Network-scoped aliases do not allow
for this one-way isolation, but provide the alias to all members of the network.
The following example illustrates this limitation.
1. Create another container called `container6` in the network `isolated_nw`
and give it the network alias `app`.
```bash
$ docker run --network=isolated_nw -itd --name=container6 --network-alias app busybox
@ -629,6 +704,9 @@ $ docker run --network=isolated_nw -itd --name=container6 --network-alias app bu
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
```
2. Attach to `container4`. Try pinging the container by name (`container6`) and by
network alias (`app`). Notice that the IP address is the same.
```bash
$ docker attach container4
@ -655,17 +733,21 @@ PING container5 (172.25.0.6): 56 data bytes
round-trip min/avg/max = 0.070/0.081/0.097 ms
```
Detach from `container4` and leave it running using `CTRL-p CTRL-q`.
3. Connect `container6` to the `local_alias` network with the network-scoped
alias `scoped-app`.
```bash
$ docker network connect --alias scoped-app local_alias container6
```
Now `container6` is aliased as `app` in network `isolated_nw`
and as `scoped-app` in network `local_alias`.
4. Try to reach these aliases from `container4` (which is connected to both
these networks) and `container5` (which is connected only to `isolated_nw`).
```bash
@ -681,7 +763,10 @@ PING foo (172.26.0.5): 56 data bytes
--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```
Detach from `container4` and leave it running using `CTRL-p CTRL-q`.
```bash
$ docker attach container5
/ # ping -w 4 scoped-app
@ -689,12 +774,19 @@ ping: bad address 'scoped-app'
```
Detach from `container5` and leave it running using `CTRL-p CTRL-q`.
This shows that an alias is scoped to the network where it is defined, and only
containers connected to that network can access the alias.
#### Resolving multiple containers to a single alias
Multiple containers can share the same network-scoped alias within the same
network. This example illustrates how this works.
1. Launch `container7` in `isolated_nw` with the same alias as `container6`,
which is `app`.
```bash
$ docker run --network=isolated_nw -itd --name=container7 --network-alias app busybox
@ -702,51 +794,92 @@ $ docker run --network=isolated_nw -itd --name=container7 --network-alias app bu
3138c678c123b8799f4c7cc6a0cecc595acbdfa8bf81f621834103cd4f504554
```
When multiple containers share the same alias, the alias resolves to one of
those containers. If that container becomes unavailable, the alias resolves to
another container that shares the alias. This provides a sort of high
availability within the cluster.
>**Note:** When the alias is resolved, the container it resolves to is chosen
at random. For that reason, in the exercises below, you may get different
results in some steps. If a step assumes the result returned is `container6`
but you get `container7`, this is why.
2. Start a continuous ping from `container4` to the `app` alias.
```bash
$ docker attach container4
/ # ping app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms
...
```
The IP address that is returned belongs to `container6`.
3. In another terminal, stop `container6`.
```bash
$ docker stop container6
```
In the terminal attached to `container4`, observe the `ping` output.
It pauses when `container6` goes down, because `ping` resolves the alias to
an IP address only when it is first invoked, and that address is no longer
reachable. However, `ping` keeps running without reporting an error, because
it has no deadline by default.
4. Exit the `ping` command using `CTRL+C` and run it again.
```bash
/ # ping app
PING app (172.25.0.7): 56 data bytes
64 bytes from 172.25.0.7: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.25.0.7: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.25.0.7: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.25.0.7: seq=3 ttl=64 time=0.101 ms
...
```
The `app` alias now resolves to the IP address of `container7`.
5. For one last test, restart `container6`.
```bash
$ docker start container6
```
In the terminal attached to `container4`, run the `ping` command again. It
might now resolve to `container6` again. If you start and stop the `ping`
several times, you will see responses from each of the containers.
```bash
/ # ping app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms
...
```
Stop the ping with `CTRL+C`. Detach from `container4` and leave it running
using `CTRL-p CTRL-q`.
## Disconnecting containers
You can disconnect a container from a network at any time using the `docker network
disconnect` command.
1. Disconnect `container2` from the `isolated_nw` network, then inspect `container2`
and the `isolated_nw` network.
```bash
$ docker network disconnect isolated_nw container2
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
@ -795,11 +928,13 @@ $ docker network inspect isolated_nw
"Options": {}
}
]
```
2. When a container is disconnected from a network, it can no longer communicate
with other containers connected to that network, unless it has other networks
in common with them. Verify that `container2` can no longer reach `container3`,
which is on the `isolated_nw` network.
```bash
$ docker attach container2
@ -830,7 +965,8 @@ PING container3 (172.25.3.3): 56 data bytes
2 packets transmitted, 0 packets received, 100% packet loss
```
3. Verify that `container2` still has full connectivity to the default `bridge`
network.
```bash
/ # ping container1
@ -844,14 +980,27 @@ round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```
4. Remove `container4`, `container5`, `container6`, and `container7`.
```bash
$ docker stop container4 container5 container6 container7
$ docker rm container4 container5 container6 container7
```
### Handling stale network endpoints
In some scenarios, such as ungraceful docker daemon restarts in a
multi-host network, the daemon cannot clean up stale connectivity endpoints.
Such stale endpoints may cause an error if a new container is connected
to that network with the same name as the stale endpoint:
```no-highlight
ERROR: Cannot start container bc0b19c089978f7845633027aa3435624ca3d12dd4f4f764b61eac4c0610f32e: container already connected to network multihost
```
To clean up these stale endpoints, remove the container and disconnect it
from the network forcibly (`docker network disconnect -f`). Now you can
successfully connect the container to the network.
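For example, assuming the stale endpoint is the `redis_db` container on the
`multihost` network from the error above, the cleanup might look like this:

```bash
# Remove the container, then force-remove its stale endpoint from the network
$ docker rm -f redis_db
$ docker network disconnect -f multihost redis_db
```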
```bash
$ docker run -d --name redis_db --network multihost redis
@ -870,12 +1019,16 @@ $ docker run -d --name redis_db --network multihost redis
## Remove a network
When all the containers in a network are stopped or disconnected, you can
remove a network. If a network has connected endpoints, an error occurs.
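For example, trying to remove `isolated_nw` while `container3` is still
connected fails (the exact error text may vary by Docker version):

```bash
$ docker network rm isolated_nw
# fails while container3 is still connected; disconnect all endpoints first
```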
1. Disconnect `container3` from `isolated_nw`.
```bash
$ docker network disconnect isolated_nw container3
```
2. Inspect `isolated_nw` to verify that no other endpoints are connected to it.
```bash
$ docker network inspect isolated_nw
@ -898,19 +1051,24 @@ $ docker network inspect isolated_nw
"Options": {}
}
]
```
3. Remove the `isolated_nw` network.
```bash
$ docker network rm isolated_nw
```
4. List all your networks to verify that `isolated_nw` no longer exists:
```bash
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
4bb8c9bf4292 bridge bridge local
43575911a2bd host host local
76b7dc932e03 local_alias bridge local
b1a086897963 my-network bridge local
3eb020e70bfd none null local
69568e6336d8 simple-network bridge local
```
## Related information