- --net-alias=ALIAS
+ --network-alias=ALIAS
|
In addition to --name as described above, a container is discovered by one or more
- of its configured --net-alias (or --alias in docker network connect command)
+ of its configured --network-alias (or --alias in docker network connect command)
within the user-defined network. The embedded DNS server maintains the mapping between
all of a container's aliases and its IP address on a specific user-defined network.
A container can have different aliases in different networks by using the --alias
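As a rough sketch of the alias behaviour described above (the network, container, and alias names below are illustrative, not taken from this page):

    # Create two user-defined networks.
    $ docker network create net-a
    $ docker network create net-b

    # Start a container on net-a with a network-scoped alias.
    $ docker run -d --name=web1 --network=net-a --network-alias=search nginx

    # Connect the same container to net-b under a different alias.
    $ docker network connect --alias=lookup net-b web1

    # Peers on net-a resolve it as "search"; peers on net-b resolve it as "lookup".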
diff --git a/docs/userguide/networking/default_network/binding.md b/docs/userguide/networking/default_network/binding.md
index d8799f4fbd..0ec495a173 100644
--- a/docs/userguide/networking/default_network/binding.md
+++ b/docs/userguide/networking/default_network/binding.md
@@ -23,6 +23,7 @@ when it starts:
```
$ sudo iptables -t nat -L -n
+
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
@@ -56,6 +57,7 @@ network stack by examining your NAT tables.
# is finished setting up a -P forward:
$ iptables -t nat -L -n
+
...
Chain DOCKER (2 references)
target prot opt source destination
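For context, a sketch of the kind of publish command that produces the `DOCKER` chain entries shown above (the image and command are the ones used elsewhere in these guides; the exact rule output differs per host):

    # Publish container port 5000 on a random host port, then inspect the
    # DNAT rule Docker adds to the DOCKER chain.
    $ docker run -d -P training/webapp python app.py
    $ sudo iptables -t nat -L DOCKER -n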
diff --git a/docs/userguide/networking/default_network/build-bridges.md b/docs/userguide/networking/default_network/build-bridges.md
index 73f35e357e..0cd70215df 100644
--- a/docs/userguide/networking/default_network/build-bridges.md
+++ b/docs/userguide/networking/default_network/build-bridges.md
@@ -27,8 +27,11 @@ stopping the service and removing the interface:
# Stopping Docker and removing docker0
$ sudo service docker stop
+
$ sudo ip link set dev docker0 down
+
$ sudo brctl delbr docker0
+
$ sudo iptables -t nat -F POSTROUTING
```
@@ -41,12 +44,15 @@ customize `docker0`, but it will be enough to illustrate the technique.
# Create our own bridge
$ sudo brctl addbr bridge0
+
$ sudo ip addr add 192.168.5.1/24 dev bridge0
+
$ sudo ip link set dev bridge0 up
# Confirming that our bridge is up and running
$ ip addr show bridge0
+
4: bridge0: mtu 1500 qdisc noop state UP group default
link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.5.1/24 scope global bridge0
@@ -55,11 +61,13 @@ $ ip addr show bridge0
# Tell Docker about it and restart (on Ubuntu)
$ echo 'DOCKER_OPTS="-b=bridge0"' >> /etc/default/docker
+
$ sudo service docker start
# Confirming new outgoing NAT masquerade is set up
$ sudo iptables -t nat -L -n
+
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
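To confirm the end result described above, a quick check (a sketch; the busybox image is only an example) is to start a container and verify it receives an address from the `bridge0` subnet:

    # A new container should now get a 192.168.5.x address from bridge0.
    $ docker run --rm busybox ip addr show eth0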
diff --git a/docs/userguide/networking/default_network/configure-dns.md b/docs/userguide/networking/default_network/configure-dns.md
index 2703aca1d0..71f189e141 100644
--- a/docs/userguide/networking/default_network/configure-dns.md
+++ b/docs/userguide/networking/default_network/configure-dns.md
@@ -20,6 +20,7 @@ How can Docker supply each container with a hostname and DNS configuration, with
```
$$ mount
+
...
/dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
/dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
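The page this hunk touches documents the per-container DNS flags; a minimal sketch of their use (the values are examples only):

    # Override the hostname and DNS configuration for a single container.
    $ docker run -it --hostname=test --dns=8.8.8.8 --dns-search=example.com \
        busybox cat /etc/resolv.conf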
diff --git a/docs/userguide/networking/default_network/container-communication.md b/docs/userguide/networking/default_network/container-communication.md
index 0ca1976333..8e2110af6e 100644
--- a/docs/userguide/networking/default_network/container-communication.md
+++ b/docs/userguide/networking/default_network/container-communication.md
@@ -31,14 +31,18 @@ set `--ip-forward=false` and your system's kernel has it enabled, the
or to turn it on manually:
```
$ sysctl net.ipv4.conf.all.forwarding
+
net.ipv4.conf.all.forwarding = 0
+
$ sysctl net.ipv4.conf.all.forwarding=1
+
$ sysctl net.ipv4.conf.all.forwarding
+
net.ipv4.conf.all.forwarding = 1
```
> **Note**: this setting does not affect containers that use the host
-> network stack (`--net=host`).
+> network stack (`--network=host`).
Many using Docker will want `ip_forward` to be on, to at least make
communication _possible_ between containers and the wider world. May also be
@@ -98,6 +102,7 @@ You can run the `iptables` command on your Docker host to see whether the `FORWA
# When --icc=false, you should see a DROP rule:
$ sudo iptables -L -n
+
...
Chain FORWARD (policy ACCEPT)
target prot opt source destination
@@ -110,6 +115,7 @@ DROP all -- 0.0.0.0/0 0.0.0.0/0
# the subsequent DROP policy for all other packets:
$ sudo iptables -L -n
+
...
Chain FORWARD (policy ACCEPT)
target prot opt source destination
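As a hedged illustration of how those ACCEPT rules come about when the daemon runs with `--icc=false` (container names and images here are illustrative):

    # Only explicitly linked containers are allowed to communicate; linking
    # adds ACCEPT rules for the pair ahead of the DROP shown above.
    $ docker run -d --name db training/postgres
    $ docker run -d --name web --link db:db training/webapp python app.py
    $ sudo iptables -L -n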
diff --git a/docs/userguide/networking/default_network/custom-docker0.md b/docs/userguide/networking/default_network/custom-docker0.md
index 6178b06ab5..f4a3f90c1c 100644
--- a/docs/userguide/networking/default_network/custom-docker0.md
+++ b/docs/userguide/networking/default_network/custom-docker0.md
@@ -30,6 +30,7 @@ Once you have one or more containers up and running, you can confirm that Docker
# Display bridge info
$ sudo brctl show
+
bridge name bridge id STP enabled interfaces
docker0 8000.3a1d7362b4ee no veth65f9
vethdda6
@@ -45,6 +46,7 @@ Finally, the `docker0` Ethernet bridge settings are used every time you create a
$ docker run -i -t --rm base /bin/bash
$$ ip addr show eth0
+
24: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 scope global eth0
@@ -53,6 +55,7 @@ $$ ip addr show eth0
valid_lft forever preferred_lft forever
$$ ip route
+
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3
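The `docker0` settings this page covers are set through daemon flags; a sketch with example values (the addresses below are not taken from this page):

    # Example daemon options for customizing the docker0 bridge.
    $ sudo dockerd --bip=192.168.1.1/24 --fixed-cidr=192.168.1.0/25 --mtu=1500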
diff --git a/docs/userguide/networking/default_network/dockerlinks.md b/docs/userguide/networking/default_network/dockerlinks.md
index 66299002e7..95f32cb4c1 100644
--- a/docs/userguide/networking/default_network/dockerlinks.md
+++ b/docs/userguide/networking/default_network/dockerlinks.md
@@ -43,6 +43,7 @@ range* on your Docker host. Next, when `docker ps` was run, you saw that port
5000 in the container was bound to port 49155 on the host.
$ docker ps nostalgic_morse
+
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse
@@ -88,6 +89,7 @@ configurations. For example, if you've bound the container port to the
`localhost` on the host machine, then the `docker port` output will reflect that.
$ docker port nostalgic_morse 5000
+
127.0.0.1:49155
> **Note:**
@@ -132,6 +134,7 @@ name the container `web`. You can see the container's name using the
`docker ps` command.
$ docker ps -l
+
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aed84ee21bde training/webapp:latest python app.py 12 hours ago Up 2 seconds 0.0.0.0:49154->5000/tcp web
@@ -187,6 +190,7 @@ example as:
Next, inspect your linked containers with `docker inspect`:
$ docker inspect -f "{{ .HostConfig.Links }}" web
+
[/db:/web/db]
You can see that the `web` container is now linked to the `db` container
@@ -273,6 +277,7 @@ command to list the specified container's environment variables.
```
$ docker run --rm --name web2 --link db:db training/webapp env
+
. . .
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
@@ -310,7 +315,9 @@ source container to the `/etc/hosts` file. Here's an entry for the `web`
container:
$ docker run -t -i --rm --link db:webdb training/webapp /bin/bash
+
root@aed84ee21bde:/opt/webapp# cat /etc/hosts
+
172.17.0.7 aed84ee21bde
. . .
172.17.0.5 webdb 6e5cdeb2d300 db
@@ -324,7 +331,9 @@ also be added in `/etc/hosts` for the linked container's IP address. You can pin
that host now via any of these entries:
root@aed84ee21bde:/opt/webapp# apt-get install -yqq inetutils-ping
+
root@aed84ee21bde:/opt/webapp# ping webdb
+
PING webdb (172.17.0.5): 48 data bytes
56 bytes from 172.17.0.5: icmp_seq=0 ttl=64 time=0.267 ms
56 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.250 ms
@@ -348,9 +357,13 @@ will be automatically updated with the source container's new IP address,
allowing linked communication to continue.
$ docker restart db
+
db
+
$ docker run -t -i --rm --link db:db training/webapp /bin/bash
+
root@aed84ee21bde:/opt/webapp# cat /etc/hosts
+
172.17.0.7 aed84ee21bde
. . .
172.17.0.9 db
diff --git a/docs/userguide/networking/default_network/ipv6.md b/docs/userguide/networking/default_network/ipv6.md
index fc6c968a50..64a1b7e55b 100644
--- a/docs/userguide/networking/default_network/ipv6.md
+++ b/docs/userguide/networking/default_network/ipv6.md
@@ -48,7 +48,9 @@ starting dockerd with `--ip-forward=false`):
```
$ ip -6 route add 2001:db8:1::/64 dev docker0
+
$ sysctl net.ipv6.conf.default.forwarding=1
+
$ sysctl net.ipv6.conf.all.forwarding=1
```
@@ -113,6 +115,7 @@ configure the IPv6 addresses `2001:db8::c000` to `2001:db8::c00f`:
```
$ ip -6 addr show
+
1: lo: mtu 65536
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
@@ -165,6 +168,7 @@ device to the container network:
```
$ ip -6 route show
+
2001:db8::c008/125 dev docker0 metric 1
2001:db8::/64 dev eth0 proto kernel metric 256
```
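For reference, a sketch of the daemon invocation this page builds on, reusing the documentation's `2001:db8` example range:

    # Enable IPv6 and give the daemon a /64 to allocate container addresses from.
    $ sudo dockerd --ipv6 --fixed-cidr-v6="2001:db8:1::/64"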
diff --git a/docs/userguide/networking/dockernetworks.md b/docs/userguide/networking/dockernetworks.md
index 2bab1b41c3..7a70e066d2 100644
--- a/docs/userguide/networking/dockernetworks.md
+++ b/docs/userguide/networking/dockernetworks.md
@@ -29,6 +29,7 @@ these networks using the `docker network ls` command:
```
$ docker network ls
+
NETWORK ID NAME DRIVER
7fca4eb8c647 bridge bridge
9f904ee27bf5 none null
@@ -36,17 +37,18 @@ cf03ee007fb4 host host
```
Historically, these three networks are part of Docker's implementation. When
-you run a container you can use the `--net` flag to specify which network you
+you run a container you can use the `--network` flag to specify which network you
want to run a container on. These three networks are still available to you.
The `bridge` network represents the `docker0` network present in all Docker
installations. Unless you specify otherwise with the `docker run
---net=` option, the Docker daemon connects containers to this network
+--network=` option, the Docker daemon connects containers to this network
by default. You can see this bridge as part of a host's network stack by using
the `ifconfig` command on the host.
```
$ ifconfig
+
docker0 Link encap:Ethernet HWaddr 02:42:47:bc:3a:eb
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
@@ -100,6 +102,7 @@ command returns information about a network:
```
$ docker network inspect bridge
+
[
{
"Name": "bridge",
@@ -132,9 +135,11 @@ The `docker run` command automatically adds new containers to this network.
```
$ docker run -itd --name=container1 busybox
+
3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c
$ docker run -itd --name=container2 busybox
+
94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
```
@@ -142,6 +147,7 @@ Inspecting the `bridge` network again after starting two containers shows both n
```
$ docker network inspect bridge
+
[
{
"Name": "bridge",
@@ -215,6 +221,7 @@ Then use `ping` for about 3 seconds to test the connectivity of the containers o
```
root@0cb243cd1293:/# ping -w3 172.17.0.3
+
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
@@ -229,6 +236,7 @@ Finally, use the `cat` command to check the `container1` network configuration:
```
root@0cb243cd1293:/# cat /etc/hosts
+
172.17.0.2 3386a527aa08
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
@@ -243,6 +251,7 @@ To detach from a `container1` and leave it running use `CTRL-p CTRL-q`.Then, att
$ docker attach container2
root@0cb243cd1293:/# ifconfig
+
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
@@ -262,6 +271,7 @@ lo Link encap:Local Loopback
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@0cb243cd1293:/# ping -w3 172.17.0.2
+
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
@@ -311,6 +321,7 @@ $ docker network create --driver bridge isolated_nw
1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b
$ docker network inspect isolated_nw
+
[
{
"Name": "isolated_nw",
@@ -332,6 +343,7 @@ $ docker network inspect isolated_nw
]
$ docker network ls
+
NETWORK ID NAME DRIVER
9f904ee27bf5 none null
cf03ee007fb4 host host
@@ -340,10 +352,11 @@ c5ee82f76de3 isolated_nw bridge
```
-After you create the network, you can launch containers on it using the `docker run --net=` option.
+After you create the network, you can launch containers on it using the `docker run --network=` option.
```
-$ docker run --net=isolated_nw -itd --name=container3 busybox
+$ docker run --network=isolated_nw -itd --name=container3 busybox
+
8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c
$ docker network inspect isolated_nw
@@ -460,7 +473,7 @@ provides complete isolation for the containers.
Then, on each host, launch containers making sure to specify the network name.
- $ docker run -itd --net=my-multi-host-network busybox
+ $ docker run -itd --network=my-multi-host-network busybox
Once connected, each container has access to all the containers in the network
regardless of which Docker host the container was launched on.
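A minimal sketch of the create step that pairs with the run command above, assuming the daemons are already configured with a key-value store as this guide describes:

    # Create an overlay network spanning the cluster, then attach a container.
    $ docker network create --driver overlay my-multi-host-network
    $ docker run -itd --network=my-multi-host-network busybox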
diff --git a/docs/userguide/networking/get-started-overlay.md b/docs/userguide/networking/get-started-overlay.md
index 89d5b2ca59..709d48e717 100644
--- a/docs/userguide/networking/get-started-overlay.md
+++ b/docs/userguide/networking/get-started-overlay.md
@@ -73,6 +73,7 @@ key-value stores. This example uses Consul.
5. Run the `docker ps` command to see the `consul` container.
$ docker ps
+
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4d51392253b3 progrium/consul "/bin/start -server -" 25 minutes ago Up 25 minutes 53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp admiring_panini
@@ -111,6 +112,7 @@ that machine options that are needed by the `overlay` network driver.
3. List your machines to confirm they are all up and running.
$ docker-machine ls
+
NAME ACTIVE DRIVER STATE URL SWARM
default - virtualbox Running tcp://192.168.99.100:2376
mh-keystore * virtualbox Running tcp://192.168.99.103:2376
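This step provisions the Swarm hosts with engine options pointing at the key-value store; a rough sketch of such a `docker-machine create` call (the discovery URL and option values here are assumptions, not copied from this page):

    # Provision a Swarm agent whose engine registers with the Consul store.
    $ docker-machine create -d virtualbox \
        --swarm --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
        --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
        --engine-opt="cluster-advertise=eth1:2376" \
        mhs-demo1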
@@ -134,6 +136,7 @@ To create an overlay network
2. Use the `docker info` command to view the Swarm.
$ docker info
+
Containers: 3
Images: 2
Role: primary
@@ -171,6 +174,7 @@ To create an overlay network
4. Check that the network is running:
$ docker network ls
+
NETWORK ID NAME DRIVER
412c2496d0eb mhs-demo1/host host
dd51763e6dd2 mhs-demo0/bridge bridge
@@ -187,14 +191,19 @@ To create an overlay network
5. Switch to each Swarm agent in turn and list the networks.
$ eval $(docker-machine env mhs-demo0)
+
$ docker network ls
+
NETWORK ID NAME DRIVER
6b07d0be843f my-net overlay
dd51763e6dd2 bridge bridge
b4234109bd9b none null
1aeead6dd890 host host
+
$ eval $(docker-machine env mhs-demo1)
+
$ docker network ls
+
NETWORK ID NAME DRIVER
d0bb78cbe7bd bridge bridge
1c0eb8f69ebb none null
@@ -214,11 +223,12 @@ Once your network is created, you can start a container on any of the hosts and
2. Start an Nginx web server on the `mhs-demo0` instance.
- $ docker run -itd --name=web --net=my-net --env="constraint:node==mhs-demo0" nginx
+ $ docker run -itd --name=web --network=my-net --env="constraint:node==mhs-demo0" nginx
4. Run a BusyBox instance on the `mhs-demo1` instance and get the contents of the Nginx server's home page.
- $ docker run -it --rm --net=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
+ $ docker run -it --rm --network=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
+
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
ab2b8a86ca6c: Pull complete
@@ -268,6 +278,7 @@ to have external connectivity outside of their cluster.
2. View the `docker_gwbridge` network, by listing the networks.
$ docker network ls
+
NETWORK ID NAME DRIVER
6b07d0be843f my-net overlay
dd51763e6dd2 bridge bridge
@@ -278,7 +289,9 @@ to have external connectivity outside of their cluster.
3. Repeat steps 1 and 2 on the Swarm master.
$ eval $(docker-machine env mhs-demo0)
+
$ docker network ls
+
NETWORK ID NAME DRIVER
6b07d0be843f my-net overlay
d0bb78cbe7bd bridge bridge
@@ -289,6 +302,7 @@ to have external connectivity outside of their cluster.
2. Check the Nginx container's network interfaces.
$ docker exec web ip addr
+
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
diff --git a/docs/userguide/networking/work-with-networks.md b/docs/userguide/networking/work-with-networks.md
index 66c6bd1a63..c142383d4d 100644
--- a/docs/userguide/networking/work-with-networks.md
+++ b/docs/userguide/networking/work-with-networks.md
@@ -42,7 +42,9 @@ bridge network for you.
```bash
$ docker network create simple-network
+
69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a
+
$ docker network inspect simple-network
[
{
@@ -134,8 +136,11 @@ For example, now let's use `-o` or `--opt` options to specify an IP address bind
```bash
$ docker network create -o "com.docker.network.bridge.host_binding_ipv4"="172.23.0.1" my-network
+
b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a
+
$ docker network inspect my-network
+
[
{
"Name": "my-network",
@@ -158,9 +163,13 @@ $ docker network inspect my-network
}
}
]
-$ docker run -d -P --name redis --net my-network redis
+
+$ docker run -d -P --name redis --network my-network redis
+
bafb0c808c53104b2c90346f284bda33a69beadcab4fc83ab8f2c5a4410cd129
+
$ docker ps
+
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bafb0c808c53 redis "/entrypoint.sh redis" 4 seconds ago Up 3 seconds 172.23.0.1:32770->6379/tcp redis
```
@@ -179,9 +188,11 @@ Create two containers for this example:
```bash
$ docker run -itd --name=container1 busybox
+
18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731
$ docker run -itd --name=container2 busybox
+
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```
@@ -189,6 +200,7 @@ Then create an isolated, `bridge` network to test with.
```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
+
06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```
@@ -197,7 +209,9 @@ the connection:
```
$ docker network connect isolated_nw container2
+
$ docker network inspect isolated_nw
+
[
{
"Name": "isolated_nw",
@@ -230,10 +244,11 @@ $ docker network inspect isolated_nw
You can see that the Engine automatically assigns an IP address to `container2`.
Given we specified a `--subnet` when creating the network, Engine picked
an address from that same subnet. Now, start a third container and connect it to
-the network on launch using the `docker run` command's `--net` option:
+the network on launch using the `docker run` command's `--network` option:
```bash
-$ docker run --net=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
+$ docker run --network=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
+
467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```
@@ -251,6 +266,7 @@ Now, inspect the network resources used by `container3`.
```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container3
+
{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
```
@@ -258,6 +274,7 @@ Repeat this command for `container2`. If you have Python installed, you can pret
```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
+
{
"bridge": {
"NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
@@ -391,6 +408,7 @@ same network and cannot communicate. Test, this now by attaching to
```bash
$ docker attach container3
+
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
@@ -432,7 +450,8 @@ Continuing with the above example, create another container `container4` in
for other containers in the same network.
```bash
-$ docker run --net=isolated_nw -itd --name=container4 --link container5:c5 busybox
+$ docker run --network=isolated_nw -itd --name=container4 --link container5:c5 busybox
+
01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```
@@ -452,7 +471,8 @@ Now let us launch another container named `container5` linking `container4` to
c4.
```bash
-$ docker run --net=isolated_nw -itd --name=container5 --link container4:c4 busybox
+$ docker run --network=isolated_nw -itd --name=container5 --link container4:c4 busybox
+
72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```
@@ -462,6 +482,7 @@ container name and its alias c5 and `container5` will be able to reach
```bash
$ docker attach container4
+
/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
@@ -487,6 +508,7 @@ round-trip min/avg/max = 0.070/0.081/0.097 ms
```bash
$ docker attach container5
+
/ # ping -w 4 c4
PING c4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
@@ -607,12 +629,14 @@ Continuing with the above example, create another container in `isolated_nw`
with a network alias.
```bash
-$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
+$ docker run --network=isolated_nw -itd --name=container6 --network-alias app busybox
+
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
```
```bash
$ docker attach container4
+
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
@@ -678,7 +702,8 @@ network-scoped alias within the same network. For example, let's launch
`container7` in `isolated_nw` with the same alias as `container6`
```bash
-$ docker run --net=isolated_nw -itd --name=container7 --net-alias app busybox
+$ docker run --network=isolated_nw -itd --name=container7 --network-alias app busybox
+
3138c678c123b8799f4c7cc6a0cecc595acbdfa8bf81f621834103cd4f504554
```
@@ -692,6 +717,7 @@ verify that `container7` is resolving the `app` alias.
```bash
$ docker attach container4
+
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
@@ -706,6 +732,7 @@ round-trip min/avg/max = 0.070/0.081/0.097 ms
$ docker stop container6
$ docker attach container4
+
/ # ping -w 4 app
PING app (172.25.0.7): 56 data bytes
64 bytes from 172.25.0.7: seq=0 ttl=64 time=0.095 ms
@@ -728,6 +755,7 @@ disconnect` command.
$ docker network disconnect isolated_nw container2
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
+
{
"bridge": {
"NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
@@ -744,6 +772,7 @@ $ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | pyt
$ docker network inspect isolated_nw
+
[
{
"Name": "isolated_nw",
@@ -830,13 +859,16 @@ endpoint from the network. Once the endpoint is cleaned up, the container can
be connected to the network.
```bash
-$ docker run -d --name redis_db --net multihost redis
+$ docker run -d --name redis_db --network multihost redis
+
ERROR: Cannot start container bc0b19c089978f7845633027aa3435624ca3d12dd4f4f764b61eac4c0610f32e: container already connected to network multihost
$ docker rm -f redis_db
+
$ docker network disconnect -f multihost redis_db
-$ docker run -d --name redis_db --net multihost redis
+$ docker run -d --name redis_db --network multihost redis
+
7d986da974aeea5e9f7aca7e510bdb216d58682faa83a9040c2f2adc0544795a
```
@@ -851,6 +883,7 @@ $ docker network disconnect isolated_nw container3
```bash
docker network inspect isolated_nw
+
[
{
"Name": "isolated_nw",
@@ -878,6 +911,7 @@ List all your networks to verify the `isolated_nw` was removed:
```bash
$ docker network ls
+
NETWORK ID NAME DRIVER
72314fa53006 host host
f7ab26d71dbd bridge bridge
diff --git a/docs/userguide/storagedriver/aufs-driver.md b/docs/userguide/storagedriver/aufs-driver.md
index e64c33c972..af0261591f 100644
--- a/docs/userguide/storagedriver/aufs-driver.md
+++ b/docs/userguide/storagedriver/aufs-driver.md
@@ -97,6 +97,7 @@ You can only use the AUFS storage driver on Linux systems with AUFS installed.
Use the following command to determine if your system supports AUFS.
$ grep aufs /proc/filesystems
+
nodev aufs
This output indicates the system supports AUFS. Once you've verified your
@@ -116,6 +117,7 @@ Once your daemon is running, verify the storage driver with the `docker info`
command.
$ sudo docker info
+
Containers: 1
Images: 4
Storage Driver: aufs
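A sketch of starting the daemon with AUFS explicitly selected before running the `docker info` check above (how the flag is passed varies by distro and init system):

    # Start the daemon with the AUFS storage driver explicitly selected.
    $ sudo dockerd --storage-driver=aufs &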
@@ -153,6 +155,7 @@ stacked below it in the union mount. Remember, these directory names do no map
to image layer IDs with Docker 1.10 and higher.
$ cat /var/lib/docker/aufs/layers/91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c
+
d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82
c22013c8472965aa5b62559f2b540cd440716ef149756e7b958a1b2aba421e87
d3a1f33e8a5a513092f01bb7eb1c2abf4d711e5105390a3fe1ae2248cfde1391
diff --git a/docs/userguide/storagedriver/btrfs-driver.md b/docs/userguide/storagedriver/btrfs-driver.md
index cc329e731e..dd5da2a229 100644
--- a/docs/userguide/storagedriver/btrfs-driver.md
+++ b/docs/userguide/storagedriver/btrfs-driver.md
@@ -112,6 +112,7 @@ commands. The example below shows a truncated output of an `ls -l` command an
image layer:
$ ls -l /var/lib/docker/btrfs/subvolumes/0a17decee4139b0de68478f149cc16346f5e711c5ae3bb969895f22dd6723751/
+
total 0
drwxr-xr-x 1 root root 1372 Oct 9 08:39 bin
drwxr-xr-x 1 root root 0 Apr 10 2014 boot
@@ -173,6 +174,7 @@ Assuming your system meets the prerequisites, do the following:
1. Install the "btrfs-tools" package.
$ sudo apt-get install btrfs-tools
+
Reading package lists... Done
Building dependency tree
|
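Beyond installing the tools, a Btrfs-backed setup typically needs a formatted device mounted at Docker's data root and the driver selected on the daemon; a hedged sketch, with `/dev/xvdb` as an assumed example device:

    # Format a spare block device as Btrfs, mount it at Docker's data directory,
    # and start the daemon with the btrfs storage driver.
    $ sudo mkfs.btrfs -f /dev/xvdb
    $ sudo mount -t btrfs /dev/xvdb /var/lib/docker
    $ sudo dockerd --storage-driver=btrfs &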