mirror of https://github.com/docker/docs.git
<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Docker Docs</title>
<link>http://localhost/</link>
<description>Recent content on Docker Docs</description>
<generator>Hugo -- gohugo.io</generator>
<language>en-us</language>
<atom:link href="http://localhost/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Link via an ambassador container</title>
<link>http://localhost/articles/ambassador_pattern_linking/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://localhost/articles/ambassador_pattern_linking/</guid>
<description>
<h1 id="link-via-an-ambassador-container">Link via an ambassador container</h1>

<h2 id="introduction">Introduction</h2>

<p>Rather than hardcoding network links between a service consumer and
a provider, Docker encourages service portability. For example, instead of:</p>

<pre><code>(consumer) --&gt; (redis)
</code></pre>

<p>which requires you to restart the <code>consumer</code> to attach it to a different
<code>redis</code> service, you can add ambassadors:</p>

<pre><code>(consumer) --&gt; (redis-ambassador) --&gt; (redis)
</code></pre>

<p>or:</p>

<pre><code>(consumer) --&gt; (redis-ambassador) ---network---&gt; (redis-ambassador) --&gt; (redis)
</code></pre>

<p>When you need to rewire your consumer to talk to a different Redis
server, you can simply restart the <code>redis-ambassador</code> container that the
consumer is connected to.</p>

<p>This pattern also allows you to transparently move the Redis server to a
different Docker host from the consumer.</p>

<p>Using the <code>svendowideit/ambassador</code> container, the link wiring is
controlled entirely from the <code>docker run</code> parameters.</p>
<h2 id="two-host-example">Two host example</h2>

<p>Start the actual Redis server on one Docker host:</p>

<pre><code>big-server $ docker run -d --name redis crosbymichael/redis
</code></pre>

<p>Then add an ambassador linked to the Redis server, mapping a port to the
outside world:</p>

<pre><code>big-server $ docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador
</code></pre>

<p>On the other host, you can set up another ambassador, setting environment
variables for each remote port you want to proxy to the <code>big-server</code>:</p>

<pre><code>client-server $ docker run -d --name redis_ambassador --expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador
</code></pre>

<p>Then, on the <code>client-server</code> host, you can use a Redis client container
to talk to the remote Redis server, just by linking to the local Redis
ambassador:</p>

<pre><code>client-server $ docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379&gt; ping
PONG
</code></pre>
<h2 id="how-it-works">How it works</h2>

<p>The following example shows what the <code>svendowideit/ambassador</code> container
does automatically (with a tiny amount of <code>sed</code>).</p>

<p>On the Docker host (192.168.1.52) that Redis will run on:</p>

<pre><code># start actual redis server
$ docker run -d --name redis crosbymichael/redis

# get a redis-cli container for connection testing
$ docker pull relateiq/redis-cli

# test the redis server by talking to it directly
$ docker run -t -i --rm --link redis:redis relateiq/redis-cli
redis 172.17.0.136:6379&gt; ping
PONG
^D

# add redis ambassador
$ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 busybox sh
</code></pre>

<p>In the <code>redis_ambassador</code> container, you can see the linked Redis
container&rsquo;s environment with <code>env</code>:</p>

<pre><code>$ env
REDIS_PORT=tcp://172.17.0.136:6379
REDIS_PORT_6379_TCP_ADDR=172.17.0.136
REDIS_NAME=/redis_ambassador/redis
HOSTNAME=19d7adf4705e
REDIS_PORT_6379_TCP_PORT=6379
HOME=/
REDIS_PORT_6379_TCP_PROTO=tcp
container=lxc
REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
</code></pre>

<p>This environment is used by the ambassador <code>socat</code> script to expose Redis
to the world (via the <code>-p 6379:6379</code> port mapping):</p>

<pre><code>$ docker rm redis_ambassador
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 docker-ut sh

$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379
</code></pre>
<p>Now you can ping the Redis server via the ambassador. Next, go to a
different server:</p>

<pre><code>$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i --expose 6379 --name redis_ambassador docker-ut sh

$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
</code></pre>

<p>And get the <code>redis-cli</code> image so we can talk over the ambassador bridge:</p>

<pre><code>$ docker pull relateiq/redis-cli
$ docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379&gt; ping
PONG
</code></pre>
<h2 id="the-svendowideit-ambassador-dockerfile">The svendowideit/ambassador Dockerfile</h2>

<p>The <code>svendowideit/ambassador</code> image is a small <code>busybox</code> image with
<code>socat</code> built in. When you start the container, it uses a small <code>sed</code>
script to parse out the (possibly multiple) link environment variables
and set up the port forwarding. On the remote host, you need to set the
variable using the <code>-e</code> command line option.</p>

<pre><code>--expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379
</code></pre>

<p>This will forward the local port <code>1234</code> to the remote IP and port, in this
case <code>192.168.1.52:6379</code>.</p>

<pre><code>#
#
# first you need to build the docker-ut image
# using ./contrib/mkimage-unittest.sh
# then
# docker build -t SvenDowideit/ambassador .
# docker tag SvenDowideit/ambassador ambassador
# then to run it (on the host that has the real backend on it)
# docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 ambassador
# on the remote host, you can set up another ambassador
# docker run -t -i --name redis_ambassador --expose 6379 sh

FROM docker-ut
MAINTAINER SvenDowideit@home.org.au


CMD env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&amp;/' | sh &amp;&amp; top
</code></pre>
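<p>For example, given the linked environment variable from the two-host example, the <code>sed</code> expression in the <code>CMD</code> above rewrites it into a <code>socat</code> command (shown here without the trailing backgrounding ampersand, so the transformation itself is easy to try on its own):</p>

```shell
# Feed a sample link variable through the same sed expression used in the CMD
echo 'REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379' \
  | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3/'
# prints: socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
```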
</description>
</item>

<item>
<title>Resizing a Boot2Docker volume</title>
<link>http://localhost/articles/b2d_volume_resize/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://localhost/articles/b2d_volume_resize/</guid>
<description>

<h1 id="getting-no-space-left-on-device-errors-with-boot2docker">Getting “no space left on device” errors with Boot2Docker?</h1>

<p>If you&rsquo;re using Boot2Docker with a large number of images, or the images you&rsquo;re
working with are very large, your pulls might start failing with &ldquo;no space left
on device&rdquo; errors when the Boot2Docker volume fills up. There are two solutions
you can try.</p>

<h2 id="solution-1-add-the-diskimage-property-in-boot2docker-profile">Solution 1: Add the <code>DiskSize</code> property in the boot2docker profile</h2>

<p>The <code>boot2docker</code> command reads its configuration from <code>$BOOT2DOCKER_PROFILE</code> if set, or else from <code>$BOOT2DOCKER_DIR/profile</code> or <code>$HOME/.boot2docker/profile</code> (on Windows this is <code>%USERPROFILE%/.boot2docker/profile</code>).</p>
<ol>
<li><p>To view the existing configuration, use the <code>boot2docker config</code> command.</p>

<pre><code>$ boot2docker config
# boot2docker profile filename: /Users/mary/.boot2docker/profile
Init = false
Verbose = false
Driver = &quot;virtualbox&quot;
Clobber = true
ForceUpgradeDownload = false
SSH = &quot;ssh&quot;
SSHGen = &quot;ssh-keygen&quot;
SSHKey = &quot;/Users/mary/.ssh/id_boot2docker&quot;
VM = &quot;boot2docker-vm&quot;
Dir = &quot;/Users/mary/.boot2docker&quot;
ISOURL = &quot;https://api.github.com/repos/boot2docker/boot2docker/releases&quot;
ISO = &quot;/Users/mary/.boot2docker/boot2docker.iso&quot;
DiskSize = 20000
Memory = 2048
CPUs = 8
SSHPort = 2022
DockerPort = 0
HostIP = &quot;192.168.59.3&quot;
DHCPIP = &quot;192.168.59.99&quot;
NetMask = [255, 255, 255, 0]
LowerIP = &quot;192.168.59.103&quot;
UpperIP = &quot;192.168.59.254&quot;
DHCPEnabled = true
Serial = false
SerialFile = &quot;/Users/mary/.boot2docker/boot2docker-vm.sock&quot;
Waittime = 300
Retries = 75
</code></pre></li>
</ol>
<p>The configuration shows you where <code>boot2docker</code> is looking for the <code>profile</code> file. It also outputs the settings that are in use.</p>

<ol>
<li><p>Initialise a default profile to customize by using the <code>boot2docker config &gt; ~/.boot2docker/profile</code> command.</p></li>

<li><p>Add the following lines to <code>$HOME/.boot2docker/profile</code>:</p>

<pre><code># Disk image size in MB
DiskSize = 50000
</code></pre></li>

<li><p>Run the following sequence of commands to restart Boot2Docker with the new settings.</p>

<pre><code>$ boot2docker poweroff
$ boot2docker destroy
$ boot2docker init
$ boot2docker up
</code></pre></li>
</ol>
<h2 id="solution-2-increase-the-size-of-boot2docker-volume">Solution 2: Increase the size of the Boot2Docker volume</h2>

<p>This solution increases the volume size by first cloning it, then resizing it
using a disk partitioning tool. We recommend
<a href="http://gparted.sourceforge.net/download.php/index.php">GParted</a>. The tool comes
as a bootable ISO, is a free download, and works well with VirtualBox.</p>

<ol>
<li>Stop Boot2Docker</li>
</ol>

<p>Issue the command to stop the Boot2Docker VM on the command line:</p>

<pre><code> $ boot2docker stop
</code></pre>

<ol>
<li>Clone the VMDK image to a VDI image</li>
</ol>

<p>Boot2Docker ships with a VMDK image, which can&rsquo;t be resized by VirtualBox&rsquo;s
native tools. We will instead create a VDI volume and clone the VMDK volume to
it.</p>
<ol>
<li><p>Using the command line VirtualBox tools, clone the VMDK image to a VDI image:</p>

<pre><code>$ vboxmanage clonehd /full/path/to/boot2docker-hd.vmdk /full/path/to/&lt;newVDIimage&gt;.vdi --format VDI --variant Standard
</code></pre></li>

<li><p>Resize the VDI volume</p></li>
</ol>

<p>Choose a size that will be appropriate for your needs. If you&rsquo;re spinning up a
lot of containers, or your containers are particularly large, larger will be
better:</p>

<pre><code> $ vboxmanage modifyhd /full/path/to/&lt;newVDIimage&gt;.vdi --resize &lt;size in MB&gt;
</code></pre>

<ol>
<li>Download a disk partitioning tool ISO</li>
</ol>

<p>To resize the volume, we&rsquo;ll use <a href="http://gparted.sourceforge.net/download.php/">GParted</a>.
Once you&rsquo;ve downloaded the tool, add the ISO to the Boot2Docker VM IDE bus.
You might need to create the bus before you can add the ISO.</p>
<blockquote>
<p><strong>Note:</strong>
It&rsquo;s important that you choose a partitioning tool that is available as an ISO so
that the Boot2Docker VM can be booted with it.</p>
</blockquote>

<p><table>
<tr>
<td><img src="http://localhost/articles/b2d_volume_images/add_new_controller.png"><br><br></td>
</tr>
<tr>
<td><img src="http://localhost/articles/b2d_volume_images/add_cd.png"></td>
</tr>
</table></p>

<ol>
<li>Add the new VDI image</li>
</ol>

<p>In the settings for the Boot2Docker image in VirtualBox, remove the VMDK image
from the SATA controller and add the VDI image.</p>

<p><img src="http://localhost/articles/b2d_volume_images/add_volume.png"></p>
<ol>
<li><p>Verify the boot order</p>

<p>In the <strong>System</strong> settings for the Boot2Docker VM, make sure that <strong>CD/DVD</strong> is
at the top of the <strong>Boot Order</strong> list.</p>

<p><img src="http://localhost/articles/b2d_volume_images/boot_order.png"></p></li>

<li><p>Boot to the disk partitioning ISO</p></li>
</ol>

<p>Manually start the Boot2Docker VM in VirtualBox; the disk partitioning ISO
should start up. Using GParted, choose the <strong>GParted Live (default settings)</strong>
option. Choose the default keyboard, language, and XWindows settings, and the
GParted tool will start up and display the VDI volume you created. Right-click
on the VDI and choose <strong>Resize/Move</strong>.</p>

<p><img src="http://localhost/articles/b2d_volume_images/gparted.png"></p>

<ol>
<li><p>Drag the slider representing the volume to the maximum available size.</p></li>

<li><p>Click <strong>Resize/Move</strong> followed by <strong>Apply</strong>.</p></li>
</ol>

<p><img src="http://localhost/articles/b2d_volume_images/gparted2.png"></p>

<ol>
<li><p>Quit GParted and shut down the VM.</p></li>

<li><p>Remove the GParted ISO from the IDE controller for the Boot2Docker VM in
VirtualBox.</p></li>

<li><p>Start the Boot2Docker VM</p></li>
</ol>

<p>Fire up the Boot2Docker VM manually in VirtualBox. The VM should log in
automatically, but if it doesn&rsquo;t, the credentials are <code>docker/tcuser</code>. Using
the <code>df -h</code> command, verify that your changes took effect.</p>

<p><img src="http://localhost/articles/b2d_volume_images/verify.png"></p>

<p>You&rsquo;re done!</p>
</description>
</item>
<item>
<title>Get started with containers</title>
<link>http://localhost/articles/basics/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://localhost/articles/basics/</guid>
<description>

<h1 id="get-started-with-containers">Get started with containers</h1>

<p>This guide assumes you have a working installation of Docker. To check
your Docker install, run the following command:</p>

<pre><code># Check that you have a working install
$ docker info
</code></pre>

<p>If you get <code>docker: command not found</code> or something like
<code>/var/lib/docker/repositories: permission denied</code>, you may have an
incomplete Docker installation or insufficient privileges to access
Docker on your machine.</p>

<p>Additionally, depending on your Docker system configuration, you may be required
to preface each <code>docker</code> command with <code>sudo</code>. To avoid having to use <code>sudo</code> with
the <code>docker</code> command, your system administrator can create a Unix group called
<code>docker</code> and add users to it.</p>

<p>For more information about installing Docker or <code>sudo</code> configuration, refer to
the <a href="http://localhost/articles/articles/installation">installation</a> instructions for your operating system.</p>
<h2 id="download-a-pre-built-image">Download a pre-built image</h2>

<pre><code># Download an ubuntu image
$ docker pull ubuntu
</code></pre>

<p>This will find the <code>ubuntu</code> image by name on
<a href="http://localhost/userguide/dockerrepos/#searching-for-images"><em>Docker Hub</em></a>
and download it from <a href="https://hub.docker.com">Docker Hub</a> to a local
image cache.</p>

<blockquote>
<p><strong>Note</strong>:
When the image has successfully downloaded, you will see a 12-character
hash, <code>539c0211cd76: Download complete</code>, which is the
short form of the image ID. These short image IDs are the first 12
characters of the full image ID, which can be found using
<code>docker inspect</code> or <code>docker images --no-trunc=true</code>.</p>

<p><strong>Note:</strong> if you are using a remote Docker daemon, such as Boot2Docker,
then <em>do not</em> type the <code>sudo</code> before the <code>docker</code> commands shown in the
documentation&rsquo;s examples.</p>
</blockquote>
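<p>For instance, the short form is just the first 12 characters of the full ID (the full ID below is a made-up example):</p>

```shell
# Truncate a full image ID to its 12-character short form
FULL_ID=539c0211cd76f58a3ad1fad4bc0b3b0ab47c7e14a5c4a41b7c1b40a1693fd761
echo "$FULL_ID" | cut -c1-12
# prints: 539c0211cd76
```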
<h2 id="running-an-interactive-shell">Running an interactive shell</h2>

<pre><code># Run an interactive shell in the ubuntu image,
# allocate a tty, attach stdin and stdout
# To detach the tty without exiting the shell,
# use the escape sequence Ctrl-p + Ctrl-q
# note: This will continue to exist in a stopped state once exited (see &quot;docker ps -a&quot;)
$ docker run -i -t ubuntu /bin/bash
</code></pre>

<h2 id="bind-docker-to-another-host-port-or-a-unix-socket">Bind Docker to another host/port or a Unix socket</h2>

<blockquote>
<p><strong>Warning</strong>:
Changing the default <code>docker</code> daemon binding to a
TCP port or Unix <em>docker</em> user group will increase your security risks
by allowing non-root users to gain <em>root</em> access on the host. Make sure
you control access to <code>docker</code>. If you are binding
to a TCP port, anyone with access to that port has full Docker access;
so it is not advisable on an open network.</p>
</blockquote>

<p>With <code>-H</code> it is possible to make the Docker daemon listen on a
specific IP and port. By default, it will listen on
<code>unix:///var/run/docker.sock</code> to allow only local connections by the
<em>root</em> user. You <em>could</em> set it to <code>0.0.0.0:2375</code> or a specific host IP
to give access to everybody, but that is <strong>not recommended</strong> because
then it is trivial for someone to gain root access to the host where the
daemon is running.</p>

<p>Similarly, the Docker client can use <code>-H</code> to connect to a custom port.</p>

<p><code>-H</code> accepts host and port assignment in the following format:</p>

<pre><code>tcp://[host][:port] or unix://path
</code></pre>
<p>For example:</p>

<ul>
<li><code>tcp://host:2375</code> -&gt; TCP connection on
host:2375</li>
<li><code>unix://path/to/socket</code> -&gt; Unix socket located
at <code>path/to/socket</code></li>
</ul>

<p><code>-H</code>, when empty, will default to the same value as
when no <code>-H</code> was passed in.</p>

<p><code>-H</code> also accepts a short form for TCP bindings:</p>

<pre><code>host[:port] or :port
</code></pre>

<p>Run Docker in daemon mode:</p>

<pre><code>$ sudo &lt;path to&gt;/docker -H 0.0.0.0:5555 -d &amp;
</code></pre>

<p>Download an <code>ubuntu</code> image:</p>

<pre><code>$ docker -H :5555 pull ubuntu
</code></pre>

<p>You can use multiple <code>-H</code> options, for example, if you want to listen on both
TCP and a Unix socket:</p>

<pre><code># Run docker in daemon mode
$ sudo &lt;path to&gt;/docker -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock -d &amp;
# Download an ubuntu image, use default Unix socket
$ docker pull ubuntu
# OR use the TCP port
$ docker -H tcp://127.0.0.1:2375 pull ubuntu
</code></pre>
<h2 id="starting-a-long-running-worker-process">Starting a long-running worker process</h2>

<pre><code># Start a very useful long-running process
$ JOB=$(docker run -d ubuntu /bin/sh -c &quot;while true; do echo Hello world; sleep 1; done&quot;)

# Collect the output of the job so far
$ docker logs $JOB

# Kill the job
$ docker kill $JOB
</code></pre>

<h2 id="listing-containers">Listing containers</h2>

<pre><code>$ docker ps # Lists only running containers
$ docker ps -a # Lists all containers
</code></pre>

<h2 id="controlling-containers">Controlling containers</h2>

<pre><code># Start a new container
$ JOB=$(docker run -d ubuntu /bin/sh -c &quot;while true; do echo Hello world; sleep 1; done&quot;)

# Stop the container
$ docker stop $JOB

# Start the container
$ docker start $JOB

# Restart the container
$ docker restart $JOB

# SIGKILL a container
$ docker kill $JOB

# Remove a container
$ docker stop $JOB # Container must be stopped to remove it
$ docker rm $JOB
</code></pre>

<h2 id="bind-a-service-on-a-tcp-port">Bind a service on a TCP port</h2>

<pre><code># Bind port 4444 of this container, and tell netcat to listen on it
$ JOB=$(docker run -d -p 4444 ubuntu:12.10 /bin/nc -l 4444)

# Which public port is NATed to my container?
$ PORT=$(docker port $JOB 4444 | awk -F: '{ print $2 }')

# Connect to the public port
$ echo hello world | nc 127.0.0.1 $PORT

# Verify that the network connection worked
$ echo &quot;Daemon received: $(docker logs $JOB)&quot;
</code></pre>
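<p>The <code>awk -F:</code> step above simply splits the <code>docker port</code> output on the colon and keeps the port number; for example (the <code>0.0.0.0:49153</code> value is a typical sample, not real output from your host):</p>

```shell
# docker port typically prints an address:port pair; extract the port field
echo '0.0.0.0:49153' | awk -F: '{ print $2 }'
# prints: 49153
```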
<h2 id="committing-saving-a-container-state">Committing (saving) a container state</h2>

<p>Save your container&rsquo;s state to an image, so the state can be
re-used.</p>

<p>When you commit your container, only the differences between the image
the container was created from and the current state of the container
will be stored (as a diff). See which images you already have using the
<code>docker images</code> command.</p>

<pre><code># Commit your container to a new named image
$ docker commit &lt;container_id&gt; &lt;some_name&gt;

# List your images
$ docker images
</code></pre>

<p>You now have an image state from which you can create new instances.</p>

<p>Read more about <a href="http://localhost/userguide/dockerrepos"><em>Share Images via
Repositories</em></a> or
continue to the complete <a href="http://localhost/articles/articles/reference/commandline/cli"><em>Command
Line</em></a>.</p>
</description>
</item>
<item>
<title>Create a base image</title>
<link>http://localhost/articles/baseimages/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://localhost/articles/baseimages/</guid>
<description>

<h1 id="create-a-base-image">Create a base image</h1>

<p>So you want to create your own <a href="http://localhost/terms/image/#base-image"><em>Base Image</em></a>? Great!</p>

<p>The specific process will depend heavily on the Linux distribution you
want to package. We have some examples below, and you are encouraged to
submit pull requests to contribute new ones.</p>

<h2 id="create-a-full-image-using-tar">Create a full image using tar</h2>

<p>In general, you&rsquo;ll want to start with a working machine that is running
the distribution you&rsquo;d like to package as a base image, though that is
not required for some tools like Debian&rsquo;s
<a href="https://wiki.debian.org/Debootstrap">Debootstrap</a>, which you can also
use to build Ubuntu images.</p>

<p>Creating an Ubuntu base image can be as simple as this:</p>

<pre><code>$ sudo debootstrap raring raring &gt; /dev/null
$ sudo tar -C raring -c . | docker import - raring
a29c15f1bf7a
$ docker run raring cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=13.04
DISTRIB_CODENAME=raring
DISTRIB_DESCRIPTION=&quot;Ubuntu 13.04&quot;
</code></pre>

<p>There are more example scripts for creating base images in the Docker
GitHub repo:</p>

<ul>
<li><a href="https://github.com/docker/docker/blob/master/contrib/mkimage-busybox.sh">BusyBox</a></li>
<li>CentOS / Scientific Linux CERN (SLC) <a href="https://github.com/docker/docker/blob/master/contrib/mkimage-rinse.sh">on Debian/Ubuntu</a> or
<a href="https://github.com/docker/docker/blob/master/contrib/mkimage-yum.sh">on CentOS/RHEL/SLC/etc.</a></li>
<li><a href="https://github.com/docker/docker/blob/master/contrib/mkimage-debootstrap.sh">Debian / Ubuntu</a></li>
</ul>
<h2 id="creating-a-simple-base-image-using-scratch">Creating a simple base image using <code>scratch</code></h2>

<p>There is a special repository in the Docker registry called <code>scratch</code>, which
was created using an empty tar file:</p>

<pre><code>$ tar cv --files-from /dev/null | docker import - scratch
</code></pre>

<p>which you can <code>docker pull</code>. You can then use that
image as the base for your new minimal containers&rsquo; <code>FROM</code>:</p>

<pre><code>FROM scratch
COPY true-asm /true
CMD [&quot;/true&quot;]
</code></pre>

<p>The <code>Dockerfile</code> above is from an extremely minimal image, <a href="https://github.com/tianon/dockerfiles/tree/master/true">tianon/true</a>.</p>

<h2 id="more-resources">More resources</h2>

<p>There are lots more resources available to help you write your <code>Dockerfile</code>.</p>

<ul>
<li>There&rsquo;s a <a href="http://localhost/articles/articles/reference/builder/">complete guide to all the instructions</a> available for use in a <code>Dockerfile</code> in the reference section.</li>
<li>To help you write a clear, readable, maintainable <code>Dockerfile</code>, we&rsquo;ve also
written a <a href="http://localhost/articles/articles/articles/dockerfile_best-practices"><code>Dockerfile</code> Best Practices guide</a>.</li>
<li>If your goal is to create a new Official Repository, be sure to read up on Docker&rsquo;s <a href="http://localhost/articles/articles/docker-hub/official_repos/">Official Repositories</a>.</li>
</ul>
</description>
</item>
<item>
<title>Using Chef</title>
<link>http://localhost/articles/chef/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://localhost/articles/chef/</guid>
<description>

<h1 id="using-chef">Using Chef</h1>

<blockquote>
<p><strong>Note</strong>:
Please note this is a community-contributed installation path. The only
official installation path is the
<a href="http://localhost/articles/articles/installation/ubuntulinux"><em>Ubuntu</em></a> installation
path. This version may sometimes be out of date.</p>
</blockquote>

<h2 id="requirements">Requirements</h2>

<p>To use this guide you&rsquo;ll need a working installation of
<a href="http://www.getchef.com/">Chef</a>. This cookbook supports a variety of
operating systems.</p>

<h2 id="installation">Installation</h2>

<p>The cookbook is available on the <a href="http://community.opscode.com/cookbooks/docker">Chef Community
Site</a> and can be
installed using your favorite cookbook dependency manager.</p>

<p>The source can be found on
<a href="https://github.com/bflad/chef-docker">GitHub</a>.</p>

<h2 id="usage">Usage</h2>

<p>The cookbook provides recipes for installing Docker, configuring init
for Docker, and resources for managing images and containers. It
supports almost all Docker functionality.</p>
<h3 id="installation-1">Installation</h3>

<pre><code>include_recipe 'docker'
</code></pre>

<h3 id="images">Images</h3>

<p>The next step is to pull a Docker image. For this, we have a resource:</p>

<pre><code>docker_image 'samalba/docker-registry'
</code></pre>

<p>This is equivalent to running:</p>

<pre><code>$ docker pull samalba/docker-registry
</code></pre>

<p>There are attributes available to control how long the cookbook will
allow for downloading (default: 5 minutes).</p>

<p>To remove images you no longer need:</p>

<pre><code>docker_image 'samalba/docker-registry' do
action :remove
end
</code></pre>

<h3 id="containers">Containers</h3>

<p>Now that you have an image, you can run commands within a container
managed by Docker:</p>

<pre><code>docker_container 'samalba/docker-registry' do
detach true
port '5000:5000'
env 'SETTINGS_FLAVOR=local'
volume '/mnt/docker:/docker-storage'
end
</code></pre>

<p>This is equivalent to running the following command, but under upstart:</p>

<pre><code>$ docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry
</code></pre>

<p>The resources will accept a single string or an array of values for any
Docker flags that allow multiple values.</p>
</description>
</item>
<item>
<title>Using certificates for repository client verification</title>
<link>http://localhost/articles/certificates/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>http://localhost/articles/certificates/</guid>
<description>

<h1 id="using-certificates-for-repository-client-verification">Using certificates for repository client verification</h1>

<p>In <a href="http://localhost/articles/articles/articles/https">Running Docker with HTTPS</a>, you learned that, by default,
Docker runs via a non-networked Unix socket and TLS must be enabled in order
to have the Docker client and the daemon communicate securely over HTTPS.</p>

<p>Now, you will see how to allow the Docker registry (i.e., <em>a server</em>) to
verify that the Docker daemon (i.e., <em>a client</em>) has the right to access the
images being hosted, using <em>certificate-based client-server authentication</em>.</p>

<p>We will show you how to install a Certificate Authority (CA) root certificate
for the registry and how to set the client TLS certificate for verification.</p>

<h2 id="understanding-the-configuration">Understanding the configuration</h2>

<p>A custom certificate is configured by creating a directory under
<code>/etc/docker/certs.d</code> using the same name as the registry&rsquo;s hostname (e.g.,
<code>localhost</code>). All <code>*.crt</code> files in this directory are added as CA roots.</p>

<blockquote>
<p><strong>Note:</strong>
In the absence of any root certificate authorities, Docker
will use the system default (i.e., the host&rsquo;s root CA set).</p>
</blockquote>

<p>The presence of one or more <code>&lt;filename&gt;.key/cert</code> pairs indicates to Docker
that there are custom certificates required for access to the desired
repository.</p>

<blockquote>
<p><strong>Note:</strong>
If there are multiple certificates, each will be tried in alphabetical
order. If there is an authentication error (e.g., 403, 404, 5xx), Docker
will continue to try with the next certificate.</p>
</blockquote>

<p>Our example is set up like this:</p>

<pre><code>/etc/docker/certs.d/         &lt;-- Certificate directory
└── localhost                &lt;-- Hostname
    ├── client.cert          &lt;-- Client certificate
    ├── client.key           &lt;-- Client key
    └── localhost.crt        &lt;-- Registry certificate
</code></pre>
<h2 id="creating-the-client-certificates">Creating the client certificates</h2>
|
||
|
||
<p>You will use OpenSSL&rsquo;s <code>genrsa</code> and <code>req</code> commands to first generate an RSA
|
||
key and then use the key to create the certificate.</p>
|
||
|
||
<pre><code>$ openssl genrsa -out client.key 1024
|
||
$ openssl req -new -x509 -text -key client.key -out client.cert
|
||
</code></pre>
|
||
|
||
<blockquote>
|
||
<p><strong>Warning:</strong>:
|
||
Using TLS and managing a CA is an advanced topic.
|
||
You should be familiar with OpenSSL, x509, and TLS before
|
||
attempting to use them in production.</p>
|
||
|
||
<p><strong>Warning:</strong>
|
||
These TLS commands will only generate a working set of certificates on Linux.
|
||
The version of OpenSSL in Mac OS X is incompatible with the type of
|
||
certificate Docker requires.</p>
|
||
</blockquote>
|
||
|
||
<h2 id="testing-the-verification-setup">Testing the verification setup</h2>
|
||
|
||
<p>You can test this setup by using Apache to host a Docker registry.
|
||
For this purpose, you can copy a registry tree (containing images) inside
|
||
the Apache root.</p>
|
||
|
||
<blockquote>
|
||
<p><strong>Note:</strong>
|
||
You can find such an example <a href="http://people.gnome.org/~alexl/v1.tar.gz">here</a> - which contains the busybox image.</p>
|
||
</blockquote>
|
||
|
||
<p>Once you set up the registry, you can use the following Apache configuration
|
||
to implement certificate-based protection.</p>
|
||
|
||
<pre><code># This must be in the root context, otherwise it causes a re-negotiation
|
||
# which is not supported by the TLS implementation in go
|
||
SSLVerifyClient optional_no_ca
|
||
|
||
&lt;Location /v1&gt;
|
||
Action cert-protected /cgi-bin/cert.cgi
|
||
SetHandler cert-protected
|
||
|
||
Header set x-docker-registry-version &quot;0.6.2&quot;
|
||
SetEnvIf Host (.*) custom_host=$1
|
||
Header set X-Docker-Endpoints &quot;%{custom_host}e&quot;
|
||
&lt;/Location&gt;
|
||
</code></pre>
|
||
|
||
<p>Save the above content as <code>/etc/httpd/conf.d/registry.conf</code>, and
|
||
continue with creating a <code>cert.cgi</code> file under <code>/var/www/cgi-bin/</code>.</p>
|
||
|
||
<pre><code>#!/bin/bash
|
||
if [ &quot;$HTTPS&quot; != &quot;on&quot; ]; then
|
||
echo &quot;Status: 403 Not using SSL&quot;
|
||
echo &quot;x-docker-registry-version: 0.6.2&quot;
|
||
echo
|
||
exit 0
|
||
fi
|
||
if [ &quot;$SSL_CLIENT_VERIFY&quot; == &quot;NONE&quot; ]; then
|
||
echo &quot;Status: 403 Client certificate invalid&quot;
|
||
echo &quot;x-docker-registry-version: 0.6.2&quot;
|
||
echo
|
||
exit 0
|
||
fi
|
||
echo &quot;Content-length: $(stat --printf='%s' $PATH_TRANSLATED)&quot;
|
||
echo &quot;x-docker-registry-version: 0.6.2&quot;
|
||
echo &quot;X-Docker-Endpoints: $SERVER_NAME&quot;
|
||
echo &quot;X-Docker-Size: 0&quot;
|
||
echo
|
||
|
||
cat $PATH_TRANSLATED
|
||
</code></pre>
|
||
|
||
<p>This CGI script will ensure that all requests to <code>/v1</code> <em>without</em> a valid
|
||
certificate will be returned with a <code>403</code> (i.e., HTTP forbidden) error.</p>
|
||
</description>
|
||
</item>
|
||
|
||
<item>
<title>Process management with CFEngine</title>
<link>http://localhost/articles/cfengine_process_management/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/articles/cfengine_process_management/</guid>
<description>

<h1 id="process-management-with-cfengine">Process management with CFEngine</h1>

<p>Create Docker containers with managed processes.</p>

<p>Docker monitors one process in each running container and the container
lives or dies with that process. By introducing CFEngine inside Docker
containers, we can alleviate a few of the issues that may arise:</p>

<ul>
<li>It is possible to easily start multiple processes within a
container, all of which will be managed automatically, with the
normal <code>docker run</code> command.</li>
<li>If a managed process dies or crashes, CFEngine will start it again
within 1 minute.</li>
<li>The container itself will live as long as the CFEngine scheduling
daemon (cf-execd) lives. With CFEngine, we are able to decouple the
life of the container from the uptime of the service it provides.</li>
</ul>

<h2 id="how-it-works">How it works</h2>

<p>CFEngine, together with the cfe-docker integration policies, is
installed as part of the Dockerfile. This builds CFEngine into our
Docker image.</p>

<p>The Dockerfile&rsquo;s <code>ENTRYPOINT</code> takes an arbitrary
number of commands (with any desired arguments) as parameters. When we
run the Docker container these parameters get written to CFEngine
policies and CFEngine takes over to ensure that the desired processes
are running in the container.</p>

<p>CFEngine scans the process table for the <code>basename</code> of the commands given
to the <code>ENTRYPOINT</code> and runs the command to start the process if the <code>basename</code>
is not found. For example, if we start the container with
<code>docker run &quot;/path/to/my/application parameters&quot;</code>, CFEngine will look for a
process named <code>application</code> and run the command. If an entry for <code>application</code>
is not found in the process table at any point in time, CFEngine will execute
<code>/path/to/my/application parameters</code> to start the application once again. The
check on the process table happens every minute.</p>

<p>Note that it is therefore important that the command to start your
application leaves a process with the basename of the command. This can
be made more flexible by making some minor adjustments to the CFEngine
policies, if desired.</p>

<h2 id="usage">Usage</h2>

<p>This example assumes you have Docker installed and working. We will
install and manage <code>apache2</code> and <code>sshd</code>
in a single container.</p>

<p>There are three steps:</p>

<ol>
<li>Install CFEngine into the container.</li>
<li>Copy the CFEngine Docker process management policy into the
containerized CFEngine installation.</li>
<li>Start your application processes as part of the <code>docker run</code> command.</li>
</ol>

<h3 id="building-the-image">Building the image</h3>

<p>The first two steps can be done as part of a Dockerfile, as follows.</p>

<pre><code>FROM ubuntu
MAINTAINER Eystein Måløy Stenberg &lt;eytein.stenberg@gmail.com&gt;

RUN apt-get update &amp;&amp; apt-get install -y wget lsb-release unzip ca-certificates

# install latest CFEngine
RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -
RUN echo &quot;deb http://cfengine.com/pub/apt $(lsb_release -cs) main&quot; &gt; /etc/apt/sources.list.d/cfengine-community.list
RUN apt-get update &amp;&amp; apt-get install -y cfengine-community

# install cfe-docker process management policy
RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ &amp;&amp; unzip /tmp/master.zip -d /tmp/
RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/
RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/
RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip

# apache2 and openssh are just for testing purposes, install your own apps here
RUN apt-get update &amp;&amp; apt-get install -y openssh-server apache2
RUN mkdir -p /var/run/sshd
RUN echo &quot;root:password&quot; | chpasswd  # need a password for ssh

ENTRYPOINT [&quot;/var/cfengine/bin/docker_processes_run.sh&quot;]
</code></pre>

<p>By saving this file as <code>Dockerfile</code> in a working directory, you can then build
your image with the <code>docker build</code> command, e.g.,
<code>docker build -t managed_image .</code>.</p>

<h3 id="testing-the-container">Testing the container</h3>

<p>Start the container with <code>apache2</code> and <code>sshd</code> running and managed, forwarding
a port to our SSH instance:</p>

<pre><code>$ docker run -p 127.0.0.1:222:22 -d managed_image &quot;/usr/sbin/sshd&quot; &quot;/etc/init.d/apache2 start&quot;
</code></pre>

<p>We now clearly see one of the benefits of the cfe-docker integration: it
allows us to start several processes as part of a normal <code>docker run</code> command.</p>

<p>We can now log in to our new container and see that both <code>apache2</code> and <code>sshd</code>
are running. We have set the root password to &ldquo;password&rdquo; in the Dockerfile
above and can use that to log in with ssh:</p>

<pre><code>ssh -p222 root@127.0.0.1

ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 07:48 ?        00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start
root        18     1  0 07:48 ?        00:00:00 /var/cfengine/bin/cf-execd -F
root        20     1  0 07:48 ?        00:00:00 /usr/sbin/sshd
root        32     1  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
www-data    34    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
www-data    35    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
www-data    36    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
root        93    20  0 07:48 ?        00:00:00 sshd: root@pts/0
root       105    93  0 07:48 pts/0    00:00:00 -bash
root       112   105  0 07:49 pts/0    00:00:00 ps -ef
</code></pre>

<p>If we stop <code>apache2</code>, it will be started again within a minute by
CFEngine.</p>

<pre><code>service apache2 status
 Apache2 is running (pid 32).
service apache2 stop
 * Stopping web server apache2 ... waiting    [ OK ]
service apache2 status
 Apache2 is NOT running.
# ... wait up to 1 minute...
service apache2 status
 Apache2 is running (pid 173).
</code></pre>

<h2 id="adapting-to-your-applications">Adapting to your applications</h2>

<p>To make sure your applications get managed in the same manner, there are
just two things you need to adjust from the above example:</p>

<ul>
<li>In the Dockerfile used above, install your applications instead of
<code>apache2</code> and <code>sshd</code>.</li>
<li>When you start the container with <code>docker run</code>,
specify the command line arguments to your applications rather than
<code>apache2</code> and <code>sshd</code>.</li>
</ul>
</description>
</item>

<item>
<title>Best practices for writing Dockerfiles</title>
<link>http://localhost/articles/dockerfile_best-practices/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/articles/dockerfile_best-practices/</guid>
<description>

<h1 id="best-practices-for-writing-dockerfiles">Best practices for writing Dockerfiles</h1>

<h2 id="overview">Overview</h2>

<p>Docker can build images automatically by reading the instructions from a
<code>Dockerfile</code>, a text file that contains all the commands, in order, needed to
build a given image. <code>Dockerfile</code>s adhere to a specific format and use a
specific set of instructions. You can learn the basics on the
<a href="https://docs.docker.com/reference/builder/">Dockerfile Reference</a> page. If
you’re new to writing <code>Dockerfile</code>s, you should start there.</p>

<p>This document covers the best practices and methods recommended by Docker,
Inc. and the Docker community for creating easy-to-use, effective
<code>Dockerfile</code>s. We strongly suggest you follow these recommendations (in fact,
if you’re creating an Official Image, you <em>must</em> adhere to these practices).</p>

<p>You can see many of these practices and recommendations in action in the <a href="https://github.com/docker-library/buildpack-deps/blob/master/jessie/Dockerfile">buildpack-deps <code>Dockerfile</code></a>.</p>

<blockquote>
<p>Note: for more detailed explanations of any of the Dockerfile commands
mentioned here, visit the <a href="https://docs.docker.com/reference/builder/">Dockerfile Reference</a> page.</p>
</blockquote>

<h2 id="general-guidelines-and-recommendations">General guidelines and recommendations</h2>

<h3 id="containers-should-be-ephemeral">Containers should be ephemeral</h3>

<p>The container produced by the image your <code>Dockerfile</code> defines should be as
ephemeral as possible. By “ephemeral,” we mean that it can be stopped and
destroyed and a new one built and put in place with an absolute minimum of
set-up and configuration.</p>

<h3 id="use-a-dockerignore-file">Use a .dockerignore file</h3>

<p>In most cases, it&rsquo;s best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building the Dockerfile. To
increase the build&rsquo;s performance, you can exclude files and directories by
adding a <code>.dockerignore</code> file to that directory as well. This file supports
exclusion patterns similar to <code>.gitignore</code> files. For information on creating one,
see the <a href="http://localhost/reference/builder/#dockerignore-file">.dockerignore file</a>.</p>
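
<p>As a minimal sketch, a <code>.dockerignore</code> that keeps version-control data and
local scratch files out of the build context might look like this (the
entries are illustrative, not required names):</p>

<pre><code>.git
*.log
tmp/
</code></pre>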

<h3 id="avoid-installing-unnecessary-packages">Avoid installing unnecessary packages</h3>

<p>In order to reduce complexity, dependencies, file sizes, and build times, you
should avoid installing extra or unnecessary packages just because they
might be “nice to have.” For example, you don’t need to include a text editor
in a database image.</p>

<h3 id="run-only-one-process-per-container">Run only one process per container</h3>

<p>In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If one service depends on
another service, make use of <a href="https://docs.docker.com/userguide/dockerlinks/">container linking</a>.</p>

<h3 id="minimize-the-number-of-layers">Minimize the number of layers</h3>

<p>You need to find the balance between the readability (and thus long-term
maintainability) of the <code>Dockerfile</code> and minimizing the number of layers it
uses. Be strategic and cautious about the number of layers you use.</p>

<h3 id="sort-multi-line-arguments">Sort multi-line arguments</h3>

<p>Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This will help you avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (<code>\</code>) helps as well.</p>

<p>Here’s an example from the <a href="https://github.com/docker-library/buildpack-deps"><code>buildpack-deps</code> image</a>:</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y \
    bzr \
    cvs \
    git \
    mercurial \
    subversion
</code></pre>

<h3 id="build-cache">Build cache</h3>

<p>During the process of building an image, Docker will step through the
instructions in your <code>Dockerfile</code>, executing each in the order specified.
As each instruction is examined, Docker will look for an existing image in its
cache that it can reuse, rather than creating a new (duplicate) image.
If you do not want to use the cache at all, you can use the <code>--no-cache=true</code>
option on the <code>docker build</code> command.</p>

<p>However, if you do let Docker use its cache, then it is very important to
understand when it will, and will not, find a matching image. The basic rules
that Docker follows are outlined below:</p>

<ul>
<li><p>Starting with a base image that is already in the cache, the next
instruction is compared against all child images derived from that base
image to see if one of them was built using the exact same instruction. If
not, the cache is invalidated.</p></li>

<li><p>In most cases, simply comparing the instruction in the <code>Dockerfile</code> with one
of the child images is sufficient. However, certain instructions require
a little more examination and explanation.</p></li>

<li><p>In the case of the <code>ADD</code> and <code>COPY</code> instructions, the contents of the file(s)
being put into the image are examined. Specifically, a checksum is done
of the file(s) and then that checksum is used during the cache lookup.
If anything has changed in the file(s), including their metadata,
then the cache is invalidated.</p></li>

<li><p>Aside from the <code>ADD</code> and <code>COPY</code> commands, cache checking will not look at the
files in the container to determine a cache match. For example, when processing
a <code>RUN apt-get -y update</code> command, the files updated in the container
will not be examined to determine if a cache hit exists. In that case, just
the command string itself will be used to find a match.</p></li>
</ul>

<p>Once the cache is invalidated, all subsequent <code>Dockerfile</code> commands will
generate new images and the cache will not be used.</p>

<h2 id="the-dockerfile-instructions">The Dockerfile instructions</h2>

<p>Below you&rsquo;ll find recommendations for the best way to write the
various instructions available for use in a <code>Dockerfile</code>.</p>

<h3 id="from-https-docs-docker-com-reference-builder-from"><a href="https://docs.docker.com/reference/builder/#from"><code>FROM</code></a></h3>

<p>Whenever possible, use current Official Repositories as the basis for your
image. We recommend the <a href="https://registry.hub.docker.com/_/debian/">Debian image</a>
since it’s very tightly controlled and kept extremely minimal (currently under
100 MB), while still being a full distribution.</p>

<h3 id="run-https-docs-docker-com-reference-builder-run"><a href="https://docs.docker.com/reference/builder/#run"><code>RUN</code></a></h3>

<p>As always, to make your <code>Dockerfile</code> more readable, understandable, and
maintainable, put long or complex <code>RUN</code> statements on multiple lines separated
with backslashes.</p>

<p>Probably the most common use-case for <code>RUN</code> is an application of <code>apt-get</code>.
When using <code>apt-get</code>, here are a few things to keep in mind:</p>

<ul>
<li><p>Don’t do <code>RUN apt-get update</code> on a single line. This will cause
caching issues if the referenced archive gets updated, which will make your
subsequent <code>apt-get install</code> fail without comment.</p></li>

<li><p>Avoid <code>RUN apt-get upgrade</code> or <code>dist-upgrade</code>, since many of the “essential”
packages from the base images will fail to upgrade inside an unprivileged
container. If a base package is out of date, you should contact its
maintainers. If you know there’s a particular package, <code>foo</code>, that needs to be
updated, use <code>apt-get install -y foo</code> and it will update automatically.</p></li>

<li><p>Do write instructions like:</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y package-bar package-foo package-baz
</code></pre></li>
</ul>

<p>Writing the instruction this way not only makes it easier to read
and maintain, but also, by including <code>apt-get update</code>, ensures that the cache
will naturally be busted and the latest versions will be installed with no
further coding or manual intervention required.</p>

<ul>
<li>Further natural cache-busting can be realized by version-pinning packages
(e.g., <code>package-foo=1.3.*</code>). This will force retrieval of that version
regardless of what’s in the cache.
Writing your <code>apt-get</code> code this way will greatly ease maintenance and reduce
failures due to unanticipated changes in required packages.</li>
</ul>

<h4 id="example">Example</h4>

<p>Below is a well-formed <code>RUN</code> instruction that demonstrates the above
recommendations. Note that the last package, <code>s3cmd</code>, specifies a version
<code>1.1.0*</code>. If the image previously used an older version, specifying the new one
will cause a cache bust of <code>apt-get update</code> and ensure the installation of
the new version (which in this case had a new, required feature).</p>

<pre><code>RUN apt-get update &amp;&amp; apt-get install -y \
    aufs-tools \
    automake \
    btrfs-tools \
    build-essential \
    curl \
    dpkg-sig \
    git \
    iptables \
    libapparmor-dev \
    libcap-dev \
    libsqlite3-dev \
    lxc=1.0* \
    mercurial \
    parallel \
    reprepro \
    ruby1.9.1 \
    ruby1.9.1-dev \
    s3cmd=1.1.0*
</code></pre>

<p>Writing the instruction this way also helps you avoid potential duplication of
a given package because it is much easier to read than an instruction like:</p>

<pre><code>RUN apt-get install -y package-foo &amp;&amp; apt-get install -y package-bar
</code></pre>

<h3 id="cmd-https-docs-docker-com-reference-builder-cmd"><a href="https://docs.docker.com/reference/builder/#cmd"><code>CMD</code></a></h3>

<p>The <code>CMD</code> instruction should be used to run the software contained by your
image, along with any arguments. <code>CMD</code> should almost always be used in the
form of <code>CMD [&quot;executable&quot;, &quot;param1&quot;, &quot;param2&quot;…]</code>. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
<code>CMD [&quot;apache2&quot;,&quot;-DFOREGROUND&quot;]</code>. Indeed, this form of the instruction is
recommended for any service-based image.</p>

<p>In most other cases, <code>CMD</code> should be given an interactive shell (bash, python,
perl, etc.), for example, <code>CMD [&quot;perl&quot;, &quot;-de0&quot;]</code>, <code>CMD [&quot;python&quot;]</code>, or
<code>CMD [&quot;php&quot;, &quot;-a&quot;]</code>. Using this form means that when you execute something like
<code>docker run -it python</code>, you’ll get dropped into a usable shell, ready to go.
<code>CMD</code> should rarely be used in the manner of <code>CMD [&quot;param&quot;, &quot;param&quot;]</code> in
conjunction with <a href="https://docs.docker.com/reference/builder/#entrypoint"><code>ENTRYPOINT</code></a>, unless
you and your expected users are already quite familiar with how <code>ENTRYPOINT</code>
works.</p>

<h3 id="expose-https-docs-docker-com-reference-builder-expose"><a href="https://docs.docker.com/reference/builder/#expose"><code>EXPOSE</code></a></h3>

<p>The <code>EXPOSE</code> instruction indicates the ports on which a container will listen
for connections. Consequently, you should use the common, traditional port for
your application. For example, an image containing the Apache web server would
use <code>EXPOSE 80</code>, while an image containing MongoDB would use <code>EXPOSE 27017</code>, and
so on.</p>

<p>For external access, your users can execute <code>docker run</code> with a flag indicating
how to map the specified port to the port of their choice.
For container linking, Docker provides environment variables for the path from
the recipient container back to the source (i.e., <code>MYSQL_PORT_3306_TCP</code>).</p>
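
<p>For example, a user of an image that declares <code>EXPOSE 80</code> could map that port
to port 8080 on the host with the <code>-p</code> flag (the image name
<code>my_web_image</code> here is illustrative):</p>

<pre><code>$ docker run -d -p 8080:80 my_web_image
</code></pre>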
|
||
|
||
<h3 id="env-https-docs-docker-com-reference-builder-env"><a href="https://docs.docker.com/reference/builder/#env"><code>ENV</code></a></h3>
|
||
|
||
<p>In order to make new software easier to run, you can use <code>ENV</code> to update the
|
||
<code>PATH</code> environment variable for the software your container installs. For
|
||
example, <code>ENV PATH /usr/local/nginx/bin:$PATH</code> will ensure that <code>CMD [“nginx”]</code>
|
||
just works.</p>
|
||
|
||
<p>The <code>ENV</code> instruction is also useful for providing required environment
|
||
variables specific to services you wish to containerize, such as Postgres’s
|
||
<code>PGDATA</code>.</p>
|
||
|
||
<p>Lastly, <code>ENV</code> can also be used to set commonly used version numbers so that
|
||
version bumps are easier to maintain, as seen in the following example:</p>
|
||
|
||
<pre><code>ENV PG_MAJOR 9.3
|
||
ENV PG_VERSION 9.3.4
|
||
RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgress &amp;&amp; …
|
||
ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH
|
||
</code></pre>
|
||
|
||
<p>Similar to having constant variables in a program (as opposed to hard-coding
|
||
values), this approach lets you change a single <code>ENV</code> instruction to
|
||
auto-magically bump the version of the software in your container.</p>
|
||
|
||
<h3 id="add-https-docs-docker-com-reference-builder-add-or-copy-https-docs-docker-com-reference-builder-copy"><a href="https://docs.docker.com/reference/builder/#add"><code>ADD</code></a> or <a href="https://docs.docker.com/reference/builder/#copy"><code>COPY</code></a></h3>
|
||
|
||
<p>Although <code>ADD</code> and <code>COPY</code> are functionally similar, generally speaking, <code>COPY</code>
|
||
is preferred. That’s because it’s more transparent than <code>ADD</code>. <code>COPY</code> only
|
||
supports the basic copying of local files into the container, while <code>ADD</code> has
|
||
some features (like local-only tar extraction and remote URL support) that are
|
||
not immediately obvious. Consequently, the best use for <code>ADD</code> is local tar file
|
||
auto-extraction into the image, as in <code>ADD rootfs.tar.xz /</code>.</p>
|
||
|
||
<p>If you have multiple <code>Dockerfile</code> steps that use different files from your
|
||
context, <code>COPY</code> them individually, rather than all at once. This will ensure that
|
||
each step&rsquo;s build cache is only invalidated (forcing the step to be re-run) if the
|
||
specifically required files change.</p>
|
||
|
||
<p>For example:</p>
|
||
|
||
<pre><code>COPY requirements.txt /tmp/
|
||
RUN pip install /tmp/requirements.txt
|
||
COPY . /tmp/
|
||
</code></pre>
|
||
|
||
<p>Results in fewer cache invalidations for the <code>RUN</code> step, than if you put the
|
||
<code>COPY . /tmp/</code> before it.</p>
|
||
|
||
<p>Because image size matters, using <code>ADD</code> to fetch packages from remote URLs is
|
||
strongly discouraged; you should use <code>curl</code> or <code>wget</code> instead. That way you can
|
||
delete the files you no longer need after they&rsquo;ve been extracted and you won&rsquo;t
|
||
have to add another layer in your image. For example, you should avoid doing
|
||
things like:</p>
|
||
|
||
<pre><code>ADD http://example.com/big.tar.xz /usr/src/things/
|
||
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
|
||
RUN make -C /usr/src/things all
|
||
</code></pre>
|
||
|
||
<p>And instead, do something like:</p>
|
||
|
||
<pre><code>RUN mkdir -p /usr/src/things \
|
||
&amp;&amp; curl -SL http://example.com/big.tar.gz \
|
||
| tar -xJC /usr/src/things \
|
||
&amp;&amp; make -C /usr/src/things all
|
||
</code></pre>
|
||
|
||
<p>For other items (files, directories) that do not require <code>ADD</code>’s tar
|
||
auto-extraction capability, you should always use <code>COPY</code>.</p>
|
||
|
||
<h3 id="entrypoint-https-docs-docker-com-reference-builder-entrypoint"><a href="https://docs.docker.com/reference/builder/#entrypoint"><code>ENTRYPOINT</code></a></h3>
|
||
|
||
<p>The best use for <code>ENTRYPOINT</code> is to set the image&rsquo;s main command, allowing that
|
||
image to be run as though it was that command (and then use <code>CMD</code> as the
|
||
default flags).</p>
|
||
|
||
<p>Let&rsquo;s start with an example of an image for the command line tool <code>s3cmd</code>:</p>
|
||
|
||
<pre><code>ENTRYPOINT [&quot;s3cmd&quot;]
|
||
CMD [&quot;--help&quot;]
|
||
</code></pre>
|
||
|
||
<p>Now the image can be run like this to show the command&rsquo;s help:</p>
|
||
|
||
<pre><code>$ docker run s3cmd
|
||
</code></pre>
|
||
|
||
<p>Or using the right parameters to execute a command:</p>
|
||
|
||
<pre><code>$ docker run s3cmd ls s3://mybucket
|
||
</code></pre>
|
||
|
||
<p>This is useful because the image name can double as a reference to the binary as
|
||
shown in the command above.</p>
|
||
|
||
<p>The <code>ENTRYPOINT</code> instruction can also be used in combination with a helper
|
||
script, allowing it to function in a similar way to the command above, even
|
||
when starting the tool may require more than one step.</p>
|
||
|
||
<p>For example, the <a href="https://registry.hub.docker.com/_/postgres/">Postgres Official Image</a>
|
||
uses the following script as its <code>ENTRYPOINT</code>:</p>
|
||
|
||
<pre><code class="language-bash">#!/bin/bash
|
||
set -e
|
||
|
||
if [ &quot;$1&quot; = 'postgres' ]; then
|
||
chown -R postgres &quot;$PGDATA&quot;
|
||
|
||
if [ -z &quot;$(ls -A &quot;$PGDATA&quot;)&quot; ]; then
|
||
gosu postgres initdb
|
||
fi
|
||
|
||
exec gosu postgres &quot;$@&quot;
|
||
fi
|
||
|
||
exec &quot;$@&quot;
|
||
</code></pre>

<blockquote>
<p><strong>Note</strong>:
This script uses <a href="http://wiki.bash-hackers.org/commands/builtin/exec">the <code>exec</code> Bash command</a>
so that the final running application becomes the container&rsquo;s PID 1. This allows
the application to receive any Unix signals sent to the container.
See the <a href="https://docs.docker.com/reference/builder/#ENTRYPOINT"><code>ENTRYPOINT</code></a>
help for more details.</p>
</blockquote>

<p>The helper script is copied into the container and run via <code>ENTRYPOINT</code> on
container start:</p>

<pre><code>COPY ./docker-entrypoint.sh /
ENTRYPOINT [&quot;/docker-entrypoint.sh&quot;]
</code></pre>

<p>This script allows the user to interact with Postgres in several ways.</p>

<p>It can simply start Postgres:</p>

<pre><code>$ docker run postgres
</code></pre>

<p>Or, it can be used to run Postgres and pass parameters to the server:</p>

<pre><code>$ docker run postgres postgres --help
</code></pre>

<p>Lastly, it could also be used to start a totally different tool, such as Bash:</p>

<pre><code>$ docker run --rm -it postgres bash
</code></pre>

<h3 id="volume-https-docs-docker-com-reference-builder-volume"><a href="https://docs.docker.com/reference/builder/#volume"><code>VOLUME</code></a></h3>

<p>The <code>VOLUME</code> instruction should be used to expose any database storage area,
configuration storage, or files/folders created by your docker container. You
are strongly encouraged to use <code>VOLUME</code> for any mutable and/or user-serviceable
parts of your image.</p>

<h3 id="user-https-docs-docker-com-reference-builder-user"><a href="https://docs.docker.com/reference/builder/#user"><code>USER</code></a></h3>

<p>If a service can run without privileges, use <code>USER</code> to change to a non-root
user. Start by creating the user and group in the <code>Dockerfile</code> with something
like <code>RUN groupadd -r postgres &amp;&amp; useradd -r -g postgres postgres</code>.</p>

<blockquote>
<p><strong>Note:</strong> Users and groups in an image get a non-deterministic
UID/GID in that the “next” UID/GID gets assigned regardless of image
rebuilds. So, if it’s critical, you should assign an explicit UID/GID.</p>
</blockquote>
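
<p>For example, a sketch of pinning the UID/GID explicitly (the <code>999</code> values here are arbitrary, chosen only for illustration):</p>

<pre><code>RUN groupadd -r -g 999 postgres &amp;&amp; useradd -r -u 999 -g postgres postgres
</code></pre>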

<p>You should avoid installing or using <code>sudo</code> since it has unpredictable TTY and
signal-forwarding behavior that can cause more problems than it solves. If
you absolutely need functionality similar to <code>sudo</code> (e.g., initializing the
daemon as root but running it as non-root), you may be able to use
<a href="https://github.com/tianon/gosu">“gosu”</a>.</p>

<p>Lastly, to reduce layers and complexity, avoid switching <code>USER</code> back
and forth frequently.</p>

<h3 id="workdir-https-docs-docker-com-reference-builder-workdir"><a href="https://docs.docker.com/reference/builder/#workdir"><code>WORKDIR</code></a></h3>

<p>For clarity and reliability, you should always use absolute paths for your
<code>WORKDIR</code>. Also, you should use <code>WORKDIR</code> instead of proliferating
instructions like <code>RUN cd … &amp;&amp; do-something</code>, which are hard to read,
troubleshoot, and maintain.</p>
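
<p>As a sketch (the <code>/app</code> path is purely illustrative), prefer the first form over the second:</p>

<pre><code>WORKDIR /app
RUN ./configure &amp;&amp; make

# harder to read, troubleshoot, and maintain:
RUN cd /app &amp;&amp; ./configure &amp;&amp; make
</code></pre>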

<h3 id="onbuild-https-docs-docker-com-reference-builder-onbuild"><a href="https://docs.docker.com/reference/builder/#onbuild"><code>ONBUILD</code></a></h3>

<p>An <code>ONBUILD</code> command executes after the current <code>Dockerfile</code> build completes.
<code>ONBUILD</code> executes in any child image derived <code>FROM</code> the current image. Think
of the <code>ONBUILD</code> command as an instruction the parent <code>Dockerfile</code> gives
to the child <code>Dockerfile</code>.</p>

<p>A Docker build executes <code>ONBUILD</code> commands before any command in a child
<code>Dockerfile</code>.</p>

<p><code>ONBUILD</code> is useful for images that are going to be built <code>FROM</code> a given
image. For example, you would use <code>ONBUILD</code> for a language stack image that
builds arbitrary user software written in that language within the
<code>Dockerfile</code>, as you can see in <a href="https://github.com/docker-library/ruby/blob/master/2.1/onbuild/Dockerfile">Ruby’s <code>ONBUILD</code> variants</a>.</p>

<p>Images built from <code>ONBUILD</code> should get a separate tag, for example:
<code>ruby:1.9-onbuild</code> or <code>ruby:2.0-onbuild</code>.</p>

<p>Be careful when putting <code>ADD</code> or <code>COPY</code> in <code>ONBUILD</code>. The “onbuild” image will
fail catastrophically if the new build&rsquo;s context is missing the resource being
added. Adding a separate tag, as recommended above, will help mitigate this by
allowing the <code>Dockerfile</code> author to make a choice.</p>
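
<p>A minimal sketch of the pattern (the paths and commands here are illustrative, not taken from a real image):</p>

<pre><code># In the parent &quot;onbuild&quot; image's Dockerfile:
ONBUILD COPY . /usr/src/app
ONBUILD RUN cd /usr/src/app &amp;&amp; bundle install

# A child Dockerfile then only needs:
FROM ruby:2.0-onbuild
</code></pre>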

<h2 id="examples-for-official-repositories">Examples for Official Repositories</h2>

<p>These Official Repositories have exemplary <code>Dockerfile</code>s:</p>

<ul>
<li><a href="https://registry.hub.docker.com/_/golang/">Go</a></li>
<li><a href="https://registry.hub.docker.com/_/perl/">Perl</a></li>
<li><a href="https://registry.hub.docker.com/_/hylang/">Hy</a></li>
<li><a href="https://registry.hub.docker.com/_/rails">Rails</a></li>
</ul>

<h2 id="additional-resources">Additional resources:</h2>

<ul>
<li><a href="https://docs.docker.com/reference/builder/#onbuild">Dockerfile Reference</a></li>
<li><a href="https://docs.docker.com/articles/baseimages/">More about Base Images</a></li>
<li><a href="https://docs.docker.com/docker-hub/builds/">More about Automated Builds</a></li>
<li><a href="https://docs.docker.com/docker-hub/official_repos/">Guidelines for Creating Official
Repositories</a></li>
</ul>
</description>
</item>

<item>
<title></title>
<link>http://localhost/articles/https/README/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/articles/https/README/</guid>
<description><p>This is an initial attempt to make it easier to test the examples in the https.md
doc.</p>

<p>At this point, it has to be a manual thing, and I&rsquo;ve been running it in boot2docker.</p>

<p>So my process is:</p>

<pre><code>$ boot2docker ssh
$$ git clone https://github.com/docker/docker
$$ cd docker/docs/articles/https
$$ make cert
</code></pre>

<p>This step has lots of things to see and manually answer, as openssl wants to be interactive.
<strong>NOTE:</strong> make sure you enter the hostname (<code>boot2docker</code> in my case) when prompted for <code>Computer Name</code>.</p>

<pre><code>$$ sudo make run
</code></pre>

<p>Then start another terminal:</p>

<pre><code>$ boot2docker ssh
$$ cd docker/docs/articles/https
$$ make client
</code></pre>

<p>The last step will connect first with <code>--tls</code> and then with <code>--tlsverify</code>.</p>

<p>Both should succeed.</p>
</description>
</item>

<item>
<title>Configuring and running Docker</title>
<link>http://localhost/articles/configuring/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/articles/configuring/</guid>
<description>

<h1 id="configuring-and-running-docker-on-various-distributions">Configuring and running Docker on various distributions</h1>

<p>After successfully installing Docker, the <code>docker</code> daemon runs with its default
configuration.</p>

<p>In a production environment, system administrators typically configure the
<code>docker</code> daemon to start and stop according to an organization&rsquo;s requirements. In most
cases, the system administrator configures a process manager such as <code>SysVinit</code>, <code>Upstart</code>,
or <code>systemd</code> to manage the <code>docker</code> daemon&rsquo;s start and stop.</p>

<h3 id="running-the-docker-daemon-directly">Running the docker daemon directly</h3>

<p>The <code>docker</code> daemon can be run directly using the <code>-d</code> option. By default it listens on
the Unix socket <code>unix:///var/run/docker.sock</code>.</p>

<pre><code>$ docker -d

INFO[0000] +job init_networkdriver()
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
...
...
</code></pre>

<h3 id="configuring-the-docker-daemon-directly">Configuring the docker daemon directly</h3>

<p>If you&rsquo;re running the <code>docker</code> daemon directly by running <code>docker -d</code> instead
of using a process manager, you can append the configuration options to the <code>docker</code> run
command directly. Just like the <code>-d</code> option, other options can be passed to the <code>docker</code>
daemon to configure it.</p>

<p>Some of the daemon&rsquo;s options are:</p>

<table>
<thead>
<tr>
<th>Flag</th>
<th>Description</th>
</tr>
</thead>

<tbody>
<tr>
<td><code>-D</code>, <code>--debug=false</code></td>
<td>Enable or disable debug mode. By default, this is false.</td>
</tr>

<tr>
<td><code>-H</code>, <code>--host=[]</code></td>
<td>Daemon socket(s) to connect to.</td>
</tr>

<tr>
<td><code>--tls=false</code></td>
<td>Enable or disable TLS. By default, this is false.</td>
</tr>
</tbody>
</table>

<p>Here is an example of running the <code>docker</code> daemon with configuration options:</p>

<pre><code>$ docker -d -D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376
</code></pre>

<p>These options:</p>

<ul>
<li>Enable <code>-D</code> (debug) mode</li>
<li>Set <code>tls</code> to true with the server certificate and key specified using <code>--tlscert</code> and <code>--tlskey</code> respectively</li>
<li>Listen for connections on <code>tcp://192.168.59.3:2376</code></li>
</ul>

<p>The command line reference has the <a href="http://localhost/articles/articles/reference/commandline/cli/#daemon">complete list of daemon flags</a>
with explanations.</p>

<h2 id="ubuntu">Ubuntu</h2>

<p>As of <code>14.04</code>, Ubuntu uses Upstart as a process manager. By default, Upstart jobs
are located in <code>/etc/init</code> and the <code>docker</code> Upstart job can be found at <code>/etc/init/docker.conf</code>.</p>

<p>After successfully <a href="http://localhost/articles/articles/installation/ubuntulinux/">installing Docker for Ubuntu</a>,
you can check the running status using Upstart in this way:</p>

<pre><code>$ sudo status docker

docker start/running, process 989
</code></pre>

<h3 id="running-docker">Running Docker</h3>

<p>You can start/stop/restart the <code>docker</code> daemon using:</p>

<pre><code>$ sudo start docker

$ sudo stop docker

$ sudo restart docker
</code></pre>

<h3 id="configuring-docker">Configuring Docker</h3>

<p>You configure the <code>docker</code> daemon in the <code>/etc/default/docker</code> file on your
system. You do this by specifying values in a <code>DOCKER_OPTS</code> variable.</p>

<p>To configure Docker options:</p>

<ol>
<li><p>Log into your host as a user with <code>sudo</code> or <code>root</code> privileges.</p></li>

<li><p>If you don&rsquo;t have one, create the <code>/etc/default/docker</code> file on your host. Depending on how
you installed Docker, you may already have this file.</p></li>

<li><p>Open the file with your favorite editor.</p>

<pre><code>$ sudo vi /etc/default/docker
</code></pre></li>

<li><p>Add a <code>DOCKER_OPTS</code> variable with the following options. These options are appended to the
<code>docker</code> daemon&rsquo;s run command.</p></li>
</ol>

<pre><code> DOCKER_OPTS=&quot;-D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376&quot;
</code></pre>

<p>These options:</p>

<ul>
<li>Enable <code>-D</code> (debug) mode</li>
<li>Set <code>tls</code> to true with the server certificate and key specified using <code>--tlscert</code> and <code>--tlskey</code> respectively</li>
<li>Listen for connections on <code>tcp://192.168.59.3:2376</code></li>
</ul>

<p>The command line reference has the <a href="http://localhost/articles/articles/reference/commandline/cli/#daemon">complete list of daemon flags</a>
with explanations.</p>

<ol start="5">
<li><p>Save and close the file.</p></li>

<li><p>Restart the <code>docker</code> daemon.</p>

<pre><code>$ sudo restart docker
</code></pre></li>

<li><p>Verify that the <code>docker</code> daemon is running as specified with the <code>ps</code> command.</p>

<pre><code>$ ps aux | grep docker | grep -v grep
</code></pre></li>
</ol>

<h3 id="logs">Logs</h3>

<p>By default, logs for Upstart jobs are located in <code>/var/log/upstart</code> and the logs for the <code>docker</code> daemon
can be found at <code>/var/log/upstart/docker.log</code>:</p>

<pre><code>$ tail -f /var/log/upstart/docker.log
INFO[0000] Loading containers: done.
INFO[0000] docker daemon: 1.6.0 4749651; execdriver: native-0.2; graphdriver: aufs
INFO[0000] +job acceptconnections()
INFO[0000] -job acceptconnections() = OK (0)
INFO[0000] Daemon has completed initialization
</code></pre>

<h2 id="centos-red-hat-enterprise-linux-fedora">CentOS / Red Hat Enterprise Linux / Fedora</h2>

<p>As of <code>7.x</code>, CentOS and RHEL use <code>systemd</code> as the process manager. As of <code>21</code>, Fedora uses
<code>systemd</code> as its process manager.</p>

<p>After successfully installing Docker for <a href="http://localhost/articles/articles/installation/centos/">CentOS</a>/<a href="http://localhost/installation/rhel/">Red Hat Enterprise Linux</a>/<a href="http://localhost/articles/articles/installation/fedora">Fedora</a>, you can check the running status in this way:</p>

<pre><code>$ sudo systemctl status docker
</code></pre>

<h3 id="running-docker-1">Running Docker</h3>

<p>You can start/stop/restart the <code>docker</code> daemon using:</p>

<pre><code>$ sudo systemctl start docker

$ sudo systemctl stop docker

$ sudo systemctl restart docker
</code></pre>

<p>If you want Docker to start at boot, you should also:</p>

<pre><code>$ sudo systemctl enable docker
</code></pre>

<h3 id="configuring-docker-1">Configuring Docker</h3>

<p>You configure the <code>docker</code> daemon in the <code>/etc/sysconfig/docker</code> file on your
host. You do this by specifying values in a variable. For CentOS 7.x and RHEL 7.x, the name
of the variable is <code>OPTIONS</code> and for CentOS 6.x and RHEL 6.x, the name of the variable is
<code>other_args</code>. For this section, we will use CentOS 7.x as an example to configure the <code>docker</code>
daemon.</p>

<p>By default, systemd services are located in <code>/etc/systemd/system</code>, <code>/lib/systemd/system</code>,
or <code>/usr/lib/systemd/system</code>. The <code>docker.service</code> file can be found in any of these three
directories depending on your host.</p>

<p>To configure Docker options:</p>

<ol>
<li><p>Log into your host as a user with <code>sudo</code> or <code>root</code> privileges.</p></li>

<li><p>If you don&rsquo;t have one, create the <code>/etc/sysconfig/docker</code> file on your host. Depending on how
you installed Docker, you may already have this file.</p></li>

<li><p>Open the file with your favorite editor.</p>

<pre><code>$ sudo vi /etc/sysconfig/docker
</code></pre></li>

<li><p>Add an <code>OPTIONS</code> variable with the following options. These options are appended to the
command that starts the <code>docker</code> daemon.</p></li>
</ol>

<pre><code> OPTIONS=&quot;-D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376&quot;
</code></pre>

<p>These options:</p>

<ul>
<li>Enable <code>-D</code> (debug) mode</li>
<li>Set <code>tls</code> to true with the server certificate and key specified using <code>--tlscert</code> and <code>--tlskey</code> respectively</li>
<li>Listen for connections on <code>tcp://192.168.59.3:2376</code></li>
</ul>

<p>The command line reference has the <a href="http://localhost/articles/articles/reference/commandline/cli/#daemon">complete list of daemon flags</a>
with explanations.</p>

<ol start="5">
<li><p>Save and close the file.</p></li>

<li><p>Restart the <code>docker</code> daemon.</p>

<pre><code>$ sudo service docker restart
</code></pre></li>

<li><p>Verify that the <code>docker</code> daemon is running as specified with the <code>ps</code> command.</p>

<pre><code>$ ps aux | grep docker | grep -v grep
</code></pre></li>
</ol>

<h3 id="logs-1">Logs</h3>

<p>systemd has its own logging system called the journal. The logs for the <code>docker</code> daemon can
be viewed using <code>journalctl -u docker</code>:</p>

<pre><code>$ sudo journalctl -u docker
May 06 00:22:05 localhost.localdomain systemd[1]: Starting Docker Application Container Engine...
May 06 00:22:05 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:05Z&quot; level=&quot;info&quot; msg=&quot;+job serveapi(unix:///var/run/docker.sock)&quot;
May 06 00:22:05 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:05Z&quot; level=&quot;info&quot; msg=&quot;Listening for HTTP on unix (/var/run/docker.sock)&quot;
May 06 00:22:06 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:06Z&quot; level=&quot;info&quot; msg=&quot;+job init_networkdriver()&quot;
May 06 00:22:06 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:06Z&quot; level=&quot;info&quot; msg=&quot;-job init_networkdriver() = OK (0)&quot;
May 06 00:22:06 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:06Z&quot; level=&quot;info&quot; msg=&quot;Loading containers: start.&quot;
May 06 00:22:06 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:06Z&quot; level=&quot;info&quot; msg=&quot;Loading containers: done.&quot;
May 06 00:22:06 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:06Z&quot; level=&quot;info&quot; msg=&quot;docker daemon: 1.5.0-dev fc0329b/1.5.0; execdriver: native-0.2; graphdriver: devicemapper&quot;
May 06 00:22:06 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:06Z&quot; level=&quot;info&quot; msg=&quot;+job acceptconnections()&quot;
May 06 00:22:06 localhost.localdomain docker[2495]: time=&quot;2015-05-06T00:22:06Z&quot; level=&quot;info&quot; msg=&quot;-job acceptconnections() = OK (0)&quot;
</code></pre>

<p><em>Note: Using and configuring the journal is an advanced topic and is beyond the scope of this article.</em></p>
</description>
</item>

<item>
<title>Automatically start containers</title>
<link>http://localhost/articles/host_integration/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/articles/host_integration/</guid>
<description>

<h1 id="automatically-start-containers">Automatically start containers</h1>

<p>As of Docker 1.2,
<a href="http://localhost/articles/articles/reference/commandline/cli/#restart-policies">restart policies</a> are the
built-in Docker mechanism for restarting containers when they exit. If set,
restart policies will be used when the Docker daemon starts up, as typically
happens after a system boot. Restart policies will ensure that linked containers
are started in the correct order.</p>
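
<p>For example, a container started with a restart policy (the <code>redis</code> image and container name here are only illustrative):</p>

<pre><code>$ docker run -d --restart=always --name redis_server redis
</code></pre>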

<p>If restart policies don&rsquo;t suit your needs (for example, if you have non-Docker processes
that depend on Docker containers), you can use a process manager like
<a href="http://upstart.ubuntu.com/">upstart</a>,
<a href="http://freedesktop.org/wiki/Software/systemd/">systemd</a> or
<a href="http://supervisord.org/">supervisor</a> instead.</p>

<h2 id="using-a-process-manager">Using a process manager</h2>

<p>Docker does not set any restart policies by default, but be aware that they will
conflict with most process managers. So don&rsquo;t set restart policies if you are
using a process manager.</p>

<p><em>Note:</em> Prior to Docker 1.2, restarting of Docker containers had to be
explicitly disabled. Refer to the
<a href="http://localhost/articles/articles/v1.1/articles/host_integration/">previous version</a> of this article for the
details on how to do that.</p>

<p>When you have finished setting up your image and are happy with your
running container, you can then attach a process manager to manage it.
When you run <code>docker start -a</code>, Docker will automatically attach to the
running container, or start it if needed, and forward all signals so that
the process manager can detect when a container stops and correctly
restart it.</p>

<p>Here are a few sample scripts for systemd and upstart to integrate with
Docker.</p>

<h2 id="examples">Examples</h2>

<p>The examples below show configuration files for two popular process managers,
upstart and systemd. In these examples, we&rsquo;ll assume that we have already
created a container to run Redis with <code>--name=redis_server</code>. These files define
a new service that will be started after the docker daemon service has started.</p>

<h3 id="upstart">upstart</h3>

<pre><code>description &quot;Redis container&quot;
author &quot;Me&quot;
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a redis_server
end script
</code></pre>

<h3 id="systemd">systemd</h3>

<pre><code>[Unit]
Description=Redis container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server

[Install]
WantedBy=multi-user.target
</code></pre>

<p>If you need to pass options to the redis container (such as <code>--env</code>),
then you&rsquo;ll need to use <code>docker run</code> rather than <code>docker start</code>. This will
create a new container every time the service is started, which will be stopped
and removed when the service is stopped.</p>

<pre><code>[Service]
...
ExecStart=/usr/bin/docker run --env foo=bar --name redis_server redis
ExecStop=/usr/bin/docker stop -t 2 redis_server ; /usr/bin/docker rm -f redis_server
...
</code></pre>
</description>
</item>

<item>
<title>PowerShell DSC Usage</title>
<link>http://localhost/articles/dsc/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/articles/dsc/</guid>
<description>

<h1 id="using-powershell-dsc">Using PowerShell DSC</h1>

<p>Windows PowerShell Desired State Configuration (DSC) is a configuration
management tool that extends the existing functionality of Windows PowerShell.
DSC uses a declarative syntax to define the state in which a target should be
configured. More information about PowerShell DSC can be found at
<a href="http://technet.microsoft.com/en-us/library/dn249912.aspx">http://technet.microsoft.com/en-us/library/dn249912.aspx</a>.</p>

<h2 id="requirements">Requirements</h2>

<p>To use this guide you&rsquo;ll need a Windows host with PowerShell v4.0 or newer.</p>

<p>The included DSC configuration script also uses the official PPA, so
only an Ubuntu target is supported. The Ubuntu target must already have the
required OMI Server and PowerShell DSC for Linux providers installed. More
information can be found at <a href="https://github.com/MSFTOSSMgmt/WPSDSCLinux">https://github.com/MSFTOSSMgmt/WPSDSCLinux</a>.
The source repository listed below also includes PowerShell DSC for Linux
installation and init scripts along with more detailed installation information.</p>

<h2 id="installation">Installation</h2>

<p>The DSC configuration example source is available in the following repository:
<a href="https://github.com/anweiss/DockerClientDSC">https://github.com/anweiss/DockerClientDSC</a>. It can be cloned with:</p>

<pre><code>$ git clone https://github.com/anweiss/DockerClientDSC.git
</code></pre>

<h2 id="usage">Usage</h2>

<p>The DSC configuration utilizes a set of shell scripts to determine whether or
not the specified Docker components are configured on the target node(s). The
source repository also includes a script (<code>RunDockerClientConfig.ps1</code>) that can
be used to establish the required CIM session(s) and execute the
<code>Set-DscConfiguration</code> cmdlet.</p>

<p>More detailed usage information can be found at
<a href="https://github.com/anweiss/DockerClientDSC">https://github.com/anweiss/DockerClientDSC</a>.</p>

<h3 id="install-docker">Install Docker</h3>

<p>The Docker installation configuration is equivalent to running:</p>

<pre><code>apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys\
36A1D7869245C8950F966E92D8576A8BA88D21E9
sh -c &quot;echo deb https://get.docker.com/ubuntu docker main\
&gt; /etc/apt/sources.list.d/docker.list&quot;
apt-get update
apt-get install lxc-docker
</code></pre>

<p>Ensure that your current working directory is set to the <code>DockerClientDSC</code>
source and load the DockerClient configuration into the current PowerShell
session:</p>

<pre><code class="language-powershell">. .\DockerClient.ps1
</code></pre>

<p>Generate the required DSC configuration .mof file for the targeted node:</p>

<pre><code class="language-powershell">DockerClient -Hostname &quot;myhost&quot;
</code></pre>

<p>A sample DSC configuration data file has also been included and can be modified
and used in conjunction with or in place of the <code>Hostname</code> parameter:</p>

<pre><code class="language-powershell">DockerClient -ConfigurationData .\DockerConfigData.psd1
</code></pre>

<p>Start the configuration application process on the targeted node:</p>

<pre><code class="language-powershell">.\RunDockerClientConfig.ps1 -Hostname &quot;myhost&quot;
</code></pre>

<p>The <code>RunDockerClientConfig.ps1</code> script can also parse a DSC configuration data
file and execute configurations against multiple nodes as such:</p>

<pre><code class="language-powershell">.\RunDockerClientConfig.ps1 -ConfigurationData .\DockerConfigData.psd1
</code></pre>

<h3 id="images">Images</h3>

<p>Image configuration is equivalent to running <code>docker pull [image]</code> or
<code>docker rmi -f [image]</code>.</p>

<p>Using the same steps defined above, execute <code>DockerClient</code> with the <code>Image</code>
parameter and apply the configuration:</p>

<pre><code class="language-powershell">DockerClient -Hostname &quot;myhost&quot; -Image &quot;node&quot;
.\RunDockerClientConfig.ps1 -Hostname &quot;myhost&quot;
</code></pre>

<p>You can also configure the host to pull multiple images:</p>

<pre><code class="language-powershell">DockerClient -Hostname &quot;myhost&quot; -Image &quot;node&quot;,&quot;mongo&quot;
.\RunDockerClientConfig.ps1 -Hostname &quot;myhost&quot;
</code></pre>

<p>To remove images, use a hashtable as follows:</p>

<pre><code class="language-powershell">DockerClient -Hostname &quot;myhost&quot; -Image @{Name=&quot;node&quot;; Remove=$true}
.\RunDockerClientConfig.ps1 -Hostname $hostname
</code></pre>

<h3 id="containers">Containers</h3>

<p>Container configuration is equivalent to running:</p>

<pre><code>docker run -d --name=&quot;[containername]&quot; -p '[port]' -e '[env]' --link '[link]'\
'[image]' '[command]'
</code></pre>

<p>or</p>

<pre><code>docker rm -f [containername]
</code></pre>

<p>To create or remove containers, you can use the <code>Container</code> parameter with one
or more hashtables. The hashtable(s) passed to this parameter can have the
following properties:</p>

<ul>
<li>Name (required)</li>
<li>Image (required unless Remove property is set to <code>$true</code>)</li>
<li>Port</li>
<li>Env</li>
<li>Link</li>
<li>Command</li>
<li>Remove</li>
</ul>

<p>For example, create a hashtable with the settings for your container:</p>

<pre><code class="language-powershell">$webContainer = @{Name=&quot;web&quot;; Image=&quot;anweiss/docker-platynem&quot;; Port=&quot;80:80&quot;}
</code></pre>

<p>Then, using the same steps defined above, execute
<code>DockerClient</code> with the <code>-Image</code> and <code>-Container</code> parameters:</p>

<pre><code class="language-powershell">DockerClient -Hostname &quot;myhost&quot; -Image node -Container $webContainer
.\RunDockerClientConfig.ps1 -Hostname &quot;myhost&quot;
</code></pre>

<p>Existing containers can also be removed as follows:</p>

<pre><code class="language-powershell">$containerToRemove = @{Name=&quot;web&quot;; Remove=$true}
DockerClient -Hostname &quot;myhost&quot; -Container $containerToRemove
.\RunDockerClientConfig.ps1 -Hostname &quot;myhost&quot;
</code></pre>

<p>Here is a hashtable with all of the properties that can be used to create a
container:</p>

<pre><code class="language-powershell">$containerProps = @{Name=&quot;web&quot;; Image=&quot;node:latest&quot;; Port=&quot;80:80&quot;; `
Env=&quot;PORT=80&quot;; Link=&quot;db:db&quot;; Command=&quot;grunt&quot;}
</code></pre>
</description>
</item>
|
||
|
||
<item>
|
||
<title>Protect the Docker daemon socket</title>
|
||
<link>http://localhost/articles/https/</link>
|
||
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
|
||
|
||
<guid>http://localhost/articles/https/</guid>
|
||
<description>
|
||
|
||
<h1 id="protect-the-docker-daemon-socket">Protect the Docker daemon socket</h1>
|
||
|
||
<p>By default, Docker runs via a non-networked Unix socket. It can also
|
||
optionally communicate using a HTTP socket.</p>
|
||
|
||
<p>If you need Docker to be reachable via the network in a safe manner, you can
|
||
enable TLS by specifying the <code>tlsverify</code> flag and pointing Docker&rsquo;s
|
||
<code>tlscacert</code> flag to a trusted CA certificate.</p>
|
||
|
||
<p>In the daemon mode, it will only allow connections from clients
|
||
authenticated by a certificate signed by that CA. In the client mode,
|
||
it will only connect to servers with a certificate signed by that CA.</p>
|
||
|
||
<blockquote>
|
||
<p><strong>Warning</strong>:
|
||
Using TLS and managing a CA is an advanced topic. Please familiarize yourself
|
||
with OpenSSL, x509 and TLS before using it in production.</p>
|
||
|
||
<p><strong>Warning</strong>:
|
||
These TLS commands will only generate a working set of certificates on Linux.
|
||
Mac OS X comes with a version of OpenSSL that is incompatible with the
|
||
certificates that Docker requires.</p>
|
||
</blockquote>
|
||
|
||
<h2 id="create-a-ca-server-and-client-keys-with-openssl">Create a CA, server and client keys with OpenSSL</h2>

<blockquote>
<p><strong>Note</strong>: replace all instances of <code>$HOST</code> in the following example with the
DNS name of your Docker daemon&rsquo;s host.</p>
</blockquote>

<p>First generate CA private and public keys:</p>

<pre><code>$ openssl genrsa -aes256 -out ca-key.pem 2048
Generating RSA private key, 2048 bit long modulus
......+++
...............+++
e is 65537 (0x10001)
Enter pass phrase for ca-key.pem:
Verifying - Enter pass phrase for ca-key.pem:
$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
Enter pass phrase for ca-key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:Queensland
Locality Name (eg, city) []:Brisbane
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Docker Inc
Organizational Unit Name (eg, section) []:Boot2Docker
Common Name (e.g. server FQDN or YOUR name) []:$HOST
Email Address []:Sven@home.org.au
</code></pre>

<p>Now that we have a CA, you can create a server key and certificate
signing request (CSR). Make sure that &ldquo;Common Name&rdquo; (i.e., server FQDN or YOUR
name) matches the hostname you will use to connect to Docker:</p>

<blockquote>
<p><strong>Note</strong>: replace all instances of <code>$HOST</code> in the following example with the
DNS name of your Docker daemon&rsquo;s host.</p>
</blockquote>

<pre><code>$ openssl genrsa -out server-key.pem 2048
Generating RSA private key, 2048 bit long modulus
......................................................+++
............................................+++
e is 65537 (0x10001)
$ openssl req -subj &quot;/CN=$HOST&quot; -new -key server-key.pem -out server.csr
</code></pre>

<p>Next, we&rsquo;re going to sign the public key with our CA:</p>

<p>Since TLS connections can be made via IP address as well as DNS name, they need
to be specified when creating the certificate. For example, to allow connections
using <code>10.10.10.20</code> and <code>127.0.0.1</code>:</p>

<pre><code>$ echo subjectAltName = IP:10.10.10.20,IP:127.0.0.1 &gt; extfile.cnf

$ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -extfile extfile.cnf
Signature ok
subject=/CN=your.host.com
Getting CA Private Key
Enter pass phrase for ca-key.pem:
</code></pre>

<p>For client authentication, create a client key and certificate signing
request:</p>

<pre><code>$ openssl genrsa -out key.pem 2048
Generating RSA private key, 2048 bit long modulus
...............................................+++
...............................................................+++
e is 65537 (0x10001)
$ openssl req -subj '/CN=client' -new -key key.pem -out client.csr
</code></pre>

<p>To make the key suitable for client authentication, create an extensions
config file:</p>

<pre><code>$ echo extendedKeyUsage = clientAuth &gt; extfile.cnf
</code></pre>

<p>Now sign the public key:</p>

<pre><code>$ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile.cnf
Signature ok
subject=/CN=client
Getting CA Private Key
Enter pass phrase for ca-key.pem:
</code></pre>

<p>After generating <code>cert.pem</code> and <code>server-cert.pem</code> you can safely remove the
two certificate signing requests:</p>

<pre><code>$ rm -v client.csr server.csr
</code></pre>

<p>With a default <code>umask</code> of 022, your secret keys will be <em>world-readable</em>, and
writable only by you.</p>

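As a quick check of what that umask claim means in practice, here is a minimal Python sketch (standalone, not part of Docker) computing the mode a newly created file receives. This is standard POSIX behavior: file creation requests mode 666, and the umask bits are cleared from it.

```python
# Mode a newly created regular file gets under a given umask:
# open() requests 0o666, and the umask bits are cleared from it.
umask = 0o022
requested = 0o666
mode = requested & ~umask
print(oct(mode))  # 0o644: readable by everyone, writable only by the owner
```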
<p>In order to protect your keys from accidental damage, you will want to remove their
write permissions. To make them only readable by you, change file modes as follows:</p>

<pre><code>$ chmod -v 0400 ca-key.pem key.pem server-key.pem
</code></pre>

<p>Certificates can be world-readable, but you might want to remove write access to
prevent accidental damage:</p>

<pre><code>$ chmod -v 0444 ca.pem server-cert.pem cert.pem
</code></pre>

<p>Now you can make the Docker daemon only accept connections from clients
providing a certificate trusted by our CA:</p>

<pre><code>$ docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H=0.0.0.0:2376
</code></pre>

<p>To be able to connect to Docker and validate its certificate, you now
need to provide your client keys, certificates and trusted CA:</p>

<blockquote>
<p><strong>Note</strong>: replace all instances of <code>$HOST</code> in the following example with the
DNS name of your Docker daemon&rsquo;s host.</p>
</blockquote>

<pre><code>$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=$HOST:2376 version
</code></pre>

<blockquote>
<p><strong>Note</strong>:
Docker over TLS should run on TCP port 2376.</p>

<p><strong>Warning</strong>:
As shown in the example above, you don&rsquo;t have to run the <code>docker</code> client
with <code>sudo</code> or the <code>docker</code> group when you use certificate authentication.
That means anyone with the keys can give any instructions to your Docker
daemon, giving them root access to the machine hosting the daemon. Guard
these keys as you would a root password!</p>
</blockquote>

<h2 id="secure-by-default">Secure by default</h2>
|
||
|
||
<p>If you want to secure your Docker client connections by default, you can move
|
||
the files to the <code>.docker</code> directory in your home directory &ndash; and set the
|
||
<code>DOCKER_HOST</code> and <code>DOCKER_TLS_VERIFY</code> variables as well (instead of passing
|
||
<code>-H=tcp://$HOST:2376</code> and <code>--tlsverify</code> on every call).</p>
|
||
|
||
<pre><code>$ mkdir -pv ~/.docker
$ cp -v {ca,cert,key}.pem ~/.docker
$ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1
</code></pre>

<p>Docker will now connect securely by default:</p>

<pre><code>$ docker ps
</code></pre>

<h2 id="other-modes">Other modes</h2>
|
||
|
||
<p>If you don&rsquo;t want to have complete two-way authentication, you can run
|
||
Docker in various other modes by mixing the flags.</p>
|
||
|
||
<h3 id="daemon-modes">Daemon modes</h3>
|
||
|
||
<ul>
|
||
<li><code>tlsverify</code>, <code>tlscacert</code>, <code>tlscert</code>, <code>tlskey</code> set: Authenticate clients</li>
|
||
<li><code>tls</code>, <code>tlscert</code>, <code>tlskey</code>: Do not authenticate clients</li>
|
||
</ul>
|
||
|
||
<h3 id="client-modes">Client modes</h3>
|
||
|
||
<ul>
|
||
<li><code>tls</code>: Authenticate server based on public/default CA pool</li>
|
||
<li><code>tlsverify</code>, <code>tlscacert</code>: Authenticate server based on given CA</li>
|
||
<li><code>tls</code>, <code>tlscert</code>, <code>tlskey</code>: Authenticate with client certificate, do not
|
||
authenticate server based on given CA</li>
|
||
<li><code>tlsverify</code>, <code>tlscacert</code>, <code>tlscert</code>, <code>tlskey</code>: Authenticate with client
|
||
certificate and authenticate server based on given CA</li>
|
||
</ul>
|
||
|
||
<p>If found, the client will send its client certificate, so you just need
to drop your keys into <code>~/.docker/{ca,cert,key}.pem</code>. Alternatively,
if you want to store your keys in another location, you can specify that
location using the environment variable <code>DOCKER_CERT_PATH</code>.</p>

<pre><code>$ export DOCKER_CERT_PATH=~/.docker/zone1/
$ docker --tlsverify ps
</code></pre>

<h3 id="connecting-to-the-secure-docker-port-using-curl">Connecting to the secure Docker port using <code>curl</code></h3>
|
||
|
||
<p>To use <code>curl</code> to make test API requests, you need to use three extra command line
|
||
flags:</p>
|
||
|
||
<pre><code>$ curl https://$HOST:2376/images/json \
|
||
--cert ~/.docker/cert.pem \
|
||
--key ~/.docker/key.pem \
|
||
--cacert ~/.docker/ca.pem
|
||
</code></pre>
|
||
</description>
</item>

<item>
<title>Network configuration</title>
<link>http://localhost/articles/networking/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/articles/networking/</guid>
<description>

<h1 id="network-configuration">Network configuration</h1>
|
||
|
||
<h2 id="summary">Summary</h2>
|
||
|
||
<p>When Docker starts, it creates a virtual interface named <code>docker0</code> on
the host machine. It randomly chooses an address and subnet from the
private range defined by <a href="http://tools.ietf.org/html/rfc1918">RFC 1918</a>
that are not in use on the host machine, and assigns it to <code>docker0</code>.
Docker made the choice <code>172.17.42.1/16</code> when I started it a few minutes
ago, for example — a 16-bit netmask providing 65,534 addresses for the
host machine and its containers. The MAC address is generated using the
IP address allocated to the container to avoid ARP collisions, using a
range from <code>02:42:ac:11:00:00</code> to <code>02:42:ac:11:ff:ff</code>.</p>

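The IP-to-MAC derivation described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Docker's code: the fixed <code>02:42</code> prefix followed by the four IPv4 address bytes matches the quoted range (<code>ac:11</code> is hex for <code>172.17</code>), but the helper name is ours.

```python
import ipaddress

def container_mac(ip: str) -> str:
    """Derive a Docker-style MAC from a container IP: a fixed 02:42
    prefix followed by the four bytes of the IPv4 address in hex."""
    octets = ipaddress.IPv4Address(ip).packed
    return "02:42:" + ":".join(f"{b:02x}" for b in octets)

print(container_mac("172.17.0.2"))   # 02:42:ac:11:00:02
print(container_mac("172.17.42.1"))  # 02:42:ac:11:2a:01
```

Because the MAC is a pure function of the IP, two containers can only collide on a MAC if they were somehow assigned the same IP, which is exactly the ARP-collision avoidance the paragraph describes.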
<blockquote>
<p><strong>Note:</strong>
This document discusses advanced networking configuration
and options for Docker. In most cases you won&rsquo;t need this information.
If you&rsquo;re looking to get started with a simpler explanation of Docker
networking and an introduction to the concept of container linking, see
the <a href="http://localhost/userguide/dockerlinks/">Docker User Guide</a>.</p>
</blockquote>

<p>But <code>docker0</code> is no ordinary interface. It is a virtual <em>Ethernet
bridge</em> that automatically forwards packets between any other network
interfaces that are attached to it. This lets containers communicate
both with the host machine and with each other. Every time Docker
creates a container, it creates a pair of “peer” interfaces that are
like opposite ends of a pipe — a packet sent on one will be received on
the other. It gives one of the peers to the container to become its
<code>eth0</code> interface and keeps the other peer, with a unique name like
<code>vethAQI2QT</code>, out in the namespace of the host machine. By binding
every <code>veth*</code> interface to the <code>docker0</code> bridge, Docker creates a
virtual subnet shared between the host machine and every Docker
container.</p>

<p>The remaining sections of this document explain all of the ways that you
can use Docker options and — in advanced cases — raw Linux networking
commands to tweak, supplement, or entirely replace Docker&rsquo;s default
networking configuration.</p>

<h2 id="quick-guide-to-the-options">Quick guide to the options</h2>

<p>Here is a quick list of the networking-related Docker command-line
options, in case it helps you find the section below that you are
looking for.</p>

<p>Some networking command-line options can only be supplied to the Docker
server when it starts up, and cannot be changed once it is running:</p>

<ul>
<li><p><code>-b BRIDGE</code> or <code>--bridge=BRIDGE</code> — see
<a href="#bridge-building">Building your own bridge</a></p></li>

<li><p><code>--bip=CIDR</code> — see
<a href="#docker0">Customizing docker0</a></p></li>

<li><p><code>--default-gateway=IP_ADDRESS</code> — see
<a href="#container-networking">How Docker networks a container</a></p></li>

<li><p><code>--default-gateway-v6=IP_ADDRESS</code> — see
<a href="#ipv6">IPv6</a></p></li>

<li><p><code>--fixed-cidr</code> — see
<a href="#docker0">Customizing docker0</a></p></li>

<li><p><code>--fixed-cidr-v6</code> — see
<a href="#ipv6">IPv6</a></p></li>

<li><p><code>-H SOCKET...</code> or <code>--host=SOCKET...</code> —
This might sound like it would affect container networking,
but it actually faces in the other direction:
it tells the Docker server over what channels
it should be willing to receive commands
like “run container” and “stop container.”</p></li>

<li><p><code>--icc=true|false</code> — see
<a href="#between-containers">Communication between containers</a></p></li>

<li><p><code>--ip=IP_ADDRESS</code> — see
<a href="#binding-ports">Binding container ports</a></p></li>

<li><p><code>--ipv6=true|false</code> — see
<a href="#ipv6">IPv6</a></p></li>

<li><p><code>--ip-forward=true|false</code> — see
<a href="#the-world">Communication between containers and the wider world</a></p></li>

<li><p><code>--iptables=true|false</code> — see
<a href="#between-containers">Communication between containers</a></p></li>

<li><p><code>--mtu=BYTES</code> — see
<a href="#docker0">Customizing docker0</a></p></li>

<li><p><code>--userland-proxy=true|false</code> — see
<a href="#binding-ports">Binding container ports</a></p></li>
</ul>

<p>There are two networking options that can be supplied either at startup
or when <code>docker run</code> is invoked. When provided at startup, they set the
default value that <code>docker run</code> will later use if the options are not
specified:</p>

<ul>
<li><p><code>--dns=IP_ADDRESS...</code> — see
<a href="#dns">Configuring DNS</a></p></li>

<li><p><code>--dns-search=DOMAIN...</code> — see
<a href="#dns">Configuring DNS</a></p></li>
</ul>

<p>Finally, several networking options can only be provided when calling
<code>docker run</code> because they specify something specific to one container:</p>

<ul>
<li><p><code>-h HOSTNAME</code> or <code>--hostname=HOSTNAME</code> — see
<a href="#dns">Configuring DNS</a> and
<a href="#container-networking">How Docker networks a container</a></p></li>

<li><p><code>--link=CONTAINER_NAME_or_ID:ALIAS</code> — see
<a href="#dns">Configuring DNS</a> and
<a href="#between-containers">Communication between containers</a></p></li>

<li><p><code>--net=bridge|none|container:NAME_or_ID|host</code> — see
<a href="#container-networking">How Docker networks a container</a></p></li>

<li><p><code>--mac-address=MACADDRESS...</code> — see
<a href="#container-networking">How Docker networks a container</a></p></li>

<li><p><code>-p SPEC</code> or <code>--publish=SPEC</code> — see
<a href="#binding-ports">Binding container ports</a></p></li>

<li><p><code>-P</code> or <code>--publish-all=true|false</code> — see
<a href="#binding-ports">Binding container ports</a></p></li>
</ul>

<p>To supply networking options to the Docker server at startup, use the
<code>DOCKER_OPTS</code> variable in the Docker upstart configuration file. For Ubuntu,
edit the variable in <code>/etc/default/docker</code>; for CentOS, edit
<code>/etc/sysconfig/docker</code>.</p>

<p>The following example illustrates how to configure Docker on Ubuntu to recognize a
newly built bridge.</p>

<p>Edit the <code>/etc/default/docker</code> file:</p>

<pre><code>$ echo 'DOCKER_OPTS=&quot;-b=bridge0&quot;' &gt;&gt; /etc/default/docker
</code></pre>

<p>Then restart the Docker server.</p>

<pre><code>$ sudo service docker restart
</code></pre>

<p>For additional information on bridges, see <a href="#building-your-own-bridge">building your own
bridge</a> later on this page.</p>

<p>The following sections tackle all of the above topics in an order that moves
roughly from simplest to most complex.</p>

<h2 id="configuring-dns">Configuring DNS</h2>
|
||
|
||
<p><a name="dns"></a></p>
|
||
|
||
<p>How can Docker supply each container with a hostname and DNS
configuration, without having to build a custom image with the hostname
written inside? Its trick is to overlay three crucial <code>/etc</code> files
inside the container with virtual files where it can write fresh
information. You can see this by running <code>mount</code> inside a container:</p>

<pre><code>$$ mount
...
/dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
/dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
/dev/disk/by-uuid/1fec...ebdf on /etc/resolv.conf type ext4 ...
...
</code></pre>

<p>This arrangement allows Docker to do clever things like keep
<code>resolv.conf</code> up to date across all containers when the host machine
receives new configuration over DHCP later. The exact details of how
Docker maintains these files inside the container can change from one
Docker version to the next, so you should leave the files themselves
alone and use the following Docker options instead.</p>

<p>Four different options affect container domain name services.</p>

<ul>
<li><p><code>-h HOSTNAME</code> or <code>--hostname=HOSTNAME</code> — sets the hostname by which
the container knows itself. This is written into <code>/etc/hostname</code>,
into <code>/etc/hosts</code> as the name of the container&rsquo;s host-facing IP
address, and is the name that <code>/bin/bash</code> inside the container will
display inside its prompt. But the hostname is not easy to see from
outside the container. It will not appear in <code>docker ps</code> nor in the
<code>/etc/hosts</code> file of any other container.</p></li>

<li><p><code>--link=CONTAINER_NAME_or_ID:ALIAS</code> — using this option as you <code>run</code> a
container gives the new container&rsquo;s <code>/etc/hosts</code> an extra entry
named <code>ALIAS</code> that points to the IP address of the container identified by
<code>CONTAINER_NAME_or_ID</code>. This lets processes inside the new container
connect to the hostname <code>ALIAS</code> without having to know its IP. The
<code>--link=</code> option is discussed in more detail below, in the section
<a href="#between-containers">Communication between containers</a>. Because
Docker may assign a different IP address to the linked containers
on restart, Docker updates the <code>ALIAS</code> entry in the <code>/etc/hosts</code> file
of the recipient containers.</p></li>

<li><p><code>--dns=IP_ADDRESS...</code> — sets the IP addresses added as <code>nameserver</code>
lines to the container&rsquo;s <code>/etc/resolv.conf</code> file. Processes in the
container, when confronted with a hostname not in <code>/etc/hosts</code>, will
connect to these IP addresses on port 53 looking for name resolution
services.</p></li>

<li><p><code>--dns-search=DOMAIN...</code> — sets the domain names that are searched
when a bare unqualified hostname is used inside of the container, by
writing <code>search</code> lines into the container&rsquo;s <code>/etc/resolv.conf</code>.
When a container process attempts to access <code>host</code> and the search
domain <code>example.com</code> is set, for instance, the DNS logic will not
only look up <code>host</code> but also <code>host.example.com</code>.
Use <code>--dns-search=.</code> if you don&rsquo;t wish to set the search domain.</p></li>
</ul>

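The <code>--dns-search</code> behavior above can be sketched as a tiny name-expansion helper. This is a deliberately simplified Python illustration (the function name is ours, and real resolvers also apply an <code>ndots</code> rule that this sketch ignores):

```python
def candidate_names(name: str, search_domains):
    """Names a resolver would try for a hostname, per a resolv.conf
    `search` list (simplified: no ndots handling)."""
    if "." in name:
        return [name]  # already qualified: try it as-is
    # unqualified: try the bare name plus each search-domain expansion
    return [name] + [f"{name}.{d}" for d in search_domains]

print(candidate_names("host", ["example.com"]))
# ['host', 'host.example.com']
```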
<p>Regarding DNS settings, in the absence of either the <code>--dns=IP_ADDRESS...</code>
or the <code>--dns-search=DOMAIN...</code> option, Docker makes each container&rsquo;s
<code>/etc/resolv.conf</code> look like the <code>/etc/resolv.conf</code> of the host machine (where
the <code>docker</code> daemon runs). When creating the container&rsquo;s <code>/etc/resolv.conf</code>,
the daemon filters out all localhost IP address <code>nameserver</code> entries from
the host&rsquo;s original file.</p>

<p>Filtering is necessary because all localhost addresses on the host are
unreachable from the container&rsquo;s network. After this filtering, if there
are no more <code>nameserver</code> entries left in the container&rsquo;s <code>/etc/resolv.conf</code>
file, the daemon adds public Google DNS nameservers
(8.8.8.8 and 8.8.4.4) to the container&rsquo;s DNS configuration. If IPv6 is
enabled on the daemon, the public IPv6 Google DNS nameservers will also
be added (2001:4860:4860::8888 and 2001:4860:4860::8844).</p>

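The filter-then-fallback behavior just described can be sketched as follows. This is an illustrative Python approximation of the daemon's logic, not its actual code (the function name and the simple line-based parsing are our assumptions; the IPv6 branch is omitted):

```python
import ipaddress

# Fallback servers added when filtering leaves no usable nameserver.
GOOGLE_DNS = ["8.8.8.8", "8.8.4.4"]

def build_container_resolv(host_resolv: str) -> str:
    """Copy the host's resolv.conf, dropping loopback nameservers;
    fall back to public DNS if no nameserver survives."""
    kept, has_nameserver = [], False
    for line in host_resolv.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "nameserver":
            try:
                if ipaddress.ip_address(parts[1]).is_loopback:
                    continue  # unreachable from the container's network
            except ValueError:
                pass
            else:
                has_nameserver = True
        kept.append(line)
    if not has_nameserver:
        kept += [f"nameserver {ip}" for ip in GOOGLE_DNS]
    return "\n".join(kept)

print(build_container_resolv("nameserver 127.0.1.1\nsearch example.com"))
# search example.com
# nameserver 8.8.8.8
# nameserver 8.8.4.4
```

A host that uses only a local caching resolver (a common Ubuntu dnsmasq setup) hits exactly this fallback path, which is why such containers end up pointed at Google DNS.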
<blockquote>
<p><strong>Note</strong>:
If you need access to a host&rsquo;s localhost resolver, you must modify your
DNS service on the host to listen on a non-localhost address that is
reachable from within the container.</p>
</blockquote>

<p>You might wonder what happens when the host machine&rsquo;s
<code>/etc/resolv.conf</code> file changes. The <code>docker</code> daemon has a file change
notifier active which will watch for changes to the host DNS configuration.</p>

<blockquote>
<p><strong>Note</strong>:
The file change notifier relies on the Linux kernel&rsquo;s inotify feature.
Because this feature is currently incompatible with the overlay filesystem
driver, a Docker daemon using &ldquo;overlay&rdquo; will not be able to take advantage
of the <code>/etc/resolv.conf</code> auto-update feature.</p>
</blockquote>

<p>When the host file changes, all stopped containers whose <code>resolv.conf</code>
matches the host&rsquo;s will be updated immediately to the newest host
configuration. Containers which are running when the host configuration
changes will need to be stopped and started to pick up the host changes, because
there is no facility to ensure atomic writes of the <code>resolv.conf</code> file while the
container is running. If the container&rsquo;s <code>resolv.conf</code> has been edited since
it was started with the default configuration, no replacement will be
attempted, as that would overwrite the changes performed by the container.
If the options (<code>--dns</code> or <code>--dns-search</code>) have been used to modify the
default host configuration, then the replacement with an updated host&rsquo;s
<code>/etc/resolv.conf</code> will not happen either.</p>

<blockquote>
<p><strong>Note</strong>:
For containers which were created prior to the implementation of
the <code>/etc/resolv.conf</code> update feature in Docker 1.5.0: those
containers will <strong>not</strong> receive updates when the host <code>resolv.conf</code>
file changes. Only containers created with Docker 1.5.0 and above
will utilize this auto-update feature.</p>
</blockquote>

<h2 id="communication-between-containers-and-the-wider-world">Communication between containers and the wider world</h2>
|
||
|
||
<p><a name="the-world"></a></p>
|
||
|
||
<p>Whether a container can talk to the world is governed by two factors.</p>
|
||
|
||
<ol>
<li><p>Is the host machine willing to forward IP packets? This is governed
by the <code>ip_forward</code> system parameter. Packets can only pass between
containers if this parameter is <code>1</code>. Usually you will simply leave
the Docker server at its default setting <code>--ip-forward=true</code> and
Docker will set <code>ip_forward</code> to <code>1</code> for you when the server
starts up. To check the setting or turn it on manually:</p>

<pre><code>$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
</code></pre>

<p>Many using Docker will want <code>ip_forward</code> to be on, to at
least make communication <em>possible</em> between containers and
the wider world.</p>

<p>It may also be needed for inter-container communication if you are
in a multiple bridge setup.</p></li>

<li><p>Do your <code>iptables</code> allow this particular connection? Docker will
never make changes to your system <code>iptables</code> rules if you set
<code>--iptables=false</code> when the daemon starts. Otherwise the Docker
server will append forwarding rules to the <code>DOCKER</code> filter chain.</p></li>
</ol>

<p>Docker will not delete or modify any pre-existing rules from the <code>DOCKER</code>
filter chain. This allows the user to create in advance any rules required
to further restrict access to the containers.</p>

<p>Docker&rsquo;s forward rules permit all external source IPs by default. To allow
only a specific IP or network to access the containers, insert a negated
rule at the top of the <code>DOCKER</code> filter chain. For example, to restrict
external access such that <em>only</em> source IP 8.8.8.8 can access the
containers, the following rule could be added:</p>

<pre><code>$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
</code></pre>

<h2 id="communication-between-containers">Communication between containers</h2>
|
||
|
||
<p><a name="between-containers"></a></p>
|
||
|
||
<p>Whether two containers can communicate is governed, at the operating
|
||
system level, by two factors.</p>
|
||
|
||
<ol>
<li><p>Does the network topology even connect the containers&rsquo; network
interfaces? By default Docker will attach all containers to a
single <code>docker0</code> bridge, providing a path for packets to travel
between them. See the later sections of this document for other
possible topologies.</p></li>

<li><p>Do your <code>iptables</code> allow this particular connection? Docker will never
make changes to your system <code>iptables</code> rules if you set
<code>--iptables=false</code> when the daemon starts. Otherwise the Docker server
will add a default rule to the <code>FORWARD</code> chain with a blanket <code>ACCEPT</code>
policy if you retain the default <code>--icc=true</code>, or else will set the
policy to <code>DROP</code> if <code>--icc=false</code>.</p></li>
</ol>

<p>It is a strategic question whether to leave <code>--icc=true</code> or change it to
<code>--icc=false</code> so that
<code>iptables</code> will protect other containers — and the main host — from
having arbitrary ports probed or accessed by a container that gets
compromised.</p>

<p>If you choose the most secure setting of <code>--icc=false</code>, then how can
containers communicate in those cases where you <em>want</em> them to provide
each other services?</p>

<p>The answer is the <code>--link=CONTAINER_NAME_or_ID:ALIAS</code> option, which was
mentioned in the previous section because of its effect upon name
services. If the Docker daemon is running with both <code>--icc=false</code> and
<code>--iptables=true</code> then, when it sees <code>docker run</code> invoked with the
<code>--link=</code> option, the Docker server will insert a pair of <code>iptables</code>
<code>ACCEPT</code> rules so that the new container can connect to the ports
exposed by the other container — the ports that it mentioned in the
<code>EXPOSE</code> lines of its <code>Dockerfile</code>. Docker has more documentation on
this subject — see the <a href="http://localhost/userguide/dockerlinks">linking Docker containers</a>
page for further details.</p>

<blockquote>
<p><strong>Note</strong>:
The value <code>CONTAINER_NAME</code> in <code>--link=</code> must either be an
auto-assigned Docker name like <code>stupefied_pare</code> or else the name you
assigned with <code>--name=</code> when you ran <code>docker run</code>. It cannot be a
hostname, which Docker will not recognize in the context of the
<code>--link=</code> option.</p>
</blockquote>

<p>You can run the <code>iptables</code> command on your Docker host to see whether
the <code>FORWARD</code> chain has a default policy of <code>ACCEPT</code> or <code>DROP</code>:</p>

<pre><code># When --icc=false, you should see a DROP rule:

$ sudo iptables -L -n
...
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
...

# When a --link= has been created under --icc=false,
# you should see port-specific ACCEPT rules overriding
# the subsequent DROP policy for all other packets:

$ sudo iptables -L -n
...
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80
</code></pre>

<blockquote>
<p><strong>Note</strong>:
Docker is careful that its host-wide <code>iptables</code> rules fully expose
containers to each other&rsquo;s raw IP addresses, so connections from one
container to another should always appear to be originating from the
first container&rsquo;s own IP address.</p>
</blockquote>

<h2 id="binding-container-ports-to-the-host">Binding container ports to the host</h2>
|
||
|
||
<p><a name="binding-ports"></a></p>
|
||
|
||
<p>By default Docker containers can make connections to the outside world,
but the outside world cannot connect to containers. Each outgoing
connection will appear to originate from one of the host machine&rsquo;s own
IP addresses thanks to an <code>iptables</code> masquerading rule on the host
machine that the Docker server creates when it starts:</p>

<pre><code># You can see that the Docker server creates a
# masquerade rule that lets containers connect
# to IP addresses in the outside world:

$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE all  --  172.17.0.0/16        0.0.0.0/0
...
</code></pre>

<p>But if you want containers to accept incoming connections, you will need
to provide special options when invoking <code>docker run</code>. These options
are covered in more detail in the <a href="http://localhost/userguide/dockerlinks">Docker User Guide</a>
page. There are two approaches.</p>

<p>First, you can supply <code>-P</code> or <code>--publish-all=true|false</code> to <code>docker run</code> which
|
||
is a blanket operation that identifies every port with an <code>EXPOSE</code> line in the
|
||
image&rsquo;s <code>Dockerfile</code> or <code>--expose &lt;port&gt;</code> commandline flag and maps it to a
|
||
host port somewhere within an <em>ephemeral port range</em>. The <code>docker port</code> command
|
||
then needs to be used to inspect created mapping. The <em>ephemeral port range</em> is
|
||
configured by <code>/proc/sys/net/ipv4/ip_local_port_range</code> kernel parameter,
|
||
typically ranging from 32768 to 61000.</p>
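<p>The kernel parameter named above is a single line holding the two bounds of
the ephemeral range. A minimal Python sketch of reading it (the helper name is
ours, for illustration only):</p>

```python
def parse_port_range(text):
    # /proc/sys/net/ipv4/ip_local_port_range contains the two bounds of
    # the ephemeral range separated by whitespace, e.g. "32768\t61000".
    low, high = (int(field) for field in text.split())
    return low, high

# Parsing the conventional default range:
low, high = parse_port_range("32768\t61000")
print(low, high)  # 32768 61000
```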
<p>A mapping can be specified explicitly using the <code>-p SPEC</code> or <code>--publish=SPEC</code>
option. It lets you specify exactly which port on the Docker host (which can be
any port at all, not just one within the <em>ephemeral port range</em>) you want
mapped to which port in the container.</p>
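<p>The forms the publish spec can take are sketched by this simplified parser
(a hypothetical helper; it ignores protocol suffixes like <code>/udp</code> and IPv6
host addresses, which also contain colons):</p>

```python
def parse_publish_spec(spec):
    """Parse a simplified -p SPEC into (host_ip, host_port, container_port).

    Supported forms:
      "80"                 -> (None, None, 80)       # ephemeral host port
      "8080:80"            -> (None, 8080, 80)
      "127.0.0.1:8080:80"  -> ("127.0.0.1", 8080, 80)
      "127.0.0.1::80"      -> ("127.0.0.1", None, 80) # ephemeral host port
    """
    parts = spec.split(":")
    if len(parts) == 1:
        return None, None, int(parts[0])
    if len(parts) == 2:
        return None, int(parts[0]), int(parts[1])
    ip, host, container = parts
    return ip, int(host) if host else None, int(container)

print(parse_publish_spec("127.0.0.1:8080:80"))  # ('127.0.0.1', 8080, 80)
```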
<p>Either way, you should be able to peek at what Docker has accomplished
in your network stack by examining your NAT tables.</p>

<pre><code># What your NAT rules might look like when Docker
# is finished setting up a -P forward:

$ iptables -t nat -L -n
...
Chain DOCKER (2 references)
target      prot opt source              destination
DNAT        tcp  --  0.0.0.0/0           0.0.0.0/0           tcp dpt:49153 to:172.17.0.2:80

# What your NAT rules might look like when Docker
# is finished setting up a -p 80:80 forward:

Chain DOCKER (2 references)
target      prot opt source              destination
DNAT        tcp  --  0.0.0.0/0           0.0.0.0/0           tcp dpt:80 to:172.17.0.2:80
</code></pre>

<p>You can see that Docker has exposed these container ports on <code>0.0.0.0</code>,
the wildcard IP address that will match any possible incoming port on
the host machine. If you want to be more restrictive and only allow
container services to be contacted through a specific external interface
on the host machine, you have two choices. When you invoke <code>docker run</code>
you can use either <code>-p IP:host_port:container_port</code> or <code>-p IP::port</code> to
specify the external interface for one particular binding.</p>

<p>Or if you always want Docker port forwards to bind to one specific IP
address, you can edit your system-wide Docker server settings and add the
option <code>--ip=IP_ADDRESS</code>. Remember to restart your Docker server after
editing this setting.</p>
<blockquote>
<p><strong>Note</strong>:
With hairpin NAT enabled (<code>--userland-proxy=false</code>), container port exposure
is achieved purely through iptables rules, and no attempt is ever made to bind
the exposed port. This means that nothing prevents a container from shadowing a
service that was already listening on the same port outside of Docker. In such
a conflict, the iptables rules created by Docker will take precedence and route
traffic to the container.</p>
</blockquote>

<p>The <code>--userland-proxy</code> parameter, true by default, provides a userland
implementation for inter-container and outside-to-container communication. When
it is disabled, Docker instead uses an additional <code>MASQUERADE</code> iptables rule and
the <code>net.ipv4.route_localnet</code> kernel parameter, which allow the host machine to
connect to a local container&rsquo;s exposed port through the commonly used loopback
address; this alternative is preferred for performance reasons.</p>

<p>Again, this topic is covered without all of these low-level networking
details in the <a href="http://localhost/userguide/dockerlinks/">Docker User Guide</a> document if you
would like to use that as your port redirection reference instead.</p>
<h2 id="ipv6">IPv6</h2>

<p><a name="ipv6"></a></p>

<p>As we are <a href="http://en.wikipedia.org/wiki/IPv4_address_exhaustion">running out of IPv4 addresses</a>,
the IETF has standardized an IPv4 successor, <a href="http://en.wikipedia.org/wiki/IPv6">Internet Protocol Version 6</a>,
in <a href="https://www.ietf.org/rfc/rfc2460.txt">RFC 2460</a>. Both protocols, IPv4 and
IPv6, reside on layer 3 of the <a href="http://en.wikipedia.org/wiki/OSI_model">OSI model</a>.</p>
<h3 id="ipv6-with-docker">IPv6 with Docker</h3>

<p>By default, the Docker server configures the container network for IPv4 only.
You can enable IPv4/IPv6 dualstack support by running the Docker daemon with the
<code>--ipv6</code> flag. Docker will set up the bridge <code>docker0</code> with the IPv6
<a href="http://en.wikipedia.org/wiki/Link-local_address">link-local address</a> <code>fe80::1</code>.</p>

<p>By default, containers that are created will only get a link-local IPv6 address.
To assign globally routable IPv6 addresses to your containers you have to
specify an IPv6 subnet to pick the addresses from. Set the IPv6 subnet via the
<code>--fixed-cidr-v6</code> parameter when starting the Docker daemon:</p>

<pre><code>docker -d --ipv6 --fixed-cidr-v6=&quot;2001:db8:1::/64&quot;
</code></pre>

<p>The subnet for Docker containers should be at least of size <code>/80</code>. This way
an IPv6 address can end with the container&rsquo;s MAC address, which prevents NDP
neighbor cache invalidation issues in the Docker layer.</p>
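<p>The <code>/80</code> sizing works out because 80 prefix bits plus a 48-bit MAC address
fill all 128 bits of an IPv6 address. A sketch of this embedding with the Python
standard library (the helper is ours, assuming the MAC is placed directly in the
low 48 bits):</p>

```python
import ipaddress

def container_v6_address(subnet, mac):
    # Embed the container's 48-bit MAC address in the low bits of an
    # (at least) /80 prefix: 80 prefix bits + 48 MAC bits = 128 bits.
    net = ipaddress.IPv6Network(subnet)
    assert net.prefixlen <= 80, "need at least a /80 to fit a MAC"
    return net.network_address + int(mac.replace(":", ""), 16)

addr = container_v6_address("2001:db8:1::/80", "02:42:ac:11:00:03")
print(addr)  # 2001:db8:1::242:ac11:3
```

Note that the resulting address matches the container address shown in the
example output later in this section.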
<p>With the <code>--fixed-cidr-v6</code> parameter set, Docker will add a new route to the
routing table, and IPv6 forwarding will be enabled (you may prevent this by
starting the Docker daemon with <code>--ip-forward=false</code>):</p>

<pre><code>$ ip -6 route add 2001:db8:1::/64 dev docker0
$ sysctl net.ipv6.conf.default.forwarding=1
$ sysctl net.ipv6.conf.all.forwarding=1
</code></pre>

<p>All traffic to the subnet <code>2001:db8:1::/64</code> will now be routed
via the <code>docker0</code> interface.</p>

<p>Be aware that IPv6 forwarding may interfere with your existing IPv6
configuration: if you are using Router Advertisements to get IPv6 settings for
your host&rsquo;s interfaces, you should set <code>accept_ra</code> to <code>2</code>. Otherwise,
enabling IPv6 forwarding will result in Router Advertisements being rejected.
For example, if you want to configure <code>eth0</code> via Router Advertisements, you
should set:</p>

<pre><code>$ sysctl net.ipv6.conf.eth0.accept_ra=2
</code></pre>

<p><img src="http://localhost/articles/articles/article-img/ipv6_basic_host_config.svg" alt="" />
</p>
<p>Every new container will get an IPv6 address from the defined subnet.
Furthermore, a default route will be added on <code>eth0</code> in the container via the
address specified by the daemon option <code>--default-gateway-v6</code> if present,
otherwise via <code>fe80::1</code>:</p>

<pre><code>docker run -it ubuntu bash -c &quot;ip -6 addr show dev eth0; ip -6 route show&quot;

15: eth0: &lt;BROADCAST,UP,LOWER_UP&gt; mtu 1500
    inet6 2001:db8:1:0:0:242:ac11:3/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever

2001:db8:1::/64 dev eth0  proto kernel  metric 256
fe80::/64 dev eth0  proto kernel  metric 256
default via fe80::1 dev eth0  metric 1024
</code></pre>

<p>In this example the Docker container is assigned a link-local address with the
network suffix <code>/64</code> (here: <code>fe80::42:acff:fe11:3/64</code>) and a globally routable
IPv6 address (here: <code>2001:db8:1:0:0:242:ac11:3/64</code>). The container will create
connections to addresses outside of the <code>2001:db8:1::/64</code> network via the
link-local gateway at <code>fe80::1</code> on <code>eth0</code>.</p>

<p>Often servers or virtual machines are assigned a <code>/64</code> IPv6 subnet (e.g.
<code>2001:db8:23:42::/64</code>). In this case you can split it up further and provide
Docker a <code>/80</code> subnet while using a separate <code>/80</code> subnet for other
applications on the host:</p>

<p><img src="http://localhost/articles/articles/article-img/ipv6_slash64_subnet_config.svg" alt="" />
</p>

<p>In this setup the subnet <code>2001:db8:23:42::/80</code> with a range from <code>2001:db8:23:42:0:0:0:0</code>
to <code>2001:db8:23:42:0:ffff:ffff:ffff</code> is attached to <code>eth0</code>, with the host listening
at <code>2001:db8:23:42::1</code>. The subnet <code>2001:db8:23:42:1::/80</code> with an address range from
<code>2001:db8:23:42:1:0:0:0</code> to <code>2001:db8:23:42:1:ffff:ffff:ffff</code> is attached to
<code>docker0</code> and will be used by containers.</p>
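<p>The split described above can be derived with the standard library&rsquo;s
<code>ipaddress</code> module (a minimal sketch; the variable names are ours):</p>

```python
import ipaddress
from itertools import islice

# Carve the first two /80 subnets out of the host's /64 assignment:
# one for the host's own services, one for Docker (--fixed-cidr-v6).
assignment = ipaddress.IPv6Network("2001:db8:23:42::/64")
host_net, docker_net = islice(assignment.subnets(new_prefix=80), 2)
print(host_net, docker_net)  # 2001:db8:23:42::/80 2001:db8:23:42:1::/80
```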
<h4 id="using-ndp-proxying">Using NDP proxying</h4>

<p>If your Docker host is only part of an IPv6 subnet but does not have an IPv6
subnet of its own assigned, you can use NDP proxying to connect your containers
via IPv6 to the internet. For example, your host has the IPv6 address
<code>2001:db8::c001</code>, is part of the subnet <code>2001:db8::/64</code>, and your IaaS provider
allows you to configure the IPv6 addresses <code>2001:db8::c000</code> to
<code>2001:db8::c00f</code>:</p>

<pre><code>$ ip -6 addr show
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qlen 1000
    inet6 2001:db8::c001/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::601:3fff:fea1:9c01/64 scope link
       valid_lft forever preferred_lft forever
</code></pre>

<p>Let&rsquo;s split up the configurable address range into two subnets
<code>2001:db8::c000/125</code> and <code>2001:db8::c008/125</code>. The first one can be used by the
host itself, the latter by Docker:</p>
<pre><code>docker -d --ipv6 --fixed-cidr-v6 2001:db8::c008/125
</code></pre>

<p>Notice that the Docker subnet is within the subnet managed by your router,
which is connected to <code>eth0</code>. This means all devices (containers) with
addresses from the Docker subnet are expected to be found within the router
subnet, so the router thinks it can talk to these containers directly.</p>

<p><img src="http://localhost/articles/articles/article-img/ipv6_ndp_proxying.svg" alt="" />
</p>

<p>As soon as the router wants to send an IPv6 packet to the first container, it
will transmit a neighbor solicitation request, asking who has
<code>2001:db8::c009</code>. But it will get no answer, because no one on this subnet has
this address; the container with this address is hidden behind the Docker host.
The Docker host has to listen for neighbor solicitation requests for the container
address and respond that it is itself the device responsible for
the address. This is done by a kernel feature called <em>NDP proxying</em>. You can
enable it by executing</p>

<pre><code>$ sysctl net.ipv6.conf.eth0.proxy_ndp=1
</code></pre>

<p>Now you can add the container&rsquo;s IPv6 address to the NDP proxy table:</p>

<pre><code>$ ip -6 neigh add proxy 2001:db8::c009 dev eth0
</code></pre>

<p>This command tells the kernel to answer incoming neighbor solicitation requests
for the IPv6 address <code>2001:db8::c009</code> on the device <code>eth0</code>. As a
consequence, all traffic to this IPv6 address will be delivered to the Docker
host, which will forward it according to its routing table via the <code>docker0</code>
device to the container network:</p>

<pre><code>$ ip -6 route show
2001:db8::c008/125 dev docker0  metric 1
2001:db8::/64 dev eth0  proto kernel  metric 256
</code></pre>

<p>You have to execute the <code>ip -6 neigh add proxy ...</code> command for every IPv6
address in your Docker subnet; unfortunately, there is no way to add a whole
subnet with a single command.</p>
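<p>Since the proxy table needs one entry per address, the commands for a small
container subnet can be generated mechanically. A sketch (the helper name is
ours, for illustration):</p>

```python
import ipaddress

def ndp_proxy_commands(subnet, dev="eth0"):
    # The kernel NDP proxy table has no per-subnet entries, so one
    # `ip -6 neigh add proxy` command is emitted per address.
    return [f"ip -6 neigh add proxy {addr} dev {dev}"
            for addr in ipaddress.IPv6Network(subnet)]

cmds = ndp_proxy_commands("2001:db8::c008/125")
print(len(cmds))   # 8
print(cmds[1])     # ip -6 neigh add proxy 2001:db8::c009 dev eth0
```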
<h3 id="docker-ipv6-cluster">Docker IPv6 cluster</h3>

<h4 id="switched-network-environment">Switched network environment</h4>

<p>Using routable IPv6 addresses allows you to realize communication between
containers on different hosts. Let&rsquo;s have a look at a simple Docker IPv6 cluster
example:</p>

<p><img src="http://localhost/articles/articles/article-img/ipv6_switched_network_example.svg" alt="" />
</p>

<p>The Docker hosts are in the <code>2001:db8:0::/64</code> subnet. Host1 is configured
to provide addresses from the <code>2001:db8:1::/64</code> subnet to its containers. It
has three routes configured:</p>

<ul>
<li>Route all traffic to <code>2001:db8:0::/64</code> via <code>eth0</code></li>
<li>Route all traffic to <code>2001:db8:1::/64</code> via <code>docker0</code></li>
<li>Route all traffic to <code>2001:db8:2::/64</code> via Host2 with IP <code>2001:db8::2</code></li>
</ul>

<p>Host1 also acts as a router on OSI layer 3. When one of the network clients
tries to contact a target that is specified in Host1&rsquo;s routing table, Host1 will
forward the traffic accordingly. It acts as a router for all networks it knows:
<code>2001:db8::/64</code>, <code>2001:db8:1::/64</code> and <code>2001:db8:2::/64</code>.</p>

<p>On Host2 we have nearly the same configuration. Host2&rsquo;s containers will get
IPv6 addresses from <code>2001:db8:2::/64</code>. Host2 has three routes configured:</p>

<ul>
<li>Route all traffic to <code>2001:db8:0::/64</code> via <code>eth0</code></li>
<li>Route all traffic to <code>2001:db8:2::/64</code> via <code>docker0</code></li>
<li>Route all traffic to <code>2001:db8:1::/64</code> via Host1 with IP <code>2001:db8::1</code></li>
</ul>

<p>The difference from Host1 is that the network <code>2001:db8:2::/64</code> is directly
attached to Host2 via its <code>docker0</code> interface, whereas Host2 reaches
<code>2001:db8:1::/64</code> via Host1&rsquo;s IPv6 address <code>2001:db8::1</code>.</p>

<p>This way every container is able to contact every other container. The
containers <code>Container1-*</code> share the same subnet and contact each other directly.
The traffic between <code>Container1-*</code> and <code>Container2-*</code> will be routed via Host1
and Host2 because those containers do not share the same subnet.</p>

<p>In a switched environment every host has to know all routes to every subnet. You
always have to update the hosts&rsquo; routing tables whenever you add or remove a host
from the cluster.</p>

<p>Every configuration in the diagram that is shown below the dashed line is
handled by Docker: the <code>docker0</code> bridge IP address configuration, the route to
the Docker subnet on the host, the container IP addresses and the routes on the
containers. The configuration above the line is up to the user and can be
adapted to the individual environment.</p>
<h4 id="routed-network-environment">Routed network environment</h4>

<p>In a routed network environment you replace the layer 2 switch with a layer 3
router. Now the hosts just have to know their default gateway (the router) and
the route to their own containers (managed by Docker). The router holds all
routing information about the Docker subnets. When you add or remove a host in
this environment, you just have to update the routing table in the router, not
on every host.</p>

<p><img src="http://localhost/articles/articles/article-img/ipv6_routed_network_example.svg" alt="" />
</p>

<p>In this scenario containers of the same host can communicate directly with each
other. The traffic between containers on different hosts will be routed via
their hosts and the router. For example, a packet from <code>Container1-1</code> to
<code>Container2-1</code> will be routed through <code>Host1</code>, <code>Router</code> and <code>Host2</code> until it
arrives at <code>Container2-1</code>.</p>

<p>To keep the IPv6 addresses short in this example, a <code>/48</code> network is assigned to
every host. Each host uses one <code>/64</code> subnet of this for its own services and one
for Docker. When adding a third host you would add a route for the subnet
<code>2001:db8:3::/48</code> in the router and configure Docker on Host3 with
<code>--fixed-cidr-v6=2001:db8:3:1::/64</code>.</p>

<p>Remember that the subnet for Docker containers should be at least of size <code>/80</code>.
This way an IPv6 address can end with the container&rsquo;s MAC address and you
prevent NDP neighbor cache invalidation issues in the Docker layer. So if you
have a <code>/64</code> for your whole environment, use <code>/76</code> subnets for the hosts and
<code>/80</code> for the containers. This way you can use 4096 hosts with 16 <code>/80</code> subnets
each.</p>

<p>Every configuration in the diagram that is visualized below the dashed line is
handled by Docker: the <code>docker0</code> bridge IP address configuration, the route to
the Docker subnet on the host, the container IP addresses and the routes on the
containers. The configuration above the line is up to the user and can be
adapted to the individual environment.</p>
<h2 id="customizing-docker0">Customizing docker0</h2>

<p><a name="docker0"></a></p>

<p>By default, the Docker server creates and configures the host system&rsquo;s
<code>docker0</code> interface as an <em>Ethernet bridge</em> inside the Linux kernel that
can pass packets back and forth between other physical or virtual
network interfaces so that they behave as a single Ethernet network.</p>

<p>Docker configures <code>docker0</code> with an IP address, netmask and IP
allocation range, so that the host machine can both receive and send packets to
containers connected to the bridge. Docker also gives the bridge an MTU — the
<em>maximum transmission unit</em>, or largest packet length that the interface will
allow — of either 1,500 bytes or else a more specific value copied from
the Docker host&rsquo;s interface that supports its default route. These
options are configurable at server startup:</p>
<ul>
<li><p><code>--bip=CIDR</code> — supply a specific IP address and netmask for the
<code>docker0</code> bridge, using standard CIDR notation like
<code>192.168.1.5/24</code>.</p></li>

<li><p><code>--fixed-cidr=CIDR</code> — restrict the IP range from the <code>docker0</code> subnet,
using standard CIDR notation like <code>172.167.1.0/28</code>. This range must
be an IPv4 range for fixed IPs (ex: 10.20.0.0/16) and must be a subset
of the bridge IP range (<code>docker0</code> or set using <code>--bridge</code>). For example
with <code>--fixed-cidr=192.168.1.0/25</code>, IPs for your containers will be chosen
from the first half of the <code>192.168.1.0/24</code> subnet.</p></li>

<li><p><code>--mtu=BYTES</code> — override the maximum packet length on <code>docker0</code>.</p></li>
</ul>
<p>Once you have one or more containers up and running, you can confirm
that Docker has properly connected them to the <code>docker0</code> bridge by
running the <code>brctl</code> command on the host machine and looking at the
<code>interfaces</code> column of the output. Here is a host with two different
containers connected:</p>

<pre><code># Display bridge info

$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.3a1d7362b4ee       no              veth65f9
                                                        vethdda6
</code></pre>

<p>If the <code>brctl</code> command is not installed on your Docker host, then on
Ubuntu you should be able to run <code>sudo apt-get install bridge-utils</code> to
install it.</p>
<p>Finally, the <code>docker0</code> Ethernet bridge settings are used every time you
create a new container. Docker selects a free IP address from the range
available on the bridge each time you <code>docker run</code> a new container, and
configures the container&rsquo;s <code>eth0</code> interface with that IP address and the
bridge&rsquo;s netmask. The Docker host&rsquo;s own IP address on the bridge is
used as the default gateway by which each container reaches the rest of
the Internet.</p>

<pre><code># The network, as seen from a container

$ docker run -i -t --rm base /bin/bash

$$ ip addr show eth0
24: eth0: &lt;BROADCAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::306f:e0ff:fe35:5791/64 scope link
       valid_lft forever preferred_lft forever

$$ ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.3

$$ exit
</code></pre>

<p>Remember that the Docker host will not be willing to forward container
packets out on to the Internet unless its <code>ip_forward</code> system setting is
<code>1</code> — see the section above on <a href="#between-containers">Communication between
containers</a> for details.</p>
<h2 id="building-your-own-bridge">Building your own bridge</h2>

<p><a name="bridge-building"></a></p>

<p>If you want to take Docker out of the business of creating its own
Ethernet bridge entirely, you can set up your own bridge before starting
Docker and use <code>-b BRIDGE</code> or <code>--bridge=BRIDGE</code> to tell Docker to use
your bridge instead. If you already have Docker up and running with its
old <code>docker0</code> still configured, you will probably want to begin by
stopping the service and removing the interface:</p>

<pre><code># Stopping Docker and removing docker0

$ sudo service docker stop
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0
$ sudo iptables -t nat -F POSTROUTING
</code></pre>

<p>Then, before starting the Docker service, create your own bridge and
give it whatever configuration you want. Here we will create a simple
enough bridge that we really could just have used the options in the
previous section to customize <code>docker0</code>, but it will be enough to
illustrate the technique.</p>
<pre><code># Create our own bridge

$ sudo brctl addbr bridge0
$ sudo ip addr add 192.168.5.1/24 dev bridge0
$ sudo ip link set dev bridge0 up

# Confirming that our bridge is up and running

$ ip addr show bridge0
4: bridge0: &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state UP group default
    link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.1/24 scope global bridge0
       valid_lft forever preferred_lft forever

# Tell Docker about it and restart (on Ubuntu)

$ echo 'DOCKER_OPTS=&quot;-b=bridge0&quot;' | sudo tee -a /etc/default/docker
$ sudo service docker start

# Confirming new outgoing NAT masquerade is set up

$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target      prot opt source              destination
MASQUERADE  all  --  192.168.5.0/24      0.0.0.0/0
</code></pre>

<p>The result should be that the Docker server starts successfully and is
now prepared to bind containers to the new bridge. After pausing to
verify the bridge&rsquo;s configuration, try creating a container — you will
see that its IP address is in your new IP address range, which Docker
will have auto-detected.</p>

<p>Just as we learned in the previous section, you can use the <code>brctl show</code>
command to see Docker add and remove interfaces from the bridge as you
start and stop containers, and can run <code>ip addr</code> and <code>ip route</code> inside a
container to see that it has been given an address in the bridge&rsquo;s IP
address range and has been told to use the Docker host&rsquo;s IP address on
the bridge as its default gateway to the rest of the Internet.</p>
<h2 id="how-docker-networks-a-container">How Docker networks a container</h2>

<p><a name="container-networking"></a></p>

<p>While Docker is under active development and continues to tweak and
improve its network configuration logic, the shell commands in this
section are rough equivalents to the steps that Docker takes when
configuring networking for each new container.</p>

<p>Let&rsquo;s review a few basics.</p>

<p>To communicate using the Internet Protocol (IP), a machine needs access
to at least one network interface at which packets can be sent and
received, and a routing table that defines the range of IP addresses
reachable through that interface. Network interfaces do not have to be
physical devices. In fact, the <code>lo</code> loopback interface available on
every Linux machine (and inside each Docker container) is entirely
virtual — the Linux kernel simply copies loopback packets directly from
the sender&rsquo;s memory into the receiver&rsquo;s memory.</p>

<p>Docker uses special virtual interfaces to let containers communicate
with the host machine — pairs of virtual interfaces called “peers” that
are linked inside of the host machine&rsquo;s kernel so that packets can
travel between them. They are simple to create, as we will see in a
moment.</p>
<p>The steps with which Docker configures a container are:</p>

<ol>
<li><p>Create a pair of peer virtual interfaces.</p></li>

<li><p>Give one of them a unique name like <code>veth65f9</code>, keep it inside of
the main Docker host, and bind it to <code>docker0</code> or whatever bridge
Docker is supposed to be using.</p></li>

<li><p>Toss the other interface over the wall into the new container (which
will already have been provided with an <code>lo</code> interface) and rename
it to the much prettier name <code>eth0</code> since, inside of the container&rsquo;s
separate and unique network interface namespace, there are no
physical interfaces with which this name could collide.</p></li>

<li><p>Set the interface&rsquo;s MAC address according to the <code>--mac-address</code>
parameter or generate a random one.</p></li>

<li><p>Give the container&rsquo;s <code>eth0</code> a new IP address from within the
bridge&rsquo;s range of network addresses. The default route is set to the
IP address passed to the Docker daemon using the <code>--default-gateway</code>
option if specified, otherwise to the IP address that the Docker host
owns on the bridge. The MAC address is generated from the IP address
unless otherwise specified. This prevents ARP cache invalidation
problems when a new container comes up with an IP address that was
previously used by another container with a different MAC address.</p></li>
</ol>
<p>With these steps complete, the container now possesses an <code>eth0</code>
(virtual) network card and will find itself able to communicate with
other containers and the rest of the Internet.</p>

<p>You can opt out of the above process for a particular container by
giving the <code>--net=</code> option to <code>docker run</code>, which takes four possible
values.</p>
<ul>
<li><p><code>--net=bridge</code> — The default action, that connects the container to
the Docker bridge as described above.</p></li>

<li><p><code>--net=host</code> — Tells Docker to skip placing the container inside of
a separate network stack. In essence, this choice tells Docker to
<strong>not containerize the container&rsquo;s networking</strong>! While container
processes will still be confined to their own filesystem and process
list and resource limits, a quick <code>ip addr</code> command will show you
that, network-wise, they live “outside” in the main Docker host and
have full access to its network interfaces. Note that this does
<strong>not</strong> let the container reconfigure the host network stack — that
would require <code>--privileged=true</code> — but it does let container
processes open low-numbered ports like any other root process.
It also allows the container to access local network services
like D-bus. This can lead to processes in the container being
able to do unexpected things like
<a href="https://github.com/docker/docker/issues/6401">restart your computer</a>.
You should use this option with caution.</p></li>

<li><p><code>--net=container:NAME_or_ID</code> — Tells Docker to put this container&rsquo;s
processes inside of the network stack that has already been created
inside of another container. The new container&rsquo;s processes will be
confined to their own filesystem and process list and resource
limits, but will share the same IP address and port numbers as the
first container, and processes on the two containers will be able to
connect to each other over the loopback interface.</p></li>

<li><p><code>--net=none</code> — Tells Docker to put the container inside of its own
network stack but not to take any steps to configure its network,
leaving you free to build any of the custom configurations explored
in the last few sections of this document.</p></li>
</ul>
<p>To get an idea of the steps that are necessary if you use <code>--net=none</code>
as described in that last bullet point, here are the commands that you
would run to reach roughly the same configuration as if you had let
Docker do all of the configuration:</p>

<pre><code># At one shell, start a container and
# leave its shell idle and running

$ docker run -i -t --rm --net=none base /bin/bash
root@63f36fc01b5f:/#

# At another shell, learn the container process ID
# and create its namespace entry in /var/run/netns/
# for the &quot;ip netns&quot; command we will be using below

$ docker inspect -f '{{.State.Pid}}' 63f36fc01b5f
2778
$ pid=2778
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid

# Check the bridge's IP address and netmask

$ ip addr show docker0
21: docker0: ...
    inet 172.17.42.1/16 scope global docker0
...

# Create a pair of &quot;peer&quot; interfaces A and B,
# bind the A end to the bridge, and bring it up

$ sudo ip link add A type veth peer name B
$ sudo brctl addif docker0 A
$ sudo ip link set A up

# Place B inside the container's network namespace,
# rename to eth0, and activate it with a free IP

$ sudo ip link set B netns $pid
$ sudo ip netns exec $pid ip link set dev B name eth0
$ sudo ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc
$ sudo ip netns exec $pid ip link set eth0 up
$ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
$ sudo ip netns exec $pid ip route add default via 172.17.42.1
</code></pre>
<p>At this point your container should be able to perform networking
|
||
operations as usual.</p>
|
||
|
||
<p>When you finally exit the shell and Docker cleans up the container, the
|
||
network namespace is destroyed along with our virtual <code>eth0</code> — whose
|
||
destruction in turn destroys interface <code>A</code> out in the Docker host and
|
||
automatically un-registers it from the <code>docker0</code> bridge. So everything
|
||
gets cleaned up without our having to run any extra commands! Well,
|
||
almost everything:</p>
|
||
|
||
<pre><code># Clean up dangling symlinks in /var/run/netns
|
||
|
||
find -L /var/run/netns -type l -delete
|
||
</code></pre>
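
<p>If you are curious why that <code>find</code> invocation removes only the
dangling links: <code>-L</code> makes <code>find</code> follow symlinks, so a link
whose target has vanished still tests as a symlink (<code>-type l</code>) and is
deleted, while a healthy link resolves to its target and is left alone. Here is a
quick sketch you can run anywhere; the paths are made up purely for the
demonstration:</p>

<pre><code>$ tmp=$(mktemp -d)
$ touch $tmp/target
$ ln -s $tmp/target $tmp/alive       # valid symlink
$ ln -s $tmp/gone $tmp/dangling      # dangling symlink
$ find -L $tmp -type l -delete
$ ls $tmp
alive  target
$ rm -r $tmp
</code></pre>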

<p>Also note that while the script above used the modern <code>ip</code> command instead
of old deprecated wrappers like <code>ifconfig</code> and <code>route</code>, these older
commands would also have worked inside of our container. The <code>ip addr</code>
command can be typed as <code>ip a</code> if you are in a hurry.</p>

<p>Finally, note the importance of the <code>ip netns exec</code> command, which let
us reach inside and configure a network namespace as root. The same
commands would not have worked if run inside of the container, because
part of safe containerization is that Docker strips container processes
of the right to configure their own networks. Using <code>ip netns exec</code> is
what let us finish up the configuration without having to take the
dangerous step of running the container itself with <code>--privileged=true</code>.</p>

<h2 id="tools-and-examples">Tools and examples</h2>

<p>Before diving into the following sections on custom network topologies,
you might be interested in glancing at a few external tools or examples
of the same kinds of configuration. Here are two:</p>

<ul>
<li><p>Jérôme Petazzoni has created a <code>pipework</code> shell script to help you
connect together containers in arbitrarily complex scenarios:
<a href="https://github.com/jpetazzo/pipework">https://github.com/jpetazzo/pipework</a></p></li>

<li><p>Brandon Rhodes has created a whole network topology of Docker
containers for the next edition of Foundations of Python Network
Programming that includes routing, NAT&rsquo;d firewalls, and servers that
offer HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP:
<a href="https://github.com/brandon-rhodes/fopnp/tree/m/playground">https://github.com/brandon-rhodes/fopnp/tree/m/playground</a></p></li>
</ul>

<p>Both tools use networking commands very much like the ones you saw in
the previous section, and will see in the following sections.</p>

<h2 id="building-a-point-to-point-connection">Building a point-to-point connection</h2>

<p><a name="point-to-point"></a></p>

<p>By default, Docker attaches all containers to the virtual subnet
implemented by <code>docker0</code>. You can create containers that are each
connected to some different virtual subnet by creating your own bridge
as shown in <a href="#bridge-building">Building your own bridge</a>, starting each
container with <code>docker run --net=none</code>, and then attaching the
containers to your bridge with the shell commands shown in <a href="#container-networking">How Docker
networks a container</a>.</p>

<p>But sometimes you want two particular containers to be able to
communicate directly without the added complexity of both being bound to
a host-wide Ethernet bridge.</p>

<p>The solution is simple: when you create your pair of peer interfaces,
simply throw <em>both</em> of them into containers, and configure them as
classic point-to-point links. The two containers will then be able to
communicate directly (provided you manage to tell each container the
other&rsquo;s IP address, of course). You might adjust the instructions of
the previous section to go something like this:</p>

<pre><code># Start up two containers in two terminal windows

$ docker run -i -t --rm --net=none base /bin/bash
root@1f1f4c1f931a:/#

$ docker run -i -t --rm --net=none base /bin/bash
root@12e343489d2f:/#

# Learn the container process IDs
# and create their namespace entries

$ docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a
2989
$ docker inspect -f '{{.State.Pid}}' 12e343489d2f
3004
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/2989/ns/net /var/run/netns/2989
$ sudo ln -s /proc/3004/ns/net /var/run/netns/3004

# Create the &quot;peer&quot; interfaces and hand them out

$ sudo ip link add A type veth peer name B

$ sudo ip link set A netns 2989
$ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A
$ sudo ip netns exec 2989 ip link set A up
$ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A

$ sudo ip link set B netns 3004
$ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B
$ sudo ip netns exec 3004 ip link set B up
$ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B
</code></pre>

<p>The two containers should now be able to ping each other and make
connections successfully. Point-to-point links like this do not depend
on a subnet nor a netmask, but on the bare assertion made by <code>ip route</code>
that some other single IP address is connected to a particular network
interface.</p>

<p>Note that point-to-point links can be safely combined with other kinds
of network connectivity — there is no need to start the containers with
<code>--net=none</code> if you want point-to-point links to be an addition to the
container&rsquo;s normal networking instead of a replacement.</p>

<p>A final permutation of this pattern is to create the point-to-point link
between the Docker host and one container, which would allow the host to
communicate with that one container on some single IP address and thus
communicate “out-of-band” of the bridge that connects the other, more
usual containers. But unless you have very specific networking needs
that drive you to such a solution, it is probably far preferable to use
<code>--icc=false</code> to lock down inter-container communication, as we explored
earlier.</p>

<h2 id="editing-networking-config-files">Editing networking config files</h2>

<p>Starting with Docker v1.2.0, you can now edit <code>/etc/hosts</code>, <code>/etc/hostname</code>
and <code>/etc/resolv.conf</code> in a running container. This is useful if you need
to install BIND or other services that might override one of those files.</p>

<p>Note, however, that changes to these files will not be saved by
<code>docker commit</code>, nor will they be saved during <code>docker run</code>.
That means they won&rsquo;t be saved in the image, nor will they persist when a
container is restarted; they will only &ldquo;stick&rdquo; in a running container.</p>
</description>
</item>

<item>
<title>Using Puppet</title>
<link>http://localhost/articles/puppet/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>

<guid>http://localhost/articles/puppet/</guid>
<description>

<h1 id="using-puppet">Using Puppet</h1>

<blockquote>
<p><em>Note:</em> This is a community-contributed installation path. The
only <code>official</code> installation is using the
<a href="http://localhost/articles/articles/installation/ubuntulinux"><em>Ubuntu</em></a> installation
path. This version may sometimes be out of date.</p>
</blockquote>

<h2 id="requirements">Requirements</h2>

<p>To use this guide you&rsquo;ll need a working installation of Puppet from
<a href="https://puppetlabs.com">Puppet Labs</a>.</p>

<p>The module also currently uses the official PPA, so it only works with
Ubuntu.</p>

<h2 id="installation">Installation</h2>

<p>The module is available on the <a href="https://forge.puppetlabs.com/garethr/docker/">Puppet
Forge</a> and can be
installed using the built-in module tool.</p>

<pre><code>$ puppet module install garethr/docker
</code></pre>

<p>It can also be found on
<a href="https://github.com/garethr/garethr-docker">GitHub</a> if you would rather
download the source.</p>

<h2 id="usage">Usage</h2>

<p>The module provides a Puppet class for installing Docker and two defined
types for managing images and containers.</p>

<h3 id="installation-1">Installation</h3>

<pre><code>include 'docker'
</code></pre>

<h3 id="images">Images</h3>

<p>The next step is probably to install a Docker image. For this, we have a
defined type which can be used like so:</p>

<pre><code>docker::image { 'ubuntu': }
</code></pre>

<p>This is equivalent to running:</p>

<pre><code>$ docker pull ubuntu
</code></pre>

<p>Note that the image will only be downloaded if an image of that name does not
already exist. Because this downloads a large binary, the first run can
take a while; for that reason, this define turns off the default 5-minute
timeout for the exec type. Note that you can also remove images you no
longer need with:</p>

<pre><code>docker::image { 'ubuntu':
  ensure =&gt; 'absent',
}
</code></pre>

<h3 id="containers">Containers</h3>

<p>Now that you have an image, you can run commands within a container
managed by Docker.</p>

<pre><code>docker::run { 'helloworld':
  image   =&gt; 'ubuntu',
  command =&gt; '/bin/sh -c &quot;while true; do echo hello world; sleep 1; done&quot;',
}
</code></pre>

<p>This is equivalent to running the following command, but under Upstart:</p>

<pre><code>$ docker run -d ubuntu /bin/sh -c &quot;while true; do echo hello world; sleep 1; done&quot;
</code></pre>

<p><code>docker::run</code> also accepts a number of optional parameters:</p>

<pre><code>docker::run { 'helloworld':
  image        =&gt; 'ubuntu',
  command      =&gt; '/bin/sh -c &quot;while true; do echo hello world; sleep 1; done&quot;',
  ports        =&gt; ['4444', '4555'],
  volumes      =&gt; ['/var/lib/couchdb', '/var/log'],
  volumes_from =&gt; '6446ea52fbc9',
  memory_limit =&gt; 10485760, # bytes
  username     =&gt; 'example',
  hostname     =&gt; 'example.com',
  env          =&gt; ['FOO=BAR', 'FOO2=BAR2'],
  dns          =&gt; ['8.8.8.8', '8.8.4.4'],
}
</code></pre>
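
<p>As a side note, the <code>memory_limit</code> above is expressed in bytes; a
quick shell calculation (ours, not part of the module) confirms that 10485760
is exactly 10 MiB:</p>

<pre><code>$ echo $((10 * 1024 * 1024))
10485760
</code></pre>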

<blockquote>
<p><em>Note:</em>
The <code>ports</code>, <code>env</code>, <code>dns</code> and <code>volumes</code> attributes can be set with either a single
string or as above with an array of values.</p>
</blockquote>
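
<p>One pattern worth noting — a sketch of ours, not taken from the module&rsquo;s
own documentation: because Puppet does not order unrelated resources by default,
you may want to state explicitly that an image is pulled before a container that
uses it is started. The standard <code>require</code> metaparameter expresses
that dependency; the resource names here are just examples:</p>

<pre><code>docker::image { 'ubuntu': }

docker::run { 'helloworld':
  image   =&gt; 'ubuntu',
  command =&gt; '/bin/sh -c &quot;while true; do echo hello world; sleep 1; done&quot;',
  require =&gt; Docker::Image['ubuntu'],
}
</code></pre>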

</description>
</item>

</channel>
</rss>