Copyedit Dockerfile best practices doc (#6806)
Gwendolynne Barr, 2018-06-02
redirect_from:
title: Best practices for writing Dockerfiles
---
This document covers recommended best practices and methods for building
efficient images.

Docker builds images automatically by reading the instructions from a
`Dockerfile` -- a text file that contains all commands, in order, needed to
build a given image. A `Dockerfile` adheres to a specific format and set of
instructions which you can find at [Dockerfile reference](/engine/reference/builder/).

A Docker image consists of read-only layers, each of which represents a
Dockerfile instruction. The layers are stacked on top of each other, and each
one is a diff from the previous layer. Consider this `Dockerfile`:
```conf
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
```
Each instruction creates one layer:
- `FROM` creates a layer from the `ubuntu:15.04` Docker image.
- `COPY` adds files from your Docker client's current directory.
- `RUN` builds your application with `make`.
- `CMD` specifies what command to run within the container.
When you run an image and generate a container, you add a new _writable layer_
(the "container layer") on top of the underlying layers. All changes made to
the running container, such as writing new files, modifying existing files, and
deleting files, are written to this thin writable container layer.
For more on image layers (and how Docker builds and stores images), see
[About storage drivers](/storage/storagedriver/).
## General guidelines and recommendations

### Create ephemeral containers

The image defined by your `Dockerfile` should generate containers that are as
ephemeral as possible. By “ephemeral,” we mean that the container can be stopped
and destroyed, then rebuilt and replaced with an absolute minimum of set-up and
configuration.

Refer to [Processes](https://12factor.net/processes) under _The Twelve-Factor App_
methodology to get a feel for the motivations of running containers in such a
stateless fashion.
### Understand build context

When you issue a `docker build` command, the current working directory is called
the _build context_. By default, the Dockerfile is assumed to be located here,
but you can specify a different location with the file flag (`-f`). Regardless
of where the `Dockerfile` actually lives, all recursive contents of files and
directories in the current directory are sent to the Docker daemon as the build
context.
> Build context example
>
> Create a directory for the build context and `cd` into it. Write "hello" into
> a text file named `hello` and create a Dockerfile that runs `cat` on it. Build
> the image from within the build context (`.`):
>
> ```shell
> mkdir myproject && cd myproject
> echo "hello" > hello
> printf 'FROM busybox\nCOPY /hello /\nRUN cat /hello' > Dockerfile
> docker build -t helloapp:v1 .
> ```
>
> Move `Dockerfile` and `hello` into separate directories and build a second
> version of the image (without relying on cache from the last build). Use `-f`
> to point to the Dockerfile and specify the directory of the build context:
>
> ```shell
> mkdir -p dockerfiles context
> mv Dockerfile dockerfiles && mv hello context
> docker build --no-cache -t helloapp:v2 -f dockerfiles/Dockerfile context
> ```
Inadvertently including files that are not necessary for building an image
results in a larger build context and larger image size. This can increase the
time to build the image, time to pull and push it, and the container runtime
size. To see how big your build context is, look for a message like this when
building your `Dockerfile`:

```none
Sending build context to Docker daemon  187.8MB
```
### Exclude with .dockerignore

To exclude files not relevant to the build (without restructuring your source
repository), use a `.dockerignore` file. This file supports exclusion patterns
similar to `.gitignore` files. For information on creating one, see the
[.dockerignore file](/engine/reference/builder.md#dockerignore-file).
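For example, a `.dockerignore` such as the following keeps version-control
metadata and local build output out of the context (the patterns here are
illustrative, not prescriptive):

```none
.git
*.tmp
build/
```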
### Use multi-stage builds

[Multi-stage builds](multistage-build.md) (in [Docker 17.05](/release-notes/docker-ce/#17050-ce-2017-05-04) or higher)
allow you to drastically reduce the size of your final image, without struggling
to reduce the number of intermediate layers and files.

Because an image is built during the final stage of the build process, you can
minimize image layers by [leveraging build cache](#leverage-build-cache).

For example, if your build contains several layers, you can order them from the
less frequently changed (to ensure the build cache is reusable) to the more
frequently changed:
* Install tools you need to build your application
* Install or update library dependencies
* Generate your application
A Dockerfile for a Go application could look like:
```Dockerfile
FROM golang:1.9.2-alpine3.6 AS build

# Install tools required for project
# Run `docker build --no-cache .` to update dependencies
RUN apk add --no-cache git
RUN go get github.com/golang/dep/cmd/dep

# List project dependencies with Gopkg.toml and Gopkg.lock
# These layers are only re-built when Gopkg files are updated
COPY Gopkg.lock Gopkg.toml /go/src/project/
WORKDIR /go/src/project/
# Install library dependencies
RUN dep ensure -vendor-only

# Copy the entire project and build it
# This layer is rebuilt when a file changes in the project directory
COPY . /go/src/project/
RUN go build -o /bin/project
# This results in a single layer image
FROM scratch
COPY --from=build /bin/project /bin/project
ENTRYPOINT ["/bin/project"]
CMD ["--help"]
```
### Don't install unnecessary packages

To reduce complexity, dependencies, file sizes, and build times, avoid
installing extra or unnecessary packages just because they might be “nice to
have.” For example, you don't need to include a text editor in a database image.
### Decouple applications

Each container should have only one concern. Decoupling applications into
multiple containers makes it easier to scale horizontally and reuse containers.
For instance, a web application stack might consist of three separate
containers, each with its own unique image, to manage the web application,
database, and an in-memory cache in a decoupled manner.

Limiting each container to one process is a good rule of thumb, but it is not a
hard and fast rule. For example, not only can containers be
[spawned with an init process](/engine/reference/run.md#specifying-an-init-process),
but some programs might also spawn additional processes of their own accord. For
instance, [Celery](http://www.celeryproject.org/) can spawn multiple worker
processes, and [Apache](https://httpd.apache.org/) can create one process per
request.

Use your best judgment to keep containers as clean and modular as possible. If
containers depend on each other, you can use [Docker container networks](/engine/userguide/networking/)
to ensure that these containers can communicate.
### Minimize the number of layers

In older versions of Docker, it was important to minimize the number of
layers in your images to ensure they were performant. The following features
were added to reduce this limitation:

- In Docker 1.10 and higher, only the instructions `RUN`, `COPY`, and `ADD`
  create layers. Other instructions create temporary intermediate images, and
  do not directly increase the size of the build.

- In Docker 17.05 and higher, you can do [multi-stage builds](multistage-build.md)
  and only copy the artifacts you need into the final image. This allows you to
  include tools and debug information in your intermediate build stages without
  increasing the size of the final image.
### Sort multi-line arguments

Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This helps to avoid duplication of packages and makes the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.
Here's an example from the [`buildpack-deps` image](https://github.com/docker-library/buildpack-deps):

```
RUN apt-get update && apt-get install -y \
  bzr \
  cvs \
  git \
  mercurial \
  subversion
```
### Leverage build cache

When building an image, Docker steps through the instructions in your
`Dockerfile`, executing each in the order specified. As each instruction is
examined, Docker looks for an existing image in its cache that it can reuse,
rather than creating a new (duplicate) image.

If you do not want to use the cache at all, you can use the `--no-cache=true`
option on the `docker build` command. However, if you do let Docker use its
cache, it is important to understand when it can, and cannot, find a matching
image. The basic rules that Docker follows are outlined below:
- Starting with a parent image that is already in the cache, the next
  instruction is compared against all child images derived from that base
  image to see if one of them was built using the exact same instruction. If
  not, the cache is invalidated.

- In most cases, simply comparing the instruction in the `Dockerfile` with one
  of the child images is sufficient. However, certain instructions require more
  examination and explanation.

- For the `ADD` and `COPY` instructions, the contents of the file(s)
  in the image are examined and a checksum is calculated for each file.
  The last-modified and last-accessed times of the file(s) are not considered in
  these checksums. During the cache lookup, the checksum is compared against the
  checksum in the existing images. If anything has changed in the file(s), such
  as the contents and metadata, then the cache is invalidated.

- Aside from the `ADD` and `COPY` commands, cache checking does not look at the
  files in the container to determine a cache match. For example, when processing
  a `RUN apt-get -y update` command, the files updated in the container
  are not examined to determine if a cache hit exists. In that case, just
  the command string itself is used to find a match.
Once the cache is invalidated, all subsequent `Dockerfile` commands generate new
images and the cache is not used.

## Dockerfile instructions

These recommendations are designed to help you create an efficient and
maintainable `Dockerfile`.
### FROM

[Dockerfile reference for the FROM instruction](/engine/reference/builder.md#from)

Whenever possible, use current official repositories as the basis for your
images. We recommend the [Alpine image](https://hub.docker.com/_/alpine/) as it
is tightly controlled and small in size (currently under 5 MB), while still
being a full Linux distribution.
### LABEL

[Dockerfile reference for the LABEL instruction](/engine/reference/builder.md#label)

You can add labels to your image to help organize images by project, record
licensing information, to aid in automation, or for other reasons. For each
label, add a line beginning with `LABEL` and with one or more key-value pairs.
The following examples show the different acceptable formats. Explanatory
comments are included inline.

> Strings with spaces must be quoted **or** the spaces must be escaped. Inner
> quote characters (`"`) must also be escaped.
```conf
# Set one or more individual labels
LABEL com.example.version="0.0.1-beta"
```

For information about querying labels, refer to the items related to filtering
in [Managing labels on objects](/config/labels-custom-metadata.md#managing-labels-on-objects).

### RUN
[Dockerfile reference for the RUN instruction](/engine/reference/builder.md#run)

Split long or complex `RUN` statements on multiple lines separated with
backslashes to make your `Dockerfile` more readable, understandable, and
maintainable.
#### apt-get

Probably the most common use-case for `RUN` is an application of `apt-get`.
Because it installs packages, the `RUN apt-get` command has several gotchas to
look out for.

Avoid `RUN apt-get upgrade` and `dist-upgrade`, as many of the “essential”
packages from the parent images cannot upgrade inside an
[unprivileged container](/engine/reference/run.md#security-configuration). If a package
contained in the parent image is out-of-date, contact its maintainers. If you
know there is a particular package, `foo`, that needs to be updated, use
`apt-get install -y foo` to update automatically.
Always combine `RUN apt-get update` with `apt-get install` in the same `RUN`
statement. For example:
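The combined statement might look like the following sketch, where the package
names are placeholders:

```Dockerfile
RUN apt-get update && apt-get install -y \
    package-bar \
    package-baz \
    package-foo
```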
Using `apt-get update` alone in a `RUN` statement causes caching issues and
subsequent `apt-get install` instructions fail. For example, say you have a
Dockerfile:
    FROM ubuntu:14.04
    RUN apt-get update
    RUN apt-get install -y curl

After building the image, all layers are in the Docker cache. Suppose you later
modify `apt-get install` by adding an extra package:
    RUN apt-get install -y curl nginx
Docker sees the initial and modified instructions as identical and reuses the
cache from previous steps. As a result, the `apt-get update` is _not_ executed
because the build uses the cached version. Because the `apt-get update` is not
run, your build can potentially get an outdated version of the `curl` and
`nginx` packages.
Using `RUN apt-get update && apt-get install -y` ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as "cache busting". You can also achieve
cache-busting by specifying a package version. This is known as version pinning,
for example, `apt-get install -y package-foo=1.3.*`.

Cleaning up the apt cache and removing `/var/lib/apt/lists` also
reduces the image size, since the apt cache is not stored in a layer. Since the
`RUN` statement starts with `apt-get update`, the package cache is always
refreshed prior to `apt-get install`.
> Official Debian and Ubuntu images [automatically run `apt-get clean`](https://github.com/moby/moby/blob/03e2923e42446dbb830c654d0eec323a0b4ef02a/contrib/mkimage/debootstrap#L82-L105),
> so explicit invocation is not required.
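Putting cache busting, version pinning, and cache cleanup together, a `RUN`
instruction might look like this sketch (the package names and the pinned
version are placeholders):

```Dockerfile
RUN apt-get update && apt-get install -y \
    package-bar \
    package-baz \
    package-foo=1.3.* \
 && rm -rf /var/lib/apt/lists/*
```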
#### Using pipes

Some `RUN` commands depend on the ability to pipe the output of one command into
another, using the pipe character (`|`), as in the following example:

```Dockerfile
RUN wget -O - https://some.site | wc -l > /number
```
Docker executes these commands using the `/bin/sh -c` interpreter, which only
evaluates the exit code of the last operation in the pipe to determine success.
In the example above, this build step succeeds and produces a new image so long
as the `wc -l` command succeeds, even if the `wget` command fails.

If you want the command to fail due to an error at any stage in the pipe,
prepend `set -o pipefail &&` to ensure that an unexpected error prevents the
build from inadvertently succeeding. For example:
```Dockerfile
RUN set -o pipefail && wget -O - https://some.site | wc -l > /number
```
> Not all shells support the `-o pipefail` option.
>
> In such cases (such as the `dash` shell, which is the default shell on
> Debian-based images), consider using the _exec_ form of `RUN` to explicitly
> choose a shell that does support the `pipefail` option. For example:
>
> ```Dockerfile
> RUN ["/bin/bash", "-c", "set -o pipefail && wget -O - https://some.site | wc -l > /number"]
> ```
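You can see the difference directly, outside of any Dockerfile. This demo
assumes `bash` is available, since `dash` lacks `pipefail`:

```shell
#!/bin/bash
# By default, a pipe's exit status is that of its last command,
# so the failure of `false` is masked by the success of `wc -l`:
false | wc -l > /dev/null
echo "default: $?"

# With pipefail, any failing stage makes the whole pipe fail:
set -o pipefail
false | wc -l > /dev/null
echo "pipefail: $?"
```

Running it prints `default: 0` followed by `pipefail: 1`.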
### CMD

[Dockerfile reference for the CMD instruction](/engine/reference/builder.md#cmd)

The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the form
of `CMD ["executable", "param1", "param2"…]`. Thus, if the image is for a
service, such as Apache or Rails, you would run something like
`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is
recommended for any service-based image.

In most other cases, `CMD` should be given an interactive shell, such as bash,
python, or perl. For example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD ["php", "-a"]`. Using this form means that when you execute something like
`docker run -it python`, you'll get dropped into a usable shell, ready to go.

`CMD` should rarely be used in the manner of `CMD ["param", "param"]` in
conjunction with [`ENTRYPOINT`](/engine/reference/builder.md#entrypoint), unless
you and your expected users are already quite familiar with how `ENTRYPOINT`
works.
### ADD or COPY

[Dockerfile reference for the ADD instruction](/engine/reference/builder.md#add)
[Dockerfile reference for the COPY instruction](/engine/reference/builder.md#copy)

The best use for `ADD` is local tar file
auto-extraction into the image, as in `ADD rootfs.tar.xz /`.
If you have multiple `Dockerfile` steps that use different files from your
context, `COPY` them individually, rather than all at once. This ensures that
each step's build cache is only invalidated (forcing the step to be re-run) if
the specifically required files change.

For example:
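A Python project that tracks dependencies in a `requirements.txt` (an
assumption for illustration) might copy that file before the rest of the
source:

```Dockerfile
# Changes to application code do not invalidate the dependency layer
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
COPY . /tmp/
```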
### ENTRYPOINT

[Dockerfile reference for the ENTRYPOINT instruction](/engine/reference/builder.md#entrypoint)

A helper script can serve as the `ENTRYPOINT` and configure the container at
runtime. For example, a Postgres image might use a script like this:

```bash
#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    exec gosu postgres "$@"
fi

exec "$@"
```
> Configure app as PID 1
>
> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec)
> so that the final running application becomes the container's PID 1. This
> allows the application to receive any Unix signals sent to the container.
> For more, see the [`ENTRYPOINT` reference](/engine/reference/builder.md#entrypoint).
The helper script is copied into the container and run via `ENTRYPOINT` on
container start:
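Assuming the script is saved as `docker-entrypoint.sh` next to the Dockerfile,
the corresponding instructions might be:

```Dockerfile
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]
```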
### USER

If a service can run without privileges, use `USER` to change to a non-root
user. Start by creating the user and group in the `Dockerfile` with something
like `RUN groupadd -r postgres && useradd --no-log-init -r -g postgres postgres`.
> Consider an explicit UID/GID
>
> Users and groups in an image are assigned a non-deterministic UID/GID in that
> the “next” UID/GID is assigned regardless of image rebuilds. So, if it's
> critical, you should assign an explicit UID/GID.
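A sketch of pinning an explicit UID/GID, where `999` is an arbitrary example
ID:

```Dockerfile
RUN groupadd -r -g 999 postgres \
 && useradd --no-log-init -r -u 999 -g postgres postgres
```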
> Due to an [unresolved bug](https://github.com/golang/go/issues/13548) in the
> Go archive/tar package's handling of sparse files, attempting to create a user
> with a sufficiently large UID inside a Docker container can lead to disk
> exhaustion because `/var/log/faillog` in the container layer is filled with
> NUL (\0) characters. A workaround is to pass the `--no-log-init` flag to
> useradd. The Debian/Ubuntu `adduser` wrapper does not support this flag.
Avoid installing or using `sudo` as it has unpredictable TTY and
signal-forwarding behavior that can cause problems. If you absolutely need
functionality similar to `sudo`, such as initializing the daemon as `root` but
running it as non-`root`, consider using
[“gosu”](https://github.com/tianon/gosu).

Lastly, to reduce layers and complexity, avoid switching `USER` back and forth
frequently.
### WORKDIR

[Dockerfile reference for the WORKDIR instruction](/engine/reference/builder.md#workdir)

For clarity and reliability, you should always use absolute paths for your
`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating instructions
like `RUN cd … && do-something`, which are hard to read, troubleshoot, and
maintain.
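For instance, a hypothetical build step written both ways:

```Dockerfile
# Harder to read and troubleshoot:
RUN cd /usr/src/app && make

# Clearer: set the working directory once, with an absolute path
WORKDIR /usr/src/app
RUN make
```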
### ONBUILD