Merge pull request #20150 from dvdksn/build-best-practices

consolidate best practices
This commit is contained in:
David Karlsson 2024-06-07 17:42:40 +02:00 committed by GitHub
commit 5ec193b333
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
33 changed files with 264 additions and 587 deletions

View File

@@ -110,5 +110,5 @@ There are lots of resources available to help you write your `Dockerfile`.
* There's a [complete guide to all the instructions](../../reference/dockerfile.md) available for use in a `Dockerfile` in the reference section.
* To help you write a clear, readable, maintainable `Dockerfile`, we've also
written a [Dockerfile best practices guide](../../develop/develop-images/dockerfile_best-practices.md).
written a [Dockerfile best practices guide](../../build/building/best-practices.md).
* If your goal is to create a new Docker Official Image, read [Docker Official Images](../../trusted-content/official-images/_index.md).

View File

@@ -1,11 +1,255 @@
---
description: Hints, tips and guidelines for writing clean, reliable Dockerfile instructions
keywords: parent image, images, dockerfile, best practices, hub, official image
title: Best practices for Dockerfile instructions
description: Hints, tips and guidelines for writing clean, reliable Dockerfiles
keywords: base images, dockerfile, best practices, hub, official image
title: Building best practices
tags: [Best practices]
aliases:
- /articles/dockerfile_best-practices/
- /engine/articles/dockerfile_best-practices/
- /docker-cloud/getting-started/intermediate/optimize-dockerfiles/
- /docker-cloud/tutorials/optimize-dockerfiles/
- /engine/userguide/eng-image/dockerfile_best-practices/
- /develop/develop-images/dockerfile_best-practices/
- /develop/develop-images/guidelines/
- /develop/develop-images/instructions/
- /develop/dev-best-practices/
- /develop/security-best-practices/
---
These recommendations are designed to help you create an efficient and
maintainable Dockerfile.
## Use multi-stage builds
Multi-stage builds let you reduce the size of your final image, by creating a
cleaner separation between the building of your image and the final output.
Split your Dockerfile instructions into distinct stages to make sure that the
resulting output only contains the files that are needed to run the application.
Using multiple stages can also let you build more efficiently by executing
build steps in parallel.
See [Multi-stage builds](../../build/building/multi-stage.md) for more
information.
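For example, here is a minimal sketch of a two-stage build for a hypothetical Go application (the module path, binary name, and final base image are illustrative assumptions, not part of this guide):

```dockerfile
# syntax=docker/dockerfile:1

# Build stage: contains the Go toolchain and source code
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Final stage: contains only the compiled binary
FROM gcr.io/distroless/static:nonroot
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```

The final image contains the application binary but none of the compiler toolchain or source code from the build stage.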
### Create reusable stages
If you have multiple images with a lot in common, consider creating a reusable
stage that includes the shared components, and basing your unique stages on
that. Docker only needs to build the common stage once. This means that your
derivative images use memory on the Docker host more efficiently and load more
quickly.
It's also easier to maintain a common base stage ("Don't repeat yourself"),
than it is to have multiple different stages doing similar things.
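As a sketch, a shared base stage for a hypothetical Node.js project might look like the following (the stage names and file layout are assumptions):

```dockerfile
# syntax=docker/dockerfile:1

# Shared stage with common dependencies, built only once
FROM node:20 AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Each derivative stage builds on the shared base
FROM base AS web
COPY web/ ./web
CMD ["node", "web/server.js"]

FROM base AS worker
COPY worker/ ./worker
CMD ["node", "worker/index.js"]
```

You can then build a specific image with `docker build --target web .` or `--target worker`, and the `base` stage is shared between them.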
## Choose the right base image
The first step towards achieving a secure image is to choose the right base
image. When choosing an image, ensure it's built from a trusted source and keep
it small.
- [Docker Official Images](https://hub.docker.com/search?image_filter=official)
are some of the most secure and dependable images on Docker Hub. Typically,
Docker Official images have few or no packages containing CVEs, and are
thoroughly reviewed by Docker and project maintainers.
- [Verified Publisher](https://hub.docker.com/search?image_filter=store) images
are high-quality images published and maintained by the organizations
partnering with Docker, with Docker verifying the authenticity of the content
in their repositories.
- [Docker-Sponsored Open Source](https://hub.docker.com/search?image_filter=open_source)
are published and maintained by open source projects sponsored by Docker
through an [open source program](../../trusted-content/dsos-program).
When you pick your base image, look out for the badges indicating that the
image is part of these programs.
![Docker Hub Official and Verified Publisher images](../images/hub-official-images.webp)
When building your own image from a Dockerfile, ensure you choose a minimal base
image that matches your requirements. A smaller base image not only offers
portability and fast downloads, but also shrinks the size of your image and
minimizes the number of vulnerabilities introduced through the dependencies.
You should also consider using two types of base image: one for building and
unit testing, and another (typically slimmer) image for production. In the
later stages of development, your image may not require build tools such as
compilers, build systems, and debugging tools. A small image with minimal
dependencies can considerably lower the attack surface.
## Rebuild your images often
Docker images are immutable. Building an image is taking a snapshot of that
image at that moment. That includes any base images, libraries, or other
software you use in your build. To keep your images up-to-date and secure, make
sure to rebuild your image often, with updated dependencies.
To ensure that you're getting the latest versions of dependencies in your build,
you can use the `--no-cache` option to avoid cache hits.
```console
$ docker build --no-cache -t my-image:my-tag .
```
The following Dockerfile uses the `24.04` tag of the `ubuntu` image. Over time,
that tag may resolve to a different underlying version of the `ubuntu` image,
as the publisher rebuilds the image with new security patches and updated
libraries. Using the `--no-cache` flag, you can avoid cache hits and ensure a fresh
download of base images and dependencies.
```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:24.04
RUN apt-get -y update && apt-get install -y python
```
Also consider [pinning base image versions](#pin-base-image-versions).
## Exclude with .dockerignore
To exclude files not relevant to the build, without restructuring your source
repository, use a `.dockerignore` file. This file supports exclusion patterns
similar to `.gitignore` files. For information on creating one, see
[Dockerignore file](../../build/building/context.md#dockerignore-files).
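For illustration, a typical `.dockerignore` for a hypothetical Node.js project might look like this (the entries are assumptions about your repository layout):

```text
# Version control and editor metadata
.git
.gitignore

# Local dependency and build output directories
node_modules
dist

# Environment files that may contain secrets
.env
```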
## Create ephemeral containers
The image defined by your Dockerfile should generate containers that are as
ephemeral as possible. Ephemeral means that the container can be stopped
and destroyed, then rebuilt and replaced with an absolute minimum of setup and
configuration.
Refer to [Processes](https://12factor.net/processes) under _The Twelve-factor App_
methodology to get a feel for the motivations of running containers in such a
stateless fashion.
## Don't install unnecessary packages
Avoid installing extra or unnecessary packages just because they might be nice to have. For example, you don't need to include a text editor in a database image.
When you avoid installing extra or unnecessary packages, your images have reduced complexity, reduced dependencies, reduced file sizes, and reduced build times.
## Decouple applications
Each container should have only one concern. Decoupling applications into
multiple containers makes it easier to scale horizontally and reuse containers.
For instance, a web application stack might consist of three separate
containers, each with its own unique image, to manage the web application,
database, and an in-memory cache in a decoupled manner.
Limiting each container to one process is a good rule of thumb, but it's not a
hard and fast rule. For example, not only can containers be
[spawned with an init process](../../engine/reference/run.md#specify-an-init-process),
some programs might spawn additional processes of their own accord. For
instance, [Celery](https://docs.celeryproject.org/) can spawn multiple worker
processes, and [Apache](https://httpd.apache.org/) can create one process per
request.
Use your best judgment to keep containers as clean and modular as possible. If
containers depend on each other, you can use [Docker container networks](../../network/index.md)
to ensure that these containers can communicate.
## Sort multi-line arguments
Whenever possible, sort multi-line arguments alphanumerically to make maintenance easier.
This helps to avoid duplication of packages and makes the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.
Here's an example from the [buildpack-deps image](https://github.com/docker-library/buildpack-deps):
```dockerfile
RUN apt-get update && apt-get install -y \
bzr \
cvs \
git \
mercurial \
subversion \
&& rm -rf /var/lib/apt/lists/*
```
## Leverage build cache
When building an image, Docker steps through the instructions in your
Dockerfile, executing each in the order specified. For each instruction, Docker
checks whether it can reuse the instruction from the build cache.
Understanding how the build cache works, and how cache invalidation occurs,
is critical for ensuring faster builds.
For more information about the Docker build cache and how to optimize your builds,
see [Docker build cache](../../build/cache/_index.md).
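One common cache-friendly pattern, sketched here for a hypothetical Node.js application, is to copy the dependency manifests and install dependencies before copying the rest of the source, so that source-only changes don't invalidate the dependency layer:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20
WORKDIR /app

# Copy only the dependency manifests first. This layer, and the
# npm install layer below it, are reused from cache as long as
# the manifests are unchanged.
COPY package*.json ./
RUN npm ci

# Source code changes invalidate the cache only from this point on.
COPY . .
CMD ["node", "server.js"]
```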
## Pin base image versions
Image tags are mutable, meaning a publisher can update a tag to point to a new
image. This is useful because it lets publishers update tags to point to
newer versions of an image. And as an image consumer, it means you
automatically get the new version when you re-build your image.
For example, if you specify `FROM alpine:3.19` in your Dockerfile, `3.19`
resolves to the latest patch version for `3.19`.
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19
```
At one point in time, the `3.19` tag might point to version 3.19.1 of the
image. If you rebuild the image 3 months later, the same tag might point to a
different version, such as 3.19.4. This publishing workflow is best practice,
and most publishers use this tagging strategy, but it isn't enforced.
The downside with this is that you're not guaranteed to get the same image for
every build. This could result in breaking changes, and it means you also don't
have an audit trail of the exact image versions that you're using.
To fully secure your supply chain integrity, you can pin the image version to a
specific digest. By pinning your images to a digest, you're guaranteed to
always use the same image version, even if a publisher replaces the tag with a
new image. For example, the following Dockerfile pins the Alpine image to the
same tag as earlier, `3.19`, but this time with a digest reference as well.
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19@sha256:13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd
```
With this Dockerfile, even if the publisher updates the `3.19` tag, your builds
would still use the pinned image version:
`13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd`.
While this helps you avoid unexpected changes, it's also more tedious to have
to look up and include the image digest for base image versions manually each
time you want to update it. And you're opting out of automated security fixes,
which you likely want to receive.
Docker Scout has a built-in [**Outdated base images**
policy](../../scout/policy/_index.md#outdated-base-images) that checks for
whether the base image version you're using is in fact the latest version. This
policy also checks if pinned digests in your Dockerfile correspond to the
correct version. If a publisher updates an image that you've pinned, the policy
evaluation returns a non-compliant status, indicating that you should update
your image.
Docker Scout also supports an automated remediation workflow for keeping your
base images up-to-date. When a new image digest is available, Docker Scout can
automatically raise a pull request on your repository to update your
Dockerfiles to use the latest version. This is better than using a tag that
changes the version automatically, because you're in control and you have an
audit trail of when and how the change occurred.
For more information about automatically updating your base images with Docker
Scout, see
[Remediation](../../scout/policy/remediation.md#automatic-base-image-updates).
## Build and test your images in CI
When you check in a change to source control or create a pull request, use
[GitHub Actions](../ci/github-actions/_index.md) or another CI/CD pipeline to
automatically build and tag a Docker image and test it.
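As a sketch, a minimal GitHub Actions workflow using the `docker/build-push-action` action might look like the following (the image name and trigger branch are placeholders):

```yaml
name: ci
on:
  push:
    branches: ["main"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: false
          tags: my-image:${{ github.sha }}
```

A real pipeline would typically add a test step that runs the built image, and push the tagged image to a registry on success.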
## Dockerfile instructions
Follow these recommendations on how to properly use the [Dockerfile instructions](../../reference/dockerfile.md)
to create an efficient and maintainable Dockerfile.
### FROM

View File

@@ -71,7 +71,7 @@ work at build time.
Related information:
- [Docker build cache](../cache/_index.md)
- [Dockerfile best practices](../../develop/develop-images/dockerfile_best-practices.md)
- [Dockerfile best practices](../../build/building/best-practices.md)
## Next steps

View File

Binary image changed (Before: 33 KiB | After: 33 KiB)

View File

@@ -117,7 +117,7 @@ Make sure you have:
>Check that the `Dockerfile` has no file extension like `.txt`. Some editors may append this file extension automatically which results in an error when you run the application.
{ .important }
For more information on how to write Dockerfiles, see the [Docker user guide](../develop/index.md) and the [Dockerfile reference](/reference/dockerfile/).
For more information on how to write Dockerfiles, see the [Dockerfile reference](/reference/dockerfile/).
## Step 2: Define services in a Compose file

View File

@@ -53,4 +53,3 @@ $ sudo pacman -S gnome-terminal
- Take a look at the [Get started](../../get-started/index.md) training modules to learn how to build an image and run it as a containerized application.
- [Explore Docker Desktop](../use-desktop/index.md) and all its features.
- Review the topics in [Develop with Docker](../../develop/index.md) to learn how to build new applications using Docker.

View File

@@ -85,4 +85,3 @@ $ sudo apt-get install ./docker-desktop-<version>-<arch>.deb
- Take a look at the [Get started](../../get-started/_index.md) training modules to learn how to build an image and run it as a containerized application.
- [Explore Docker Desktop](../use-desktop/index.md) and all its features.
- Review the topics in [Develop with Docker](../../develop/index.md) to learn how to build new applications using Docker.

View File

@@ -74,4 +74,3 @@ $ sudo dnf install ./docker-desktop-<version>-<arch>.rpm
- Take a look at the [Get started](../../get-started/_index.md) training modules to learn how to build an image and run it as a containerized application.
- [Explore Docker Desktop](../use-desktop/index.md) and all its features.
- Review the topics in [Develop with Docker](../../develop/index.md) to learn how to build new applications using Docker.

View File

@@ -136,4 +136,3 @@ $ sudo dnf install ./docker-desktop-<version>-<arch>-rhel.rpm
- Take a look at the [Get started](../../get-started/_index.md) training modules to learn how to build an image and run it as a containerized application.
- [Explore Docker Desktop](../use-desktop/index.md) and all its features.
- Review the topics in [Develop with Docker](../../develop/index.md) to learn how to build new applications using Docker.

View File

@@ -89,4 +89,3 @@ $ sudo apt-get install ./docker-desktop-<version>-<arch>.deb
- Take a look at the [Get started](../../get-started/_index.md) training modules to learn how to build an image and run it as a containerized application.
- [Explore Docker Desktop](../use-desktop/index.md) and all its features.
- Review the topics in [Develop with Docker](../../develop/index.md) to learn how to build new applications using Docker.

View File

@@ -1,40 +0,0 @@
---
title: Develop with Docker
description: Overview of developer resources
keywords: developer, developing, apps, api, sdk
---
This page contains a list of resources for application developers who would like to build new applications using Docker.
## Prerequisites
Work through the learning modules in [Get started](../get-started/index.md) to understand how to build an image and run it as a containerized application.
## Develop new apps on Docker
If you're just getting started developing a brand new app on Docker, check out
these resources to understand some of the most common patterns for getting the
most benefits from Docker.
- Learn how to [build an image](../reference/dockerfile.md) using a Dockerfile
- Use [multi-stage builds](../build/building/multi-stage.md) to keep your images lean
- Manage application data using [volumes](../storage/volumes.md) and [bind mounts](../storage/bind-mounts.md)
- [Scale your app with Kubernetes](../get-started/kube-deploy.md)
- [Scale your app as a Swarm service](../get-started/swarm-deploy.md)
- [General application development best practices](dev-best-practices.md)
## Learn about language-specific app development with Docker
- [Docker for Java developers lab](https://github.com/docker/labs/tree/master/developer-tools/java/)
- [Port a node.js app to Docker lab](https://github.com/docker/labs/tree/master/developer-tools/nodejs/porting)
- [Ruby on Rails app on Docker lab](https://github.com/docker/labs/tree/master/developer-tools/ruby)
- [Dockerize a .Net Core application](../language/dotnet/index.md)
- [ASP.NET Core application with SQL Server](https://github.com/docker/awesome-compose/tree/master/aspnet-mssql) using Docker Compose
## Advanced development with the SDK or API
After you can write Dockerfiles and Compose files and use the Docker CLI, take
it to the next level by using the Docker Engine SDKs for Go and Python, or the
HTTP API directly. Visit the [Develop with Docker Engine API](../engine/api/index.md)
section to learn more about developing with the Engine API, where to find SDKs
for your programming language of choice, and to see some examples.

View File

@@ -1,101 +0,0 @@
---
title: Docker development best practices
description: Rules of thumb for making your life easier as a Docker application developer
keywords: application, development
tags: [Best practices]
---
The following development patterns have proven to be helpful for people
building applications with Docker.
<!-- markdownlint-disable-next-line -->
If you have discovered something we should add, [let us know]({{% param "repo" %}}/issues/new?template=doc_issue.yml&labels=status%2Ftriage).
## How to keep your images small
Small images are faster to pull over the network and faster to load into
memory when starting containers or services. There are a few rules of thumb to
keep image size small:
- Start with an appropriate base image. For instance, if you need a JDK,
consider basing your image on a Docker Official Image which includes OpenJDK,
such as `eclipse-temurin`, rather than building your own image from scratch.
- [Use multistage builds](../build/building/multi-stage.md). For
instance, you can use the `maven` image to build your Java application, then
reset to the `tomcat` image and copy the Java artifacts into the correct
location to deploy your app, all in the same Dockerfile. This means that your
final image doesn't include all of the libraries and dependencies pulled in by
the build, but only the artifacts and the environment needed to run them.
- If you need to use a version of Docker that does not include multistage
builds, try to reduce the number of layers in your image by minimizing the
number of separate `RUN` commands in your Dockerfile. You can do this by
consolidating multiple commands into a single `RUN` line and using your
shell's mechanisms to combine them. Consider the following two
fragments. The first creates two layers in the image, while the second
only creates one.
```dockerfile
RUN apt-get -y update
RUN apt-get install -y python
```
```dockerfile
RUN apt-get -y update && apt-get install -y python
```
- If you have multiple images with a lot in common, consider creating your own
[base image](../build/building/base-images.md) with the shared
components, and basing your unique images on that. Docker only needs to load
the common layers once, and they are cached. This means that your
derivative images use memory on the Docker host more efficiently and load more
quickly.
- To keep your production image lean but allow for debugging, consider using the
production image as the base image for the debug image. Additional testing or
debugging tooling can be added on top of the production image.
- When building images, always tag them with useful tags which codify version
information, intended destination (`prod` or `test`, for instance), stability,
or other information that's useful when deploying the application in
different environments. Don't rely on the automatically-created `latest` tag.
## Where and how to persist application data
- Avoid storing application data in your container's writable layer using
[storage drivers](../storage/storagedriver/select-storage-driver.md). This increases the
size of your container and is less efficient from an I/O perspective than
using volumes or bind mounts.
- Instead, store data using [volumes](../storage/volumes.md).
- One case where it's appropriate to use
[bind mounts](../storage/bind-mounts.md) is during development,
when you may want to mount your source directory or a binary you just built
into your container. For production, use a volume instead, mounting it into
the same location as you mounted a bind mount during development.
- For production, use [secrets](../engine/swarm/secrets.md) to store sensitive
application data used by services, and use [configs](../engine/swarm/configs.md)
for non-sensitive data such as configuration files. If you currently use
standalone containers, consider migrating to use single-replica services, so
that you can take advantage of these service-only features.
## Use CI/CD for testing and deployment
- When you check in a change to source control or create a pull request, use
[Docker Hub](../docker-hub/builds/index.md) or
another CI/CD pipeline to automatically build and tag a Docker image and test
it.
- Take this even further by requiring your development, testing, and
security teams to [sign images](../reference/cli/docker/trust/_index.md)
before the teams deploy the images into production. This way, before an image is
deployed into production, it has been tested and signed off by, for instance,
development, quality, and security teams.
## Differences in development and production environments
| Development | Production |
|:--------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Use bind mounts to give your container access to your source code. | Use volumes to store container data. |
| Use Docker Desktop for Mac, Linux, or Windows. | Use Docker Engine, if possible with [userns mapping](../engine/security/userns-remap.md) for greater isolation of Docker processes from host processes. |

View File

@@ -1,64 +0,0 @@
---
description: Overview of a Dockerfile and introduction to best practices
keywords: parent image, images, dockerfile, best practices, hub, official image
title: Overview of best practices for writing Dockerfiles
aliases:
- /articles/dockerfile_best-practices/
- /engine/articles/dockerfile_best-practices/
- /docker-cloud/getting-started/intermediate/optimize-dockerfiles/
- /docker-cloud/tutorials/optimize-dockerfiles/
- /engine/userguide/eng-image/dockerfile_best-practices/
tags: [Best practices]
---
This topic covers recommended best practices and methods for building
efficient images. It provides [general guidelines for your Dockerfiles](guidelines.md) and more [specific best practices for each Dockerfile instruction](instructions.md).
## What is a Dockerfile?
Docker builds images automatically by reading the instructions from a
Dockerfile, a text file that contains all the commands, in order, needed to
build a given image. A Dockerfile adheres to a specific format and set of
instructions which you can find at [Dockerfile reference](../../reference/dockerfile.md).
A Docker image consists of read-only layers each of which represents a
Dockerfile instruction. The layers are stacked and each one is a delta of the
changes from the previous layer.
```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
COPY . /app
RUN make /app
CMD python /app/app.py
```
In the example above, each instruction creates one layer:
- `FROM` creates a layer from the `ubuntu:22.04` Docker image.
- `COPY` adds files from your Docker client's current directory.
- `RUN` builds your application with `make`.
- `CMD` specifies what command to run within the container.
When you run an image and generate a container, you add a new writable layer, also called the container layer, on top of the underlying layers. All changes made to
the running container, such as writing new files, modifying existing files, and
deleting files, are written to this writable container layer.
## Additional resources
* [Dockerfile reference](../../reference/dockerfile.md)
* [More about Automated builds](../../docker-hub/builds/index.md)
* [Guidelines for creating Docker Official Images](../../trusted-content/official-images/_index.md)
* [Best practices to containerize Node.js web applications with Docker](https://snyk.io/blog/10-best-practices-to-containerize-nodejs-web-applications-with-docker)
* [More about base images](../../build/building/base-images.md)
* [More on image layers and how Docker builds and stores images](../../storage/storagedriver/index.md).
## Examples of Docker Official Images
These Official Images have exemplary Dockerfiles:
* [Go](https://hub.docker.com/_/golang/)
* [Perl](https://hub.docker.com/_/perl/)
* [Hy](https://hub.docker.com/_/hylang/)
* [Ruby](https://hub.docker.com/_/ruby/)

View File

@@ -1,156 +0,0 @@
---
description: Hints, tips and guidelines for writing clean, reliable Dockerfiles
keywords: parent image, images, dockerfile, best practices, hub, official image
title: General best practices for writing Dockerfiles
tags: [Best practices]
---
## Use multi-stage builds
Multi-stage builds let you reduce the size of your final image, by creating a
cleaner separation between the building of your image and the final output.
Split your Dockerfile instructions into distinct stages to make sure that the
resulting output only contains the files that are needed to run the application.
Using multiple stages can also let you build more efficiently by executing
build steps in parallel.
See [Multi-stage builds](../../build/building/multi-stage.md) for more
information.
## Exclude with .dockerignore
To exclude files not relevant to the build, without restructuring your source
repository, use a `.dockerignore` file. This file supports exclusion patterns
similar to `.gitignore` files. For information on creating one, see
[Dockerignore file](../../build/building/context.md#dockerignore-files).
## Create ephemeral containers
The image defined by your Dockerfile should generate containers that are as
ephemeral as possible. Ephemeral means that the container can be stopped
and destroyed, then rebuilt and replaced with an absolute minimum of setup and
configuration.
Refer to [Processes](https://12factor.net/processes) under _The Twelve-factor App_
methodology to get a feel for the motivations of running containers in such a
stateless fashion.
## Don't install unnecessary packages
Avoid installing extra or unnecessary packages just because they might be nice to have. For example, you don't need to include a text editor in a database image.
When you avoid installing extra or unnecessary packages, your images have reduced complexity, reduced dependencies, reduced file sizes, and reduced build times.
## Decouple applications
Each container should have only one concern. Decoupling applications into
multiple containers makes it easier to scale horizontally and reuse containers.
For instance, a web application stack might consist of three separate
containers, each with its own unique image, to manage the web application,
database, and an in-memory cache in a decoupled manner.
Limiting each container to one process is a good rule of thumb, but it's not a
hard and fast rule. For example, not only can containers be
[spawned with an init process](../../engine/reference/run.md#specify-an-init-process),
some programs might spawn additional processes of their own accord. For
instance, [Celery](https://docs.celeryproject.org/) can spawn multiple worker
processes, and [Apache](https://httpd.apache.org/) can create one process per
request.
Use your best judgment to keep containers as clean and modular as possible. If
containers depend on each other, you can use [Docker container networks](../../network/index.md)
to ensure that these containers can communicate.
## Sort multi-line arguments
Whenever possible, sort multi-line arguments alphanumerically to make maintenance easier.
This helps to avoid duplication of packages and makes the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.
Here's an example from the [buildpack-deps image](https://github.com/docker-library/buildpack-deps):
```dockerfile
RUN apt-get update && apt-get install -y \
bzr \
cvs \
git \
mercurial \
subversion \
&& rm -rf /var/lib/apt/lists/*
```
## Leverage build cache
When building an image, Docker steps through the instructions in your
Dockerfile, executing each in the order specified. For each instruction, Docker
checks whether it can reuse the instruction from the build cache.
Understanding how the build cache works, and how cache invalidation occurs,
is critical for ensuring faster builds.
For more information about the Docker build cache and how to optimize your builds,
see [Docker build cache](../../build/cache/_index.md).
## Pin base image versions
Image tags are mutable, meaning a publisher can update a tag to point to a new
image. This is useful because it lets publishers update tags to point to
newer versions of an image. And as an image consumer, it means you
automatically get the new version when you re-build your image.
For example, if you specify `FROM alpine:3.19` in your Dockerfile, `3.19`
resolves to the latest patch version for `3.19`.
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19
```
At one point in time, the `3.19` tag might point to version 3.19.1 of the
image. If you rebuild the image 3 months later, the same tag might point to a
different version, such as 3.19.4. This publishing workflow is best practice,
and most publishers use this tagging strategy, but it isn't enforced.
The downside with this is that you're not guaranteed to get the same image for
every build. This could result in breaking changes, and it means you also don't
have an audit trail of the exact image versions that you're using.
To fully secure your supply chain integrity, you can pin the image version to a
specific digest. By pinning your images to a digest, you're guaranteed to
always use the same image version, even if a publisher replaces the tag with a
new image. For example, the following Dockerfile pins the Alpine image to the
same tag as earlier, `3.19`, but this time with a digest reference as well.
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19@sha256:13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd
```
With this Dockerfile, even if the publisher updates the `3.19` tag, your builds
would still use the pinned image version:
`13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd`.
While this helps you avoid unexpected changes, it's also more tedious to look
up and include the image digest for base image versions manually each time you
want to update them. And by pinning, you're opting out of automated security
fixes, which you likely want to receive.
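If you do pin manually, you can look up the digest that a tag currently resolves to with the `docker buildx imagetools inspect` command. For example (the output shown here is illustrative):

```console
$ docker buildx imagetools inspect alpine:3.19
Name:      docker.io/library/alpine:3.19
MediaType: application/vnd.oci.image.index.v1+json
Digest:    sha256:13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd
```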
Docker Scout has a built-in [**Outdated base images**
policy](../../scout/policy/_index.md#outdated-base-images) that checks
whether the base image version you're using is in fact the latest version. This
policy also checks if pinned digests in your Dockerfile correspond to the
correct version. If a publisher updates an image that you've pinned, the policy
evaluation returns a non-compliant status, indicating that you should update
your image.
Docker Scout also supports an automated remediation workflow for keeping your
base images up-to-date. When a new image digest is available, Docker Scout can
automatically raise a pull request on your repository to update your
Dockerfiles to use the latest version. This is better than using a tag that
changes the version automatically, because you're in control and you have an
audit trail of when and how the change occurred.
For more information about automatically updating your base images with Docker
Scout, see
[Remediation](../../scout/policy/remediation.md#automatic-base-image-updates).
View File
@ -1,162 +0,0 @@
---
title: Security best practices
description: Image security best practices guide
keywords: docker, images, containers, vulnerability, cve
aliases:
- /develop/scan-images/
tags: [Best practices]
---
You can take a few steps to improve the security of your
containers. These include:
1. [Choosing the right base image](#choose-the-right-base-image) from a trusted source and keeping it small
2. [Using multi-stage builds](#use-multi-stage-builds)
3. [Rebuilding images](#rebuild-images)
4. [Checking your image for vulnerabilities](#check-your-image-for-vulnerabilities)
### Choose the right base image
The first step towards achieving a secure image is to choose the right base
image. When choosing an image, ensure it's built from a trusted source and keep
it small.
Docker Hub has more than 8.3 million repositories. Some of these images are
[Official Images](../trusted-content/official-images/_index.md), which are published by
Docker as a curated set of Docker open source and drop-in solution repositories.
Docker also offers images that are published by
[Verified Publishers](../trusted-content/dvp-program.md). These high-quality images
are published and maintained by the organizations partnering with Docker, with
Docker verifying the authenticity of the content in their repositories. When you
pick your base image, look out for the **Official Image** and **Verified Publisher**
badges.
![Docker Hub Official and Verified Publisher images](images/hub-official-images.webp)
When building your own image from a Dockerfile, ensure you choose a minimal base
image that matches your requirements. A smaller base image not only offers
portability and fast downloads, but also shrinks the size of your image and
minimizes the number of vulnerabilities introduced through the dependencies.
You should also consider using two types of base images: one for
development and unit testing, and another for testing during the later
stages of development and for production. In the later stages of development, your
image may not require build tools such as compilers, build systems, and
debugging tools. A small image with minimal dependencies can considerably
lower the attack surface.
### Use multi-stage builds
Multi-stage builds are designed to create an optimized Dockerfile that is easy
to read and maintain. With a multi-stage build, you can use multiple images and
selectively copy only the artifacts needed from a particular image.
You can use multiple `FROM` statements in your Dockerfile, and you can use a
different base image for each `FROM`. You can also selectively copy artifacts
from one stage to another, leaving behind things you don't need in the final
image. This can result in a concise final image.
This method of creating a tiny image not only significantly reduces
complexity, but also reduces the chance of including vulnerable artifacts in your
image. Instead of building images on top of images that are in turn
built on other images, multi-stage builds let you cherry-pick your
artifacts without inheriting the vulnerabilities from the base images on which
they rely.
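For example, a multi-stage build for a Go application might look like the following sketch, where only the compiled binary is copied into a small final image (the module path and binary name are illustrative):

```dockerfile
# syntax=docker/dockerfile:1

# Build stage: includes the Go toolchain and build dependencies
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Final stage: a minimal base image containing only the compiled binary
FROM alpine:3.19
COPY --from=build /bin/app /usr/local/bin/app
ENTRYPOINT ["app"]
```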
For detailed information on how to configure multi-stage builds, see
[multi-stage builds](../build/building/multi-stage.md).
### Rebuild images
A Docker image is built from a Dockerfile. A Dockerfile contains a set of
instructions that automate the steps you would normally take manually to
create an image, such as importing libraries and installing custom software.
A built image is a snapshot at that moment in time. When
you depend on a base image without a specific tag, you'll get a different base image
every time you rebuild. Also, when you install packages using a package
manager, rebuilding can change the image drastically. For example, a
Dockerfile containing the following entries can potentially produce a different
binary with every rebuild.
```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:latest
RUN apt-get -y update && apt-get install -y python3
```
Docker recommends that you rebuild your Docker image regularly to pick up
fixes for known vulnerabilities. When rebuilding, use the
`--no-cache` option to avoid cache hits and to ensure a fresh download.
For example:
```console
$ docker build --no-cache -t my-image:my-tag my-path/
```
Consider the following best practices when rebuilding an image:
- Each container should have only one responsibility.
- Containers should be immutable, lightweight, and fast.
- Don't store data in your containers. Use a shared data store instead.
- Containers should be easy to destroy and rebuild.
- Use a small base image (such as Alpine Linux). Smaller images are easier to
  distribute.
- Avoid installing unnecessary packages. This keeps the image clean and safe.
- Avoid cache hits when rebuilding, so you get fresh packages and security updates.
- Auto-scan your image before deploying to avoid pushing vulnerable containers
to production.
- Analyze your images daily both during development and production for
vulnerabilities. Based on that, automate the rebuilding of images if necessary.
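As a minimal sketch, a Dockerfile that applies several of these recommendations, using a small pinned base image, only the required packages, and a non-root user (the package and command are illustrative):

```dockerfile
# syntax=docker/dockerfile:1

# Small base image, pinned to a specific version
FROM alpine:3.19

# Install only what the application needs; --no-cache avoids
# leaving the package index in the image
RUN apk add --no-cache python3

# Run as a non-root user
USER nobody
CMD ["python3", "-m", "http.server", "8080"]
```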
For detailed best practices and methods for building efficient images, see
[Dockerfile best practices](develop-images/dockerfile_best-practices.md).
### Check your image for vulnerabilities
In addition to following the best practices outlined on this page when
developing images, it's also important to continuously analyze and evaluate the
security posture of your images using vulnerability detection tools.
Docker tools come with features that help you to stay up to date about vulnerabilities
that affect images that you build or use.
- Docker Hub supports an automatic
  [vulnerability scanning](../docker-hub/vulnerability-scanning.md) feature,
  which, when enabled, automatically scans images when you push them to a Docker Hub
  repository. This requires a [Docker subscription](../subscription/index.md).
- Docker Hub also supports an early-access
[advanced image analysis](../scout/image-analysis.md) feature, which extends
the "core" vulnerability scanning solution with enhanced capabilities and more
detailed and actionable insights.
- For the CLI, there's the
[`docker scout` CLI plugin](../reference/cli/docker/scout/_index.md)
which lets you explore vulnerabilities for images using the terminal.
- Docker Desktop has a detailed image view for images in your local image
  store that visualizes all of the known vulnerabilities affecting an image.
All of these security features are powered by the same technology:
[Docker Scout](../scout/index.md). These features help you achieve a holistic
view of your supply chain security, and provide actionable suggestions for
remediating vulnerabilities.
## Conclusion
Building secure images is a continuous process. Consider the recommendations and
best practices highlighted in this guide to plan and build efficient, scalable,
and secure images.
To summarize the topics covered in this guide:
- Start with a base image that you trust. Pay attention to the Official Image and
Verified Publisher badges when you choose your base images.
- Secure your code and its dependencies.
- Select a minimal base image which contains only the required packages.
- Use multi-stage builds to optimize your image.
- Ensure you carefully monitor and manage the tools and dependencies you add to
your image.
- Ensure you scan images at multiple stages during your development lifecycle.
- Check your images frequently for vulnerabilities.
View File
@ -70,7 +70,7 @@ when the tests succeed.
9. For each branch or tag, enable or disable the **Build Caching** toggle.
[Build caching](../../develop/develop-images/dockerfile_best-practices.md#leverage-build-cache)
[Build caching](../../build/building/best-practices.md#leverage-build-cache)
can save time if you are building a large image frequently or have
many dependencies. Leave the build caching disabled to
make sure all of your dependencies are resolved at build time, or if
View File
@ -251,5 +251,3 @@ version.
## Next steps
- Continue to [Post-installation steps for Linux](linux-postinstall.md).
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -237,5 +237,3 @@ You have to delete any edited configuration files manually.
## Next steps
- Continue to [Post-installation steps for Linux](linux-postinstall.md).
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -257,5 +257,3 @@ You have to delete any edited configuration files manually.
## Next steps
- Continue to [Post-installation steps for Linux](linux-postinstall.md).
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -233,5 +233,3 @@ You have to delete any edited configuration files manually.
## Next steps
- Continue to [Post-installation steps for Linux](linux-postinstall.md).
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -141,5 +141,3 @@ options:
- Read the [Get started](../../get-started/index.md) training modules
to learn how to build an image and run it as a containerized application.
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -246,5 +246,3 @@ You have to delete any edited configuration files manually.
## Next steps
- Continue to [Post-installation steps for Linux](linux-postinstall.md).
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -248,5 +248,3 @@ You have to delete any edited configuration files manually.
## Next steps
- Continue to [Post-installation steps for Linux](linux-postinstall.md).
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -256,5 +256,3 @@ You have to delete any edited configuration files manually.
## Next steps
- Continue to [Post-installation steps for Linux](linux-postinstall.md).
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -260,5 +260,3 @@ You have to delete any edited configuration files manually.
## Next steps
- Continue to [Post-installation steps for Linux](linux-postinstall.md).
- Review the topics in [Develop with Docker](../../develop/index.md) to learn
how to build new applications using Docker.
View File
@ -199,7 +199,7 @@ In this section, you learned a few image building best practices, including laye
Related information:
- [Dockerfile reference](../reference/dockerfile.md)
- [Build with Docker guide](../build/guide/index.md)
- [Dockerfile best practices](../develop/develop-images/dockerfile_best-practices.md)
- [Dockerfile best practices](../build/building/best-practices.md)
## Next steps
View File
@ -27,10 +27,6 @@ dive-deeper:
description: Walk through practical Docker applications for specific scenarios.
link: /guides/use-case/
icon: task
- title: Develop with Docker
description: Master Docker best practices for efficient, secure development.
link: /develop/
icon: rule
- title: Build with Docker
description: Deep-dive into building software with Docker.
link: /build/guide/
View File
@ -16,8 +16,7 @@ The language-specific guides walk you through the process of:
In addition to the language-specific modules, Docker documentation also provides guidelines to build images and efficiently manage your development environment. For more information, refer to the following topics:
* [Best practices for writing Dockerfiles](../develop/develop-images/dockerfile_best-practices.md)
* [Docker development best practices](../develop/dev-best-practices.md)
* [Building best practices](../build/building/best-practices.md)
* [Build images with BuildKit](../build/buildkit/index.md#getting-started)
* [Build with Docker](../build/guide/_index.md)
View File
@ -100,7 +100,7 @@ determine if it's up-to-date.
If there's a policy violation, the recommended actions show how to update your
base image version to the latest version, while also pinning the base image
version to a specific digest. For more information, see [Pin base image
versions](../../develop/develop-images/guidelines.md#pin-base-image-versions).
versions](../../build/building/best-practices.md#pin-base-image-versions).
### GitHub integration enabled
@ -121,7 +121,7 @@ image to a digest is important for reproducibility, and helps avoid unwanted
changes from making their way into your supply chain.
For more information about base image pinning, see [Pin base image
versions](../../develop/develop-images/guidelines.md#pin-base-image-versions).
versions](../../build/building/best-practices.md#pin-base-image-versions).
<!--
TODO(dvdksn): no support for the following, yet
View File
@ -62,7 +62,7 @@ Each layer is only a set of differences from the layer before it. Note that both
_adding_, and _removing_ files will result in a new layer. In the example above,
the `$HOME/.cache` directory is removed, but will still be available in the
previous layer and add up to the image's total size. Refer to the
[Best practices for writing Dockerfiles](../../develop/develop-images/dockerfile_best-practices.md)
[Best practices for writing Dockerfiles](../../build/building/best-practices.md)
and [use multi-stage builds](../../build/building/multi-stage.md)
sections to learn how to optimize your Dockerfiles for efficient images.
View File
@ -31,7 +31,7 @@ on Docker Hub. This is particularly important as Docker Official Images are
some of the most popular on Docker Hub. Typically, Docker Official images have
few or no packages containing CVEs.
The images exemplify [`Dockerfile` best practices](../../develop/develop-images/dockerfile_best-practices.md)
The images exemplify [`Dockerfile` best practices](../../build/building/best-practices.md)
and provide clear documentation to serve as a reference for other `Dockerfile` authors.
Images that are part of this program have a special badge on Docker Hub making
View File
@ -619,7 +619,7 @@
- /go/scout-policy/
"/scout/policy/configure/":
- /go/scout-configure-policy/
"/develop/develop-images/guidelines/#pin-base-image-versions":
"/build/building/best-practices/#pin-base-image-versions":
- /go/base-image-pinning/
# integrations
"/scout/integrations/ci/gha/":
@ -696,9 +696,9 @@
- /go/filter/
# Docker Init
"/develop/develop-images/instructions/#user":
"/build/building/best-practices/#user":
- /go/dockerfile-user-best-practices/
"/develop/develop-images/instructions/#apt-get":
"/build/building/best-practices/#apt-get":
- /go/dockerfile-aptget-best-practices/
"/build/building/context/#dockerignore-files":
- /go/build-context-dockerignore/
View File
@ -228,24 +228,6 @@ Guides:
- path: /guides/use-case/databases/
title: Use containerized databases
- sectiontitle: Develop with Docker
section:
- path: /develop/
title: Overview
- path: /develop/dev-best-practices/
title: Development best practices
- sectiontitle: Dockerfile best practices
section:
- path: /develop/develop-images/dockerfile_best-practices/
title: Overview
- path: /develop/develop-images/guidelines/
title: General guidelines
- path: /develop/develop-images/instructions/
title: Best practices for Dockerfile instructions
- path: /develop/security-best-practices/
title: Security best practices
- sectiontitle: Build with Docker
section:
- path: /build/guide/
@ -1837,6 +1819,8 @@ Manuals:
title: Multi-stage builds
- path: /build/building/variables/
title: Variables
- path: /build/building/best-practices/
title: Best practices
- path: /build/building/multi-platform/
title: Multi-platform images
- path: /build/building/secrets/