Merge pull request #20715 from dvdksn/build-guide-refactor

build: incorporate build guide into manuals
David Karlsson 2024-08-28 04:22:59 +02:00 committed by GitHub
commit 6bce6d72cf
43 changed files with 917 additions and 14496 deletions

---
title: Export binaries
description: Using Docker builds to create and export executable binaries
keywords: build, buildkit, buildx, guide, tutorial, build arguments, arg
aliases:
- /build/guide/export/
---
Did you know that you can use Docker to build your application to standalone
binaries? Sometimes, you don't want to package and distribute your application
as a Docker image. Use Docker to build your application, and use exporters to
save the output to disk.
The default output format for `docker build` is a container image. That image is
automatically loaded to your local image store, where you can run a container
from that image, or push it to a registry. Under the hood, this uses the default
exporter, called the `docker` exporter.
To export your build results as files instead, you can use the `--output` flag,
or `-o` for short. The `--output` flag lets you change the output format of
your build.
## Export binaries from a build
If you specify a filepath to the `docker build --output` flag, Docker exports
the contents of the build container at the end of the build to the specified
location on your host's filesystem. This uses the `local`
[exporter](/build/exporters/local-tar.md).
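The `--output <path>` shorthand is equivalent to specifying the exporter type explicitly. For example, both of the following commands export the build result to a local directory named `out`:

```console
$ docker build --output out .
$ docker build --output type=local,dest=out .
```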
The neat thing about this is that you can use Docker's powerful isolation and
build features to create standalone binaries. This
works well for Go, Rust, and other languages that can compile to a single
binary.
The following example creates a simple Rust program that prints "Hello
World!", and exports the binary to the host filesystem.
1. Create a new directory for this example, and navigate to it:
```console
$ mkdir hello-world-bin
$ cd hello-world-bin
```
2. Create a Dockerfile with the following contents:
```Dockerfile
# syntax=docker/dockerfile:1
FROM rust:alpine AS build
WORKDIR /src
COPY <<EOT hello.rs
fn main() {
println!("Hello World!");
}
EOT
RUN rustc -o /bin/hello hello.rs
FROM scratch
COPY --from=build /bin/hello /
ENTRYPOINT ["/hello"]
```
> [!TIP]
> The `COPY <<EOT` syntax is a [here-document](/reference/dockerfile.md#here-documents).
> It lets you write multi-line strings in a Dockerfile. Here it's used to
> create a simple Rust program inline in the Dockerfile.
This Dockerfile uses a multi-stage build to compile the program in the first
stage, and then copies the binary to a scratch image in the second. The
final image is a minimal image that only contains the binary. Using the
`scratch` image this way is common for creating minimal build artifacts for
programs that don't require a full operating system to run.
3. Build the Dockerfile and export the binary to the current working directory:
```console
$ docker build --output=. .
```
This command builds the Dockerfile and exports the resulting binary, named
`hello`, to the current working directory.
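Assuming the build succeeded, you can run the exported binary directly on a Linux host that matches the build architecture:

```console
$ ./hello
Hello World!
```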
## Exporting multi-platform builds
You can use the `local` exporter to export binaries in combination with
[multi-platform builds](/build/building/multi-platform.md). This lets you
compile multiple binaries at once, which can be run on any machine of any
architecture, provided that the target platform is supported by the compiler
you use.
Continuing with the example Dockerfile from the
[Export binaries from a build](#export-binaries-from-a-build) section:
```dockerfile
# syntax=docker/dockerfile:1
FROM rust:alpine AS build
WORKDIR /src
COPY <<EOT hello.rs
fn main() {
println!("Hello World!");
}
EOT
RUN rustc -o /bin/hello hello.rs
FROM scratch
COPY --from=build /bin/hello /
ENTRYPOINT ["/hello"]
```
You can build this Rust program for multiple platforms using the `--platform`
flag with the `docker build` command. In combination with the `--output` flag,
the build exports the binaries for each target to the specified directory.
For example, to build the program for both `linux/amd64` and `linux/arm64`:
```console
$ docker build --platform=linux/amd64,linux/arm64 --output=out .
$ tree out/
out/
├── linux_amd64
│   └── hello
└── linux_arm64
    └── hello
3 directories, 2 files
```
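To check that each exported binary targets the architecture you expect, you can inspect it with a tool like `file`. The output shown here is a hypothetical, abbreviated example; the exact text varies by system:

```console
$ file out/linux_amd64/hello out/linux_arm64/hello
out/linux_amd64/hello: ELF 64-bit LSB executable, x86-64, ...
out/linux_arm64/hello: ELF 64-bit LSB executable, ARM aarch64, ...
```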
## Additional information
In addition to the `local` exporter, there are other exporters available. To
learn more about the available exporters and how to use them, see the
[exporters](/build/exporters/_index.md) documentation.

---
title: Multi-platform builds
description: Introduction to what multi-platform builds are and how to execute them using Docker Buildx.
keywords: build, buildx, buildkit, multi-platform, cross-platform, cross-compilation, emulation, QEMU, ARM, x86, Windows, Linux, macOS
aliases:
- /build/buildx/multiplatform-images/
- /desktop/multi-arch/
- /docker-for-mac/multi-arch/
- /mackit/multi-arch/
- /build/guide/multi-platform/
---
A multi-platform build refers to a single build invocation that targets
multiple different operating system or CPU architecture combinations. When
building images, this lets you create a single image that can run on multiple
platforms, such as `linux/amd64`, `linux/arm64`, and `windows/amd64`.

## Why multi-platform builds?

Docker solves the "it works on my machine" problem by packaging applications
and their dependencies into containers. This makes it easy to run the same
application on different environments, such as development, testing, and
production.

But containerization by itself only solves part of the problem. Containers
share the host kernel, which means that the code that's running inside the
container must be compatible with the host's architecture. This is why you
can't run a `linux/amd64` container on a `linux/arm64` host, or a Windows
container on a Linux host.

Multi-platform builds solve this problem by packaging multiple variants of the
same application into a single image. This enables you to run the same image on
different types of hardware, such as development machines running x86-64 or
ARM-based Amazon EC2 instances in the cloud, without the need for emulation.

{{< accordion title="How it works" >}}
Multi-platform images have a different structure than single-platform images.
Single-platform images contain a single manifest that points to a single
configuration and a single set of layers. Multi-platform images contain a
manifest list, pointing to multiple manifests, each of which points to a
different configuration and set of layers.
![Multi-platform image structure](/build/images/single-vs-multiplatform-image.svg)
When you push a multi-platform image to a registry, the registry stores the
manifest list and all the individual manifests. When you pull the image, the
registry returns the manifest list, and Docker automatically selects the
correct variant based on the host's architecture. For example, if you run a
multi-platform image on an ARM-based Raspberry Pi, Docker selects the
`linux/arm64` variant. If you run the same image on an x86-64 laptop, Docker
selects the `linux/amd64` variant (if you're using Linux containers).
{{< /accordion >}}
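You can see this structure for yourself by inspecting a multi-platform image with `docker buildx imagetools inspect`, which prints the manifest list along with the per-platform manifests. For example, to inspect a Docker Official Image such as `alpine`:

```console
$ docker buildx imagetools inspect alpine
```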
## Prerequisites
To build multi-platform images, you first need to make sure that your builder
and Docker Engine support multi-platform builds. The easiest way to do this is
to [enable the containerd image store](#enable-the-containerd-image-store).
Alternatively, you can [create a custom builder](#create-a-custom-builder) that
uses the `docker-container` driver, which supports multi-platform builds.
### Enable the containerd image store
{{< tabs >}}
{{< tab name="Docker Desktop" >}}
To enable the containerd image store in Docker Desktop,
go to **Settings** and select **Use containerd for pulling and storing images**
in the **General** tab.
Note that changing the image store means you'll temporarily lose access to
images and containers in the classic image store.
Those resources still exist, but to view them, you'll need to
disable the containerd image store.
{{< /tab >}}
{{< tab name="Docker Engine" >}}
If you're not using Docker Desktop,
enable the containerd image store by adding the following feature configuration
to your `/etc/docker/daemon.json` configuration file.
```json {hl_lines=3}
{
"features": {
"containerd-snapshotter": true
}
}
```
Restart the daemon after updating the configuration file.
```console
$ systemctl restart docker
```
{{< /tab >}}
{{< /tabs >}}
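To verify which image store the daemon is using, you can check the storage driver status reported by `docker info`. With the containerd image store enabled, the output should mention the containerd snapshotter, for example:

```console
$ docker info -f '{{ .DriverStatus }}'
[[driver-type io.containerd.snapshotter.v1]]
```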
### Create a custom builder
To create a custom builder, use the `docker buildx create` command to create a
builder that uses the `docker-container` driver. This driver runs the BuildKit
daemon in a container, as opposed to the default `docker` driver, which uses
the BuildKit library bundled with the Docker daemon. There isn't much
difference between the two drivers, but the `docker-container` driver provides
more flexibility and advanced features, including multi-platform support.
```console
$ docker buildx create \
--name container-builder \
--driver docker-container \
--use --bootstrap
```
This command creates a new builder named `container-builder` that uses the
`docker-container` driver, and sets it as the active builder. The
`--bootstrap` flag pulls the BuildKit image and starts the build container.
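You can confirm that the builder was created and is selected by listing your builders. The new builder should appear with the `docker-container` driver and a `*` marking it as active:

```console
$ docker buildx ls
```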
## Build multi-platform images
When triggering a build, use the `--platform` flag to define the target
platforms for the build output, such as `linux/amd64` and `linux/arm64`:

```console
$ docker build --platform linux/amd64,linux/arm64 .
```

> [!NOTE]
> If you're using the `docker-container` driver, you need to specify the
> `--load` flag to load the image into the local image store after the build
> finishes. This is because images built using the `docker-container` driver
> aren't automatically loaded into the local image store.
## Strategies

You can build multi-platform images using three different strategies,
depending on your use case:

1. Using emulation, via [QEMU](#qemu)
2. Using a builder with [multiple native nodes](#multiple-native-nodes)
3. Using [cross-compilation](#cross-compilation) with multi-stage builds
### QEMU

Building multi-platform images under emulation with QEMU is the easiest way to
get started if your builder already supports it. Using emulation requires no
changes to your Dockerfile, and BuildKit automatically detects the
architectures that are available for emulation.
> [!NOTE]
>
> Emulation with QEMU can be significantly slower than native builds.
> Use [multiple native nodes](#multiple-native-nodes) or
> [cross-compilation](#cross-compilation) instead, if possible.
Docker Desktop supports running and building multi-platform images under
emulation by default. No configuration is necessary as the builder uses the
QEMU that's bundled within the Docker Desktop VM.

#### Install QEMU manually

If you're using a builder outside of Docker Desktop, such as if you're using
Docker Engine on Linux, or a custom remote builder, you need to install QEMU
and register the executable types on the host OS. The prerequisites for
installing QEMU are:

- Linux kernel version 4.8 or later
- `binfmt-support` version 2.1.7 or later
- The QEMU binaries must be statically compiled and registered with the
  `fix_binary` flag

Use the [`tonistiigi/binfmt`](https://github.com/tonistiigi/binfmt) image to
install QEMU and register the executable types on the host with a single
command:

```console
$ docker run --privileged --rm tonistiigi/binfmt --install all
```

This installs the QEMU binaries and registers them with
[`binfmt_misc`](https://en.wikipedia.org/wiki/Binfmt_misc), enabling QEMU to
execute non-native file formats for emulation.

Once QEMU is installed and the executable types are registered on the host OS,
they work transparently inside containers. You can verify your registration by
checking if `F` is among the flags in `/proc/sys/fs/binfmt_misc/qemu-*`.
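For example, to check the registration for 64-bit ARM emulation (the exact handler names depend on what's registered on your host):

```console
$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64
```

The `flags:` line in the output should include `F` if the binary was registered with the `fix_binary` flag.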
### Multiple native nodes

Using multiple native nodes provides better support for more complicated cases
FROM alpine
COPY --from=build /log /log
```
## Examples

Here are some examples of multi-platform builds:

- [Simple multi-platform build using emulation](#simple-multi-platform-build-using-emulation)
- [Multi-platform Neovim build using Docker Build Cloud](#multi-platform-neovim-build-using-docker-build-cloud)
- [Cross-compiling a Go application](#cross-compiling-a-go-application)

### Simple multi-platform build using emulation

This example demonstrates how to build a simple multi-platform image using
emulation with QEMU. The image contains a single file that prints the
architecture of the container.

Prerequisites:

- Docker Desktop, or Docker Engine with [QEMU installed](#install-qemu-manually)
- [containerd image store enabled](#enable-the-containerd-image-store)

Steps:

1. Create an empty directory and navigate to it:

```console
$ mkdir multi-platform
$ cd multi-platform
```
2. Create a simple Dockerfile that prints the architecture of the container:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN uname -m > /arch
```
3. Build the image for `linux/amd64` and `linux/arm64`:

```console
$ docker build --platform linux/amd64,linux/arm64 -t multi-platform .
```
4. Run the image and print the architecture:

```console
$ docker run --rm multi-platform cat /arch
```
- If you're running on an x86-64 machine, you should see `x86_64`.
- If you're running on an ARM machine, you should see `aarch64`.

### Multi-platform Neovim build using Docker Build Cloud

This example demonstrates how to run a multi-platform build using Docker Build
Cloud to compile and export [Neovim](https://github.com/neovim/neovim) binaries
for the `linux/amd64` and `linux/arm64` platforms.
Docker Build Cloud provides managed multi-node builders that support native
multi-platform builds without the need for emulation, making it much faster to
do CPU-intensive tasks like compilation.
Prerequisites:
- You've [signed up for Docker Build Cloud and created a builder](/build-cloud/setup.md)
Steps:
1. Create an empty directory and navigate to it:

```console
$ mkdir docker-build-neovim
$ cd docker-build-neovim
```
2. Create a Dockerfile that builds Neovim.
```dockerfile
# syntax=docker/dockerfile:1
FROM debian:bookworm AS build
WORKDIR /work
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
build-essential \
cmake \
curl \
gettext \
ninja-build \
unzip
ADD https://github.com/neovim/neovim.git#stable .
RUN make CMAKE_BUILD_TYPE=RelWithDebInfo
FROM scratch
COPY --from=build /work/build/bin/nvim /
```
3. Build the image for `linux/amd64` and `linux/arm64` using Docker Build Cloud:

```console
$ docker build \
  --builder <cloud-builder> \
  --platform linux/amd64,linux/arm64 \
  --output ./bin .
```

This command builds the image using the cloud builder and exports the
binaries to the `bin` directory.
4. Verify that the binaries are built for both platforms. You should see the
`nvim` binary for both `linux/amd64` and `linux/arm64`.
```console
$ tree ./bin
./bin
├── linux_amd64
│   └── nvim
└── linux_arm64
    └── nvim
3 directories, 2 files
```
### Cross-compiling a Go application
This example demonstrates how to cross-compile a Go application for multiple
platforms using multi-stage builds. The application is a simple HTTP server
that listens on port 8080 and returns the architecture of the container.
This example uses Go, but the same principles apply to other programming
languages that support cross-compilation.
Cross-compilation with Docker builds works by leveraging a set of build
arguments that BuildKit pre-defines for you, and which expose information
about the platforms of the builder and the build targets. You can use these
pre-defined arguments to pass the platform information to the compiler.
In Go, you can use the `GOOS` and `GOARCH` environment variables to specify the
target platform to build for.
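To make the relationship between these arguments concrete: BuildKit derives `TARGETOS`, `TARGETARCH`, and `TARGETVARIANT` by splitting the `TARGETPLATFORM` string. The following shell sketch is illustrative only, not BuildKit's actual implementation:

```shell
# Decompose a platform string ("os/arch" or "os/arch/variant")
# into its OS, architecture, and variant components.
split_platform() {
  platform="$1"
  os="${platform%%/*}"           # e.g. "linux"
  rest="${platform#*/}"
  arch="${rest%%/*}"             # e.g. "arm64" or "arm"
  case "$rest" in
    */*) variant="${rest#*/}" ;; # e.g. "v7"
    *)   variant="" ;;
  esac
  echo "os=$os arch=$arch variant=$variant"
}

split_platform "linux/arm64"   # -> os=linux arch=arm64 variant=
split_platform "linux/arm/v7"  # -> os=linux arch=arm variant=v7
```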
Prerequisites:
- Docker Desktop or Docker Engine
Steps:
1. Create an empty directory and navigate to it:
```console
$ mkdir go-server
$ cd go-server
```
2. Create a base Dockerfile that builds the Go application:
```dockerfile
# syntax=docker/dockerfile:1
FROM golang:alpine AS build
WORKDIR /app
ADD https://github.com/dvdksn/buildme.git#eb6279e0ad8a10003718656c6867539bd9426ad8 .
RUN go build -o server .
FROM alpine
COPY --from=build /app/server /server
ENTRYPOINT ["/server"]
```
This Dockerfile can't build multi-platform images with cross-compilation yet.
If you were to build this Dockerfile with `docker build`, the builder
would attempt to use emulation to build the image for the specified
platforms.
3. To add cross-compilation support, update the Dockerfile to use the
pre-defined `BUILDPLATFORM` and `TARGETPLATFORM` build arguments. These
arguments are automatically available in the Dockerfile when you use the
`--platform` flag with `docker build`.
- Pin the `golang` image to the platform of the builder using the
`--platform=$BUILDPLATFORM` option.
- Add `ARG` instructions for the Go compilation stages to make the
`TARGETOS` and `TARGETARCH` build arguments available to the commands in
this stage.
- Set the `GOOS` and `GOARCH` environment variables to the values of
`TARGETOS` and `TARGETARCH`. The Go compiler uses these variables to do
cross-compilation.
{{< tabs >}}
{{< tab name="Updated Dockerfile" >}}
```dockerfile
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /app
ADD https://github.com/dvdksn/buildme.git#eb6279e0ad8a10003718656c6867539bd9426ad8 .
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o server .
FROM alpine
COPY --from=build /app/server /server
ENTRYPOINT ["/server"]
```
{{< /tab >}}
{{< tab name="Old Dockerfile" >}}
```dockerfile
# syntax=docker/dockerfile:1
FROM golang:alpine AS build
WORKDIR /app
ADD https://github.com/dvdksn/buildme.git#eb6279e0ad8a10003718656c6867539bd9426ad8 .
RUN go build -o server .
FROM alpine
COPY --from=build /app/server /server
ENTRYPOINT ["/server"]
```
{{< /tab >}}
{{< tab name="Diff" >}}
```diff
# syntax=docker/dockerfile:1
-FROM golang:alpine AS build
+FROM --platform=$BUILDPLATFORM golang:alpine AS build
+ARG TARGETOS
+ARG TARGETARCH
WORKDIR /app
ADD https://github.com/dvdksn/buildme.git#eb6279e0ad8a10003718656c6867539bd9426ad8 .
-RUN go build -o server .
+RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o server .
FROM alpine
COPY --from=build /app/server /server
ENTRYPOINT ["/server"]
```
{{< /tab >}}
{{< /tabs >}}
4. Build the image for `linux/amd64` and `linux/arm64`:
```console
$ docker build --platform linux/amd64,linux/arm64 -t go-server .
```
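With the containerd image store enabled, you can then run a specific variant of the image by passing `--platform` to `docker run`. The server listens on port 8080, and runs under emulation if the selected variant doesn't match your host:

```console
$ docker run --rm -p 8080:8080 --platform linux/arm64 go-server
```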
This example has shown how to cross-compile a Go application for multiple
platforms with Docker builds. The specific steps on how to do cross-compilation
may vary depending on the programming language you're using. Consult the
documentation for your programming language to learn more about cross-compiling
for different platforms.
> [!TIP]
> You may also want to consider checking out
> [xx - Dockerfile cross-compilation helpers](https://github.com/tonistiigi/xx).
> `xx` is a Docker image containing utility scripts that make cross-compiling with Docker builds easier.


keywords: build, args, variables, parameters, env, environment variables, config
aliases:
- /build/buildkit/color-output-controls/
- /build/building/env-vars/
- /build/guide/build-args/
---

In Docker Build, build arguments (`ARG`) and environment variables (`ENV`)


And that's the Docker build cache in a nutshell. Once a layer changes, then all
downstream layers need to be rebuilt as well. Even if they wouldn't build
anything differently, they still need to re-run.
For more details about how cache invalidation works, see [Cache invalidation](invalidation.md).
## Optimizing how you use the build cache
Now that you understand how the cache works, you can begin to use the cache to
your advantage. While the cache will automatically work on any `docker build`
that you run, you can often refactor your Dockerfile to get even better
performance. These optimizations can save precious seconds (or even minutes)
off of your builds.
### Order your layers
Putting the commands in your Dockerfile into a logical order is a great place
to start. Because a change causes a rebuild for steps that follow, try to make
expensive steps appear near the beginning of the Dockerfile. Steps that change
often should appear near the end of the Dockerfile, to avoid triggering
rebuilds of layers that haven't changed.
Consider the following example. A Dockerfile snippet that runs a JavaScript
build from the source files in the current directory:
```dockerfile
# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
# Copy over all files in the current directory
COPY . .
# Install dependencies
RUN npm install
# Run build
RUN npm build
```
This Dockerfile is rather inefficient. Updating any file causes a reinstall of
all dependencies every time you build the Docker image, even if the dependencies
didn't change since the last build!
Instead, the `COPY` command can be split in two. First, copy over the package
management files (in this case, `package.json` and `yarn.lock`). Then, install
the dependencies. Finally, copy over the project source code, which is subject
to frequent change.
```dockerfile
# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY package.json yarn.lock . # Copy package management files
RUN npm install # Install dependencies
COPY . . # Copy over project files
RUN npm build # Run build
```
Installing dependencies in earlier layers of the Dockerfile means there is no
need to rebuild those layers when a project file changes.
### Keep layers small
One of the best things you can do to speed up image building is to put less
stuff into your build. Fewer parts mean the cache stays smaller, and also that
there are fewer things that could be out of date and need rebuilding.
To get started, here are a few tips and tricks:
#### Don't include unnecessary files
Be considerate of what files you add to the image.
Running a command like `COPY . /src` will copy your entire [build context](../concepts/context.md)
into the image. If you've got logs, package manager artifacts, or even previous
build results in your current directory, those will also be copied over. This
could make your image larger than it needs to be, especially as those files are
usually not useful.
Avoid adding unnecessary files to your builds by explicitly stating the files or
directories you intend to copy over. For example, you might only want to add a
`Makefile` and your `src` directory to the image filesystem. In that case,
consider adding this to your Dockerfile:
```dockerfile
COPY ./src ./Makefile /src
```
As opposed to this:
```dockerfile
COPY . /src
```
You can also create a
[`.dockerignore` file](../concepts/context.md#dockerignore-files),
and use that to specify which files and directories to exclude from the build
context.
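As an illustrative sketch (the entries are hypothetical and depend on your project), a `.dockerignore` file in the root of the build context might look like this:

```plaintext
# exclude dependency and output directories
node_modules
bin/
# exclude logs and local artifacts
*.log
```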
#### Use your package manager wisely
Most Docker image builds involve using a package manager to help install
software into the image. Debian has `apt`, Alpine has `apk`, Python has `pip`,
Node.js has `npm`, and so on.
When installing packages, be considerate. Make sure to only install the packages
that you need. If you're not going to use them, don't install them. Remember
that this might be a different list for your local development environment and
your production environment. You can use multi-stage builds to split these up
efficiently.
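For example, on a Debian-based image you might keep the install step minimal by skipping recommended packages and cleaning up the package lists in the same layer (a sketch; the package names are placeholders):

```dockerfile
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```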
#### Use the dedicated `RUN` cache
The `RUN` command supports a specialized cache, which you can use when you need
a more fine-grained cache between runs. For example, when installing packages,
you don't always need to fetch all of your packages from the internet each time.
You only need the ones that have changed.
To solve this problem, you can use `RUN --mount type=cache`. For example, for
your Debian-based image you might use the following:
```dockerfile
RUN \
    --mount=type=cache,target=/var/cache/apt \
    apt-get update && apt-get install -y git
```
Using the explicit cache with the `--mount` flag keeps the contents of the
`target` directory preserved between builds. When this layer needs to be
rebuilt, then it'll use the `apt` cache in `/var/cache/apt`.
### Minimize the number of layers
Keeping your layers small is a good first step, and the logical next step is to
reduce the number of layers that you have. Fewer layers mean there is less to
rebuild when something in your Dockerfile changes, so your builds complete
faster.
The following sections outline some tips you can use to keep the number of
layers to a minimum.
#### Use an appropriate base image
Docker provides over 170 pre-built [official images](https://hub.docker.com/search?q=&image_filter=official)
for almost every common development scenario. For example, if you're building a
Java web server, use a dedicated image such as [`eclipse-temurin`](https://hub.docker.com/_/eclipse-temurin/).
Even when there's not an official image for what you might want, Docker provides
images from [verified publishers](https://hub.docker.com/search?q=&image_filter=store)
and [open source partners](https://hub.docker.com/search?q=&image_filter=open_source)
that can help you on your way. The Docker community often produces third-party
images to use as well.
Using official images saves you time and ensures you stay up to date and secure
by default.
#### Use multi-stage builds
[Multi-stage builds](../building/multi-stage.md) let you split up your
Dockerfile into multiple distinct stages. Each stage completes a step in the
build process, and you can bridge the different stages to create your final
image at the end. The Docker builder will work out dependencies between the
stages and run them using the most efficient strategy. This even allows you to
run multiple builds concurrently.
Multi-stage builds use two or more `FROM` commands. The following example
illustrates building a simple web server that serves HTML from your `docs`
directory in Git:
```dockerfile
# syntax=docker/dockerfile:1
# stage 1
FROM alpine AS git
RUN apk add git
# stage 2
FROM git AS fetch
WORKDIR /repo
RUN git clone https://github.com/your/repository.git .
# stage 3
FROM nginx AS site
COPY --from=fetch /repo/docs/ /usr/share/nginx/html
```
This build has three stages: `git`, `fetch`, and `site`. In this example, `git`
is the base for the `fetch` stage. The `site` stage uses the `COPY --from` flag
to copy the data from the `fetch` stage's `docs/` directory into the Nginx
server directory.
Each stage has only a few instructions, and when possible, Docker will run these
stages in parallel. Only the instructions in the `site` stage will end up as
layers in the final image. The entire `git` history doesn't get embedded into
the final result, which helps keep the image small and secure.
#### Combine commands together wherever possible
Most Dockerfile commands, and `RUN` commands in particular, can often be joined
together. For example, instead of using `RUN` like this:
```dockerfile
RUN echo "the first command"
RUN echo "the second command"
```
It's possible to run both of these commands inside a single `RUN`, which means
that they will share the same cache! This is achievable using the `&&` shell
operator to run one command after another:
```dockerfile
RUN echo "the first command" && echo "the second command"
# or to split to multiple lines
RUN echo "the first command" && \
echo "the second command"
```
Another shell feature that allows you to simplify and concatenate commands in a
neat way is the [here-document](https://en.wikipedia.org/wiki/Here_document),
or heredoc. Heredocs enable you to create multi-line scripts with good
readability:
```dockerfile
RUN <<EOF
set -e
echo "the first command"
echo "the second command"
EOF
```
(Note the `set -e` command to exit immediately after any command fails, instead
of continuing.)
## Other resources
For more information on using cache to do efficient builds, see:
- [Cache invalidation](invalidation.md)
- [Optimize build cache](optimization.md)
- [Garbage collection](garbage-collection.md)
- [Cache storage backends](./backends/_index.md)

content/build/cache/optimize.md

@ -0,0 +1,368 @@
---
title: Optimize cache usage in builds
description: An overview on how to optimize cache utilization in Docker builds.
keywords: build, buildkit, buildx, guide, tutorial, mounts, cache mounts, bind mounts
aliases:
- /build/guide/mounts/
---
When building with Docker, a layer is reused from the build cache if the
instruction and the files it depends on haven't changed since it was previously
built. Reusing layers from the cache speeds up the build process because Docker
doesn't have to rebuild the layer again.
Here are a few techniques you can use to optimize build caching and speed up
the build process:
- [Order your layers](#order-your-layers): Putting the commands in your
Dockerfile into a logical order can help you avoid unnecessary cache
invalidation.
- [Keep the context small](#keep-the-context-small): The context is the set of
files and directories that are sent to the builder to process a build
instruction. Keeping the context as small as possible reduces the amount of
data that needs to be sent to the builder, and reduces the likelihood of cache
invalidation.
- [Use bind mounts](#use-bind-mounts): Bind mounts let you mount a file or
directory from the host machine into the build container. Using bind mounts
can help you avoid unnecessary layers in the image, which can slow down the
build process.
- [Use cache mounts](#use-cache-mounts): Cache mounts let you specify a
persistent package cache to be used during builds. The persistent cache helps
speed up build steps, especially steps that involve installing packages using
a package manager. Having a persistent cache for packages means that even if
you rebuild a layer, you only download new or changed packages.
- [Use an external cache](#use-an-external-cache): An external cache lets you
store build cache at a remote location. The external cache image can be
shared between multiple builds, and across different environments.
## Order your layers
Putting the commands in your Dockerfile into a logical order is a great place
to start. Because a change causes a rebuild for steps that follow, try to make
expensive steps appear near the beginning of the Dockerfile. Steps that change
often should appear near the end of the Dockerfile, to avoid triggering
rebuilds of layers that haven't changed.
Consider the following Dockerfile snippet, which runs a JavaScript build from
the source files in the current directory:
```dockerfile
# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY . . # Copy over all files in the current directory
RUN npm install # Install dependencies
RUN npm build # Run build
```
This Dockerfile is rather inefficient. Updating any file causes a reinstall of
all dependencies every time you build the Docker image, even if the dependencies
didn't change since the last build.
Instead, the `COPY` command can be split in two. First, copy over the package
management files (in this case, `package.json` and `yarn.lock`). Then, install
the dependencies. Finally, copy over the project source code, which is subject
to frequent change.
```dockerfile
# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY package.json yarn.lock . # Copy package management files
RUN npm install # Install dependencies
COPY . . # Copy over project files
RUN npm build # Run build
```
Installing dependencies in earlier layers of the Dockerfile means there is no
need to rebuild those layers when a project file changes.
## Keep the context small
The easiest way to make sure your context doesn't include unnecessary files is
to create a `.dockerignore` file in the root of your build context. The
`.dockerignore` file works similarly to `.gitignore` files, and lets you
exclude files and directories from the build context.
Here's an example `.dockerignore` file that excludes the `node_modules`
directory, and all files and directories that start with `tmp`:
```plaintext {title=".dockerignore"}
node_modules
tmp*
```
Ignore-rules specified in the `.dockerignore` file apply to the entire build
context, including subdirectories. This means it's a rather coarse-grained
mechanism, but it's a good way to exclude files and directories that you know
you don't need in the build context, such as temporary files, log files, and
build artifacts.
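`.dockerignore` also supports exception rules with a leading `!`, which can make a coarse rule more precise. A hypothetical sketch:

```plaintext
# exclude all markdown files...
*.md
# ...except the README
!README.md
```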
## Use bind mounts
You might be familiar with bind mounts from running containers with `docker
run` or Docker Compose. Bind mounts let you mount a file or directory from the
host machine into a container.
```bash
# bind mount using the -v flag
docker run -v $(pwd):/path/in/container image-name
# bind mount using the --mount flag
docker run --mount=type=bind,src=.,dst=/path/in/container image-name
```
To use bind mounts in a build, you can use the `--mount` flag with the `RUN`
instruction in your Dockerfile:
```dockerfile
FROM golang:latest
WORKDIR /app
RUN --mount=type=bind,target=. go build -o /app/hello
```
In this example, the current directory is mounted into the build container
before the `go build` command gets executed. The source code is available in
the build container for the duration of that `RUN` instruction. When the
instruction is done executing, the mounted files are not persisted in the final
image, or in the build cache. Only the output of the `go build` command
remains.
The `COPY` and `ADD` instructions in a Dockerfile let you copy files from the
build context into the build container. Using bind mounts is beneficial for
build cache optimization because you're not adding unnecessary layers to the
cache. If you have build context that's on the larger side, and it's only used
to generate an artifact, you're better off using bind mounts to temporarily
mount the source code required to generate the artifact into the build. If you
use `COPY` to add the files to the build container, BuildKit will include all
of those files in the cache, even if the files aren't used in the final image.
There are a few things to be aware of when using bind mounts in a build:
- Bind mounts are read-only by default. If you need to write to the mounted
directory, you need to specify the `rw` option. However, even with the `rw`
option, the changes are not persisted in the final image or the build cache.
The file writes are sustained for the duration of the `RUN` instruction, and
are discarded after the instruction is done.
- Mounted files are not persisted in the final image. Only the output of the
`RUN` instruction is persisted in the final image. If you need to include
files from the build context in the final image, you need to use the `COPY`
or `ADD` instructions.
- If the target directory is not empty, the contents of the target directory
are hidden by the mounted files. The original contents are restored after the
`RUN` instruction is done.
{{< accordion title="Example" >}}
For example, given a build context with only a `Dockerfile` in it:
```plaintext
.
└── Dockerfile
```
And a Dockerfile that mounts the current directory into the build container:
```dockerfile
FROM alpine:latest
WORKDIR /work
RUN touch foo.txt
RUN --mount=type=bind,target=. ls
RUN ls
```
The first `ls` command with the bind mount shows the contents of the mounted
directory. The second `ls` lists the contents of the original build context.
```plaintext {title="Build log"}
#8 [stage-0 3/5] RUN touch foo.txt
#8 DONE 0.1s
#9 [stage-0 4/5] RUN --mount=target=. ls -1
#9 0.040 Dockerfile
#9 DONE 0.0s
#10 [stage-0 5/5] RUN ls -1
#10 0.046 foo.txt
#10 DONE 0.1s
```
{{< /accordion >}}
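To illustrate the first caveat, here's a minimal sketch (the file names are hypothetical) of a writable bind mount. The write succeeds during the `RUN` instruction, but the change to the mounted directory is discarded afterwards:

```dockerfile
FROM alpine:latest
WORKDIR /work
# rw makes the mount writable for the duration of this RUN instruction only
RUN --mount=type=bind,target=/src,rw \
    touch /src/generated.txt && cp /src/generated.txt /work/copy.txt
# only /work/copy.txt persists in the image; /src/generated.txt is discarded
```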
## Use cache mounts
Regular cache layers in Docker correspond to an exact match of the instruction
and the files it depends on. If the instruction and the files it depends on
have changed since the layer was built, the layer is invalidated, and the build
process has to rebuild the layer.
Cache mounts are a way to specify a persistent cache location to be used during
builds. The cache is cumulative across builds, so you can read and write to the
cache multiple times. This persistent caching means that even if you need to
rebuild a layer, you only download new or changed packages. Any unchanged
packages are reused from the cache mount.
To use cache mounts in a build, you can use the `--mount` flag with the `RUN`
instruction in your Dockerfile:
```dockerfile
FROM node:latest
WORKDIR /app
RUN --mount=type=cache,target=/root/.npm npm install
```
In this example, the `npm install` command uses a cache mount for the
`/root/.npm` directory, the default location for the npm cache. The cache mount
is persisted across builds, so even if you end up rebuilding the layer, you
only download new or changed packages. Any changes to the cache are persisted
across builds, and the cache is shared between multiple builds.
How you specify cache mounts depends on the build tool you're using. If you're
unsure how to specify cache mounts, refer to the documentation for the build
tool you're using. Here are a few examples:
{{< tabs >}}
{{< tab name="Go" >}}
```dockerfile
RUN --mount=type=cache,target=/go/pkg/mod \
    go build -o /app/hello
```
{{< /tab >}}
{{< tab name="Apt" >}}
```dockerfile
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get --no-install-recommends install -y gcc
```
{{< /tab >}}
{{< tab name="Python" >}}
```dockerfile
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```
{{< /tab >}}
{{< tab name="Ruby" >}}
```dockerfile
RUN --mount=type=cache,target=/root/.gem \
    bundle install
```
{{< /tab >}}
{{< tab name="Rust" >}}
```dockerfile
RUN --mount=type=cache,target=/app/target/ \
    --mount=type=cache,target=/usr/local/cargo/git/db \
    --mount=type=cache,target=/usr/local/cargo/registry/ \
    cargo build
```
{{< /tab >}}
{{< tab name=".NET" >}}
```dockerfile
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet restore
```
{{< /tab >}}
{{< tab name="PHP" >}}
```dockerfile
RUN --mount=type=cache,target=/tmp/cache \
    composer install
```
{{< /tab >}}
{{< /tabs >}}
It's important that you read the documentation for the build tool you're using
to make sure you're using the correct cache mount options. Package managers
have different requirements for how they use the cache, and using the wrong
options can lead to unexpected behavior. For example, Apt needs exclusive
access to its data, so the caches use the option `sharing=locked` to ensure
that parallel builds using the same cache mount wait for each other and don't
access the same cache files at the same time.
## Use an external cache
The default cache storage for builds is internal to the builder (BuildKit
instance) you're using. Each builder uses its own cache storage. When you
switch between different builders, the cache is not shared between them. Using
an external cache lets you define a remote location for pushing and pulling
cache data.
External caches are especially useful for CI/CD pipelines, where the builders
are often ephemeral, and build minutes are precious. Reusing the cache between
builds can drastically speed up the build process and reduce cost. You can even
make use of the same cache in your local development environment.
To use an external cache, you specify the `--cache-to` and `--cache-from`
options with the `docker buildx build` command.
- `--cache-to` exports the build cache to the specified location.
- `--cache-from` specifies remote caches for the build to use.
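For example, to store and reuse cache in a local directory (a sketch using the `local` cache backend; the path is arbitrary), you could run:

```console
$ docker buildx build \
    --cache-to type=local,dest=/tmp/build-cache \
    --cache-from type=local,src=/tmp/build-cache .
```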
The following example shows how to set up a GitHub Actions workflow using
`docker/build-push-action`, and push the build cache layers to an OCI registry
image:
```yaml {title=".github/workflows/ci.yml"}
name: ci
on:
  push:
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: user/app:latest
          cache-from: type=registry,ref=user/app:buildcache
          cache-to: type=registry,ref=user/app:buildcache,mode=max
```
This setup tells BuildKit to look for cache in the `user/app:buildcache` image.
When the build is done, the new build cache is pushed to the same image,
overwriting the old cache.
This cache can be used locally as well. To pull the cache in a local build,
you can use the `--cache-from` option with the `docker buildx build` command:
```console
$ docker buildx build --cache-from type=registry,ref=user/app:buildcache .
```
## Summary
Optimizing cache usage in builds can significantly speed up the build process.
Keeping the build context small, using bind mounts, cache mounts, and external
caches are all techniques you can use to make the most of the build cache and
speed up the build process.
For more information about the concepts discussed in this guide, see:
- [.dockerignore files](/build/concepts/context.md#dockerignore-files)
- [Cache invalidation](/build/cache/invalidation.md)
- [Cache mounts](/reference/dockerfile.md#run---mounttypecache)
- [Cache backend types](/build/cache/backends/_index.md)
- [Building best practices](/build/building/best-practices.md)


@ -1,34 +0,0 @@
---
title: Build with Docker
description: Explore the features of Docker Build in this step-by-step guide
keywords: build, buildkit, buildx, guide, tutorial
---
Welcome! This guide is an introduction and deep-dive into building software with
Docker.
Whether you're just getting started, or you're already an advanced Docker user,
this guide aims to provide useful pointers into the possibilities and best
practices of Docker's build features.
Topics covered in this guide include:
- Introduction to build concepts
- Image size optimization
- Build speed performance improvements
- Building and exporting binaries
- Cache mounts and bind mounts
- Software testing
- Multi-platform builds
Throughout this guide, an example application written in Go is used to
illustrate how the build features work. You don't need to know the Go
programming language to follow this guide.
The guide starts off with a simple Dockerfile example, and builds from there.
Some of the later sections in this guide describe advanced concepts and
workflows. You don't need to complete this entire guide from start to finish.
Follow the sections that seem relevant to you, and save the advanced sections at
the end for later, when you need them.
{{< button text="Get started" url="intro.md" >}}


@ -1,153 +0,0 @@
---
title: Build arguments
description: Introduction to configurable builds, using build args
keywords: build, buildkit, buildx, guide, tutorial, build arguments, arg
---
Build arguments are a great way to add flexibility to your builds. You can pass
build arguments at build time, and you can set a default value that the builder
uses as a fallback.
## Change runtime versions
A practical use case for build arguments is to specify runtime versions for
build stages. Your image uses the `golang:{{% param "example_go_version" %}}-alpine`
image as a base image.
But what if someone wanted to use a different version of Go for building the
application? They could update the version number inside the Dockerfile, but
that's inconvenient; it makes switching between versions more tedious than it
has to be. Build arguments make life easier:
```diff
# syntax=docker/dockerfile:1
- FROM golang:{{% param "example_go_version" %}}-alpine AS base
+ ARG GO_VERSION={{% param "example_go_version" %}}
+ FROM golang:${GO_VERSION}-alpine AS base
WORKDIR /src
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,source=go.sum,target=go.sum \
--mount=type=bind,source=go.mod,target=go.mod \
go mod download -x
FROM base AS build-client
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,target=. \
go build -o /bin/client ./cmd/client
FROM base AS build-server
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,target=. \
go build -o /bin/server ./cmd/server
FROM scratch AS client
COPY --from=build-client /bin/client /bin/
ENTRYPOINT [ "/bin/client" ]
FROM scratch AS server
COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
```
The `ARG` keyword is interpolated in the image name in the `FROM` instruction.
The default value of the `GO_VERSION` build argument is set to `{{% param "example_go_version" %}}`.
If the build doesn't receive a `GO_VERSION` build argument, the `FROM` instruction
resolves to `golang:{{% param "example_go_version" %}}-alpine`.
Try setting a different version of Go to use for building, using the
`--build-arg` flag for the build command:
```console
$ docker build --build-arg="GO_VERSION=1.19" .
```
Running this command results in a build using the `golang:1.19-alpine` image.
## Inject values
You can also make use of build arguments to modify values in the source code of
your program, at build time. This is useful for dynamically injecting
information, avoiding hard-coded values. With Go, consuming external values at
build time is done using linker flags, or `-ldflags`.
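For reference, here's what a bare invocation of the linker flag looks like with the Go toolchain (a sketch; it assumes a `version` variable in package `main`, like the one in this guide's example app):

```console
$ go build -ldflags "-X main.version=v0.0.1" -o ./bin/server ./cmd/server
```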
The server part of the application contains a conditional statement to print the
app version, if a version is specified:
```go
// cmd/server/main.go
var version string

func main() {
	if version != "" {
		log.Printf("Version: %s", version)
	}
	// ...
}
```
You could declare the version string value directly in the code. But, updating
the version to line up with the release version of the application would require
updating the code ahead of every release. That would be both tedious and
error-prone. A better solution is to pass the version string as a build
argument, and inject the build argument into the code.
The following example adds an `APP_VERSION` build argument to the `build-server`
stage. The Go compiler uses the value of the build argument to set the value of
a variable in the code.
```diff
# syntax=docker/dockerfile:1
ARG GO_VERSION={{% param "example_go_version" %}}
FROM golang:${GO_VERSION}-alpine AS base
WORKDIR /src
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,source=go.sum,target=go.sum \
--mount=type=bind,source=go.mod,target=go.mod \
go mod download -x
FROM base AS build-client
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,target=. \
go build -o /bin/client ./cmd/client
FROM base AS build-server
+ ARG APP_VERSION="v0.0.0+unknown"
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,target=. \
- go build -o /bin/server ./cmd/server
+ go build -ldflags "-X main.version=$APP_VERSION" -o /bin/server ./cmd/server
FROM scratch AS client
COPY --from=build-client /bin/client /bin/
ENTRYPOINT [ "/bin/client" ]
FROM scratch AS server
COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
```
Now the version of the server is injected when building the binary, without having to update
the source code. To verify this, you can build the `server` target and start a
container with `docker run`. The server outputs `v0.0.1` as the version on
startup.
```console
$ docker build --target=server --build-arg="APP_VERSION=v0.0.1" --tag=buildme-server .
$ docker run buildme-server
2023/04/06 08:54:27 Version: v0.0.1
2023/04/06 08:54:27 Starting server...
2023/04/06 08:54:27 Listening on HTTP port 3000
```
## Summary
This section showed how you can use build arguments to make builds more
configurable, and inject values at build-time.
Related information:
- [`ARG` Dockerfile reference](../../reference/dockerfile.md#arg)
## Next steps
The next section of this guide shows how you can use Docker builds to create not
only container images, but executable binaries as well.
{{< button text="Export binaries" url="export.md" >}}


@ -1,114 +0,0 @@
---
title: Export binaries
description: Using Docker builds to create and export executable binaries
keywords: build, buildkit, buildx, guide, tutorial, build arguments, arg
---
Did you know that you can use Docker to build your application to standalone
binaries? Sometimes, you don't want to package and distribute your application
as a Docker image. Use Docker to build your application, and use exporters to
save the output to disk.
The default output format for `docker build` is a container image. That image is
automatically loaded to your local image store, where you can run a container
from that image, or push it to a registry. Under the hood, this uses the default
exporter, called the `docker` exporter.
To export your build results as files instead, you can use the `local` exporter.
The `local` exporter saves the filesystem of the build container to the
specified directory on the host machine.
## Export binaries
To use the `local` exporter, pass the `--output` option to the `docker build`
command. The `--output` flag takes one argument: the destination on the host
machine where you want to save the files.
The following command exports the files of the `server` target to the
current working directory on the host filesystem:
```console
$ docker build --output=. --target=server .
```
Running this command creates a binary at `./bin/server`. It's created under the
`bin/` directory because that's where the file was located inside the build
container.
```console
$ ls -l ./bin
total 14576
-rwxr-xr-x 1 user user 7459368 Apr 6 09:27 server
```
If you want to create a build that exports both binaries, you can add another
build stage to the Dockerfile that copies the binaries from both build
stages:
```diff
# syntax=docker/dockerfile:1
ARG GO_VERSION={{% param "example_go_version" %}}
FROM golang:${GO_VERSION}-alpine AS base
WORKDIR /src
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,source=go.sum,target=go.sum \
--mount=type=bind,source=go.mod,target=go.mod \
go mod download -x
FROM base as build-client
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,target=. \
go build -o /bin/client ./cmd/client
FROM base as build-server
ARG APP_VERSION="0.0.0+unknown"
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,target=. \
go build -ldflags "-X main.version=$APP_VERSION" -o /bin/server ./cmd/server
FROM scratch AS client
COPY --from=build-client /bin/client /bin/
ENTRYPOINT [ "/bin/client" ]
FROM scratch AS server
COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
+
+ FROM scratch AS binaries
+ COPY --from=build-client /bin/client /
+ COPY --from=build-server /bin/server /
```
Now you can build the `binaries` target using the `--output` option to export
both the client and server binaries.
```console
$ docker build --output=bin --target=binaries .
$ ls -l ./bin
total 29392
-rwxr-xr-x 1 user user 7581933 Apr 6 09:33 client
-rwxr-xr-x 1 user user 7459368 Apr 6 09:33 server
```
## Summary
This section has demonstrated how you can use Docker to build and export
standalone binaries. These binaries can be distributed freely, and don't require
a container runtime like the Docker daemon.
The binaries you've generated so far are Linux binaries. That's because the
build environment is Linux. If your host OS is Linux, you can run these files.
Building binaries that work on Mac or Windows machines requires cross-compilation.
This is explored later on in this guide.
Related information:
- [`docker build --output` CLI reference](../../reference/cli/docker/buildx/build.md#output)
- [Build exporters](../exporters/index.md)
## Next steps
The next topic of this guide is testing: how you can use Docker to run
application tests.
{{< button text="Test" url="test.md" >}}

@ -1,170 +0,0 @@
---
title: Introduction
description: An introduction to the Docker Build guide
keywords: build, buildkit, buildx, guide, tutorial, introduction
---
The starting resources for this guide include a simple Go project and a
Dockerfile. From this starting point, the guide illustrates various ways that
you can improve how you build the application with Docker.
## Environment setup
To follow this guide:
1. Install [Docker Desktop or Docker Engine](/get-started/get-docker.md)
2. Clone or create a new repository from the
[application example on GitHub](https://github.com/dockersamples/buildme)
## The application
The example project for this guide is a client-server application for
translating messages to a fictional language.
Here's an overview of the files included in the project:
```text
.
├── Dockerfile
├── cmd
│   ├── client
│   │   ├── main.go
│   │   ├── request.go
│   │   └── ui.go
│   └── server
│   ├── main.go
│   └── translate.go
├── go.mod
└── go.sum
```
The `cmd/` directory contains the code for the two application components:
client and server. The client is a user interface for writing, sending, and
receiving messages. The server receives messages from clients, translates them,
and sends them back to the client.
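To get a feel for what the server does, here's a toy sketch of a translation function in Go. This is purely illustrative: the real `cmd/server/translate.go` implements the fictional language differently, and this sketch just reverses each word.

```go
package main

import (
	"fmt"
	"strings"
)

// translate is a toy stand-in for the server's translation logic.
// It reverses each word in the message; the actual sample app's
// translation rules differ.
func translate(msg string) string {
	words := strings.Fields(msg)
	for i, w := range words {
		runes := []rune(w)
		for l, r := 0, len(runes)-1; l < r; l, r = l+1, r-1 {
			runes[l], runes[r] = runes[r], runes[l]
		}
		words[i] = string(runes)
	}
	return strings.Join(words, " ")
}

func main() {
	fmt.Println(translate("hello world")) // olleh dlrow
}
```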
## The Dockerfile
A Dockerfile is a text document in which you define the build steps for your
application. You write the Dockerfile in a domain-specific language, called the
Dockerfile syntax.
Here's the Dockerfile used as the starting point for this guide:
```dockerfile
# syntax=docker/dockerfile:1
FROM golang:{{% param "example_go_version" %}}-alpine
WORKDIR /src
COPY . .
RUN go mod download
RUN go build -o /bin/client ./cmd/client
RUN go build -o /bin/server ./cmd/server
ENTRYPOINT [ "/bin/server" ]
```
Here's what this Dockerfile does:
1. `# syntax=docker/dockerfile:1`
This comment is a
[Dockerfile parser directive](../../reference/dockerfile.md#parser-directives).
It specifies which version of the Dockerfile syntax to use. This file uses
the `dockerfile:1` syntax, which is best practice: it ensures that you have
access to the latest Docker build features.
2. `FROM golang:{{% param "example_go_version" %}}-alpine`
The `FROM` instruction uses version `{{% param "example_go_version" %}}-alpine` of the `golang` official image.
3. `WORKDIR /src`
Creates the `/src` working directory inside the container.
4. `COPY . .`
Copies the files in the build context to the working directory in the
container.
5. `RUN go mod download`
Downloads the necessary Go modules to the container. Go modules is the
dependency management tool for the Go programming language, similar to
`npm install` for JavaScript, or `pip install` for Python.
6. `RUN go build -o /bin/client ./cmd/client`
Builds the `client` binary, which is used to send messages to be translated, into the
`/bin` directory.
7. `RUN go build -o /bin/server ./cmd/server`
Builds the `server` binary, which listens for client translation requests,
into the `/bin` directory.
8. `ENTRYPOINT [ "/bin/server" ]`
Specifies a command to run when the container starts. Starts the server
process.
## Build the image
To build an image using a Dockerfile, you use the `docker` command-line tool.
The command for building an image is `docker build`.
Run the following command to build the image.
```console
$ docker build --tag=buildme .
```
This creates an image with the tag `buildme`. An image tag is the name of the
image.
## Run the container
The image you just built contains two binaries, one for the server and one for
the client. To see the translation service in action, run a container that hosts
the server component, and then run another container that invokes the client.
To run a container, you use the `docker run` command.
1. Run a container from the image in detached mode.
```console
$ docker run --name=buildme --rm --detach buildme
```
This starts a container named `buildme`.
2. Run a new command in the `buildme` container that invokes the client binary.
```console
$ docker exec -it buildme /bin/client
```
The `docker exec` command opens a terminal user interface where you can submit
messages for the backend (server) process to translate.
When you're done testing, you can stop the container:
```console
$ docker stop buildme
```
## Summary
This section gave you an overview of the example application used in this guide,
and an introduction to Dockerfiles and image building. You've successfully built a
container image and created a container from it.
Related information:
- [Dockerfile reference](../../reference/dockerfile.md)
- [`docker build` CLI reference](../../reference/cli/docker/buildx/build.md)
- [`docker run` CLI reference](../../reference/cli/docker/container/run.md)
## Next steps
The next section explores how you can use layer cache to improve build speed.
{{< button text="Layers" url="layers.md" >}}

---
title: Layers
description: Improving the initial Dockerfile using layers
keywords: build, buildkit, buildx, guide, tutorial, layers
---
The order of Dockerfile instructions matters. A Docker build consists of a series
of ordered build instructions. Each instruction in a Dockerfile roughly translates
to an image layer. The following diagram illustrates how a Dockerfile translates
into a stack of layers in a container image.
![From Dockerfile to layers](./images/layers.png)
## Cached layers
When you run a build, the builder attempts to reuse layers from earlier builds.
If a layer of an image is unchanged, then the builder picks it up from the build cache.
If a layer has changed since the last build, that layer, and all layers that follow, must be rebuilt.
The Dockerfile from the previous section copies all project files to the
container (`COPY . .`) and then downloads application dependencies in the
following step (`RUN go mod download`). If you were to change any of the project
files, then that would invalidate the cache for the `COPY` layer. It also invalidates
the cache for all of the layers that follow.
![Layer cache is bust](./images/cache-bust.png)
Because of the current order of the Dockerfile instructions, the builder must
download the Go modules again, despite none of the packages having changed since
the last time.
## Update the instruction order
You can avoid this redundancy by reordering the instructions in the Dockerfile.
Change the order of the instructions so that downloading and installing dependencies
occur before the source code is copied over to the container. In that way, the
builder can reuse the "dependencies" layer from the cache, even when you
make changes to your source code.
Go uses two files, called `go.mod` and `go.sum`, to track dependencies for a project.
These files are to Go what `package.json` and `package-lock.json` are to JavaScript.
For Go to know which dependencies to download, you need to copy the `go.mod` and
`go.sum` files to the container. Add another `COPY` instruction before
`RUN go mod download`, this time copying only the `go.mod` and `go.sum` files.
```diff
# syntax=docker/dockerfile:1
FROM golang:{{% param "example_go_version" %}}-alpine
WORKDIR /src
- COPY . .
+ COPY go.mod go.sum .
RUN go mod download
+ COPY . .
RUN go build -o /bin/client ./cmd/client
RUN go build -o /bin/server ./cmd/server
ENTRYPOINT [ "/bin/server" ]
```
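For reference, the `go.mod` file that this step copies looks along these lines. The module path comes from the sample repository; the Go version and dependency versions shown here are illustrative:

```text
module github.com/dockersamples/buildme

go 1.21

require (
	github.com/charmbracelet/bubbletea v0.23.1
	github.com/go-chi/chi/v5 v5.0.0
)
```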
Now if you edit your source code, building the image won't cause the
builder to download the dependencies each time. The `COPY . .` instruction
appears after the package management instructions, so the builder can reuse the
`RUN go mod download` layer.
![Reordered](./images/reordered-layers.png)
## Summary
Ordering your Dockerfile instructions appropriately helps you avoid unnecessary
work at build time.
Related information:
- [Docker build cache](../cache/_index.md)
- [Dockerfile best practices](../../build/building/best-practices.md)
## Next steps
The next section shows how you can make the build run faster, and make the
resulting output smaller, using multi-stage builds.
{{< button text="Multi-stage" url="multi-stage.md" >}}

---
title: Mounts
description: Introduction to cache mounts and bind mounts in builds
keywords: build, buildkit, buildx, guide, tutorial, mounts, cache mounts, bind mounts
---
This section describes how to use cache mounts and bind mounts with Docker
builds.
Cache mounts let you specify a persistent package cache to be used during
builds. The persistent cache helps speed up build steps, especially steps that
involve installing packages using a package manager. Having a persistent cache
for packages means that even if you rebuild a layer, you only download new or
changed packages.
Cache mounts are created using the `--mount` flag together with the `RUN`
instruction in the Dockerfile. To use a cache mount, the format for the flag is
`--mount=type=cache,target=<path>`, where `<path>` is the location of the cache
directory that you wish to mount into the container.
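For example, in a hypothetical Debian-based build stage (not part of this guide's app), a cache mount could persist the apt package cache like this:

```dockerfile
# Illustrative only: keep apt's package cache across rebuilds,
# so unchanged packages aren't downloaded again
RUN --mount=type=cache,target=/var/cache/apt \
    apt-get update && apt-get install -y gcc
```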
## Add a cache mount
The target path to use for the cache mount depends on the package manager you're
using. The application example in this guide uses Go modules. That means that
the target directory for the cache mount is the directory where the Go module
cache gets written to. According to the
[Go modules reference](https://go.dev/ref/mod#module-cache), the default
location for the module cache is `$GOPATH/pkg/mod`, and the default value for
`$GOPATH` is `/go`.
Update the build steps for downloading packages and compiling the program to
mount the `/go/pkg/mod` directory as a cache mount:
```diff
# syntax=docker/dockerfile:1
FROM golang:{{% param "example_go_version" %}}-alpine AS base
WORKDIR /src
COPY go.mod go.sum .
- RUN go mod download
+ RUN --mount=type=cache,target=/go/pkg/mod/ \
+ go mod download -x
COPY . .
FROM base AS build-client
- RUN go build -o /bin/client ./cmd/client
+ RUN --mount=type=cache,target=/go/pkg/mod/ \
+ go build -o /bin/client ./cmd/client
FROM base AS build-server
- RUN go build -o /bin/server ./cmd/server
+ RUN --mount=type=cache,target=/go/pkg/mod/ \
+ go build -o /bin/server ./cmd/server
FROM scratch AS client
COPY --from=build-client /bin/client /bin/
ENTRYPOINT [ "/bin/client" ]
FROM scratch AS server
COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
```
The `-x` flag added to the `go mod download` command prints the download
executions that take place. Adding this flag lets you see how the cache mount is
being used in the next step.
## Rebuild the image
Before you rebuild the image, clear your build cache. This ensures that you're
starting from a clean slate, making it easier to see exactly what the build is
doing.
```console
$ docker builder prune -af
```
Now it's time to rebuild the image. Invoke the build command, this time together
with the `--progress=plain` flag, while also redirecting the output to a log
file.
```console
$ docker build --target=client --progress=plain . 2> log1.txt
```
When the build has finished, inspect the `log1.txt` file. The logs show how the
Go modules were downloaded as part of the build.
```console
$ awk '/proxy.golang.org/' log1.txt
#11 0.168 # get https://proxy.golang.org/github.com/charmbracelet/lipgloss/@v/v0.6.0.mod
#11 0.168 # get https://proxy.golang.org/github.com/aymanbagabas/go-osc52/@v/v1.0.3.mod
#11 0.168 # get https://proxy.golang.org/github.com/atotto/clipboard/@v/v0.1.4.mod
#11 0.168 # get https://proxy.golang.org/github.com/charmbracelet/bubbletea/@v/v0.23.1.mod
#11 0.169 # get https://proxy.golang.org/github.com/charmbracelet/bubbles/@v/v0.14.0.mod
#11 0.218 # get https://proxy.golang.org/github.com/charmbracelet/bubbles/@v/v0.14.0.mod: 200 OK (0.049s)
#11 0.218 # get https://proxy.golang.org/github.com/aymanbagabas/go-osc52/@v/v1.0.3.mod: 200 OK (0.049s)
#11 0.218 # get https://proxy.golang.org/github.com/containerd/console/@v/v1.0.3.mod
#11 0.218 # get https://proxy.golang.org/github.com/go-chi/chi/v5/@v/v5.0.0.mod
#11 0.219 # get https://proxy.golang.org/github.com/charmbracelet/bubbletea/@v/v0.23.1.mod: 200 OK (0.050s)
#11 0.219 # get https://proxy.golang.org/github.com/atotto/clipboard/@v/v0.1.4.mod: 200 OK (0.051s)
#11 0.219 # get https://proxy.golang.org/github.com/charmbracelet/lipgloss/@v/v0.6.0.mod: 200 OK (0.051s)
...
```
Now, in order to see that the cache mount is being used, change the version of
one of the Go modules that your program imports. By changing the module version,
you're forcing Go to download the new version of the dependency the next time
you build. If you weren't using cache mounts, your system would re-download all
modules. But because you've added a cache mount, Go can reuse most of the
modules and only download the package versions that don't already exist in the
`/go/pkg/mod` directory.
Update the version of the `chi` package that the server component of the
application uses:
```console
$ docker run -v $PWD:$PWD -w $PWD golang:{{% param "example_go_version" %}}-alpine \
go get github.com/go-chi/chi/v5@v5.0.8
```
Now, run another build, and again redirect the build logs to a log file:
```console
$ docker build --target=client --progress=plain . 2> log2.txt
```
Now if you inspect the `log2.txt` file, you'll find that only the `chi` package
that was changed has been downloaded:
```console
$ awk '/proxy.golang.org/' log2.txt
#10 0.143 # get https://proxy.golang.org/github.com/go-chi/chi/v5/@v/v5.0.8.mod
#10 0.190 # get https://proxy.golang.org/github.com/go-chi/chi/v5/@v/v5.0.8.mod: 200 OK (0.047s)
#10 0.190 # get https://proxy.golang.org/github.com/go-chi/chi/v5/@v/v5.0.8.info
#10 0.199 # get https://proxy.golang.org/github.com/go-chi/chi/v5/@v/v5.0.8.info: 200 OK (0.008s)
#10 0.201 # get https://proxy.golang.org/github.com/go-chi/chi/v5/@v/v5.0.8.zip
#10 0.209 # get https://proxy.golang.org/github.com/go-chi/chi/v5/@v/v5.0.8.zip: 200 OK (0.008s)
```
## Add bind mounts
There are a few more small optimizations that you can implement to improve the
Dockerfile. Currently, it's using the `COPY` instruction to pull in the `go.mod`
and `go.sum` files before downloading modules. Instead of copying those files
over to the container's filesystem, you can use a bind mount. A bind mount makes
the files available to the container directly from the host. This change removes
the need for the additional `COPY` instruction (and layer) entirely.
```diff
# syntax=docker/dockerfile:1
FROM golang:{{% param "example_go_version" %}}-alpine AS base
WORKDIR /src
- COPY go.mod go.sum .
RUN --mount=type=cache,target=/go/pkg/mod/ \
+ --mount=type=bind,source=go.sum,target=go.sum \
+ --mount=type=bind,source=go.mod,target=go.mod \
go mod download -x
COPY . .
FROM base AS build-client
RUN --mount=type=cache,target=/go/pkg/mod/ \
go build -o /bin/client ./cmd/client
FROM base AS build-server
RUN --mount=type=cache,target=/go/pkg/mod/ \
go build -o /bin/server ./cmd/server
FROM scratch AS client
COPY --from=build-client /bin/client /bin/
ENTRYPOINT [ "/bin/client" ]
FROM scratch AS server
COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
```
Similarly, you can use the same technique to remove the need for the second
`COPY` instruction as well. Specify bind mounts in the `build-client` and
`build-server` stages for mounting the current working directory.
```diff
# syntax=docker/dockerfile:1
FROM golang:{{% param "example_go_version" %}}-alpine AS base
WORKDIR /src
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,source=go.sum,target=go.sum \
--mount=type=bind,source=go.mod,target=go.mod \
go mod download -x
- COPY . .
FROM base AS build-client
RUN --mount=type=cache,target=/go/pkg/mod/ \
+ --mount=type=bind,target=. \
go build -o /bin/client ./cmd/client
FROM base AS build-server
RUN --mount=type=cache,target=/go/pkg/mod/ \
+ --mount=type=bind,target=. \
go build -o /bin/server ./cmd/server
FROM scratch AS client
COPY --from=build-client /bin/client /bin/
ENTRYPOINT [ "/bin/client" ]
FROM scratch AS server
COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
```
## Summary
This section has shown how you can improve your build speed using cache and bind
mounts.
Related information:
- [Dockerfile reference](../../reference/dockerfile.md#run---mount)
- [Bind mounts](/engine/storage/bind-mounts.md)
## Next steps
The next section of this guide is an introduction to making your builds
configurable, using build arguments.
{{< button text="Build arguments" url="build-args.md" >}}

---
title: Multi-platform
description: Building for multiple operating systems and architectures
keywords: build, buildkit, buildx, guide, tutorial, multi-platform, emulation, cross-compilation
---
Up until this point in the guide, you've built Linux binaries. This section
describes how you can support other operating systems, and architectures, using
multi-platform builds via emulation and cross-compilation.
The easiest way to get started with building for multiple platforms is using
emulation. With emulation, you can build your app to multiple architectures
without having to make any changes to your Dockerfile. All you need to do is to
pass the `--platform` flag to the build command, specifying the OS and
architecture you want to build for.
The following command builds the server image for the `linux/arm/v7` platform:
```console
$ docker build --target=server --platform=linux/arm/v7 .
```
You can also use emulation to produce outputs for multiple platforms at once.
However, the default image store in Docker Engine doesn't support building
and loading multi-platform images. You need to enable the containerd image store
which supports concurrent multi-platform builds.
## Enable the containerd image store
{{< tabs >}}
{{< tab name="Docker Desktop" >}}
To enable the containerd image store in Docker Desktop,
go to **Settings** and select **Use containerd for pulling and storing images**
in the **General** tab.
Note that changing the image store means you'll temporarily lose access to
images and containers in the classic image store.
Those resources still exist, but to view them, you'll need to
disable the containerd image store.
{{< /tab >}}
{{< tab name="Docker Engine" >}}
If you're not using Docker Desktop,
enable the containerd image store by adding the following feature configuration
to your `/etc/docker/daemon.json` configuration file.
```json {hl_lines=3}
{
"features": {
"containerd-snapshotter": true
}
}
```
Restart the daemon after updating the configuration file.
```console
$ systemctl restart docker
```
{{< /tab >}}
{{< /tabs >}}
## Build using emulation
To run multi-platform builds, invoke the `docker build` command,
and pass it the same arguments as you did before.
Only this time, also add a `--platform` flag specifying multiple architectures.
```console {hl_lines=4}
$ docker build \
--target=binaries \
--output=bin \
--platform=linux/amd64,linux/arm64,linux/arm/v7 .
```
This command uses emulation to run the same build three times, once for each
platform. The build results are exported to a `bin` directory.
```text
bin
├── linux_amd64
│   ├── client
│   └── server
├── linux_arm64
│   ├── client
│   └── server
└── linux_arm_v7
├── client
└── server
```
When you build for multiple platforms concurrently,
BuildKit runs all of the build steps under emulation for each platform that you
specify, effectively forking the build into multiple concurrent processes.
![Build pipelines using emulation](./images/emulation.png)
There are, however, a few downsides to running multi-platform builds using
emulation:
- If you tried running the command above, you may have noticed that it took a
long time to finish. Emulation can be much slower than native execution for
CPU-intensive tasks.
- Emulation only works when the architecture is supported by the base image
you're using. The example in this guide uses the Alpine Linux version of the
`golang` image, which means you can only build Linux images this way, for a
limited set of CPU architectures, without having to change the base image.
As an alternative to emulation, the next step explores cross-compilation.
Cross-compiling makes multi-platform builds much faster and more versatile.
## Build using cross-compilation
Using cross-compilation means leveraging the capabilities of a compiler to build
for multiple platforms, without the need for emulation.
The first thing you need to do is pin the builder to the node's native
architecture as the build platform. This prevents emulation. Then, from the
node's native architecture, the builder cross-compiles the application to a
number of other target platforms.
### Platform build arguments
This approach involves using a few pre-defined build arguments that you have
access to in your Docker builds: `BUILDPLATFORM` and `TARGETPLATFORM` (and
derivatives, like `TARGETOS`). These build arguments reflect the values you pass
to the `--platform` flag.
For example, if you invoke a build with `--platform=linux/amd64`, then the build
arguments resolve to:
- `TARGETPLATFORM=linux/amd64`
- `TARGETOS=linux`
- `TARGETARCH=amd64`
When you pass more than one value to the platform flag, build stages that use
the pre-defined platform arguments are forked automatically for each platform.
This is in contrast to builds running under emulation, where the entire build
pipeline runs per platform.
![Build pipelines using cross-compilation](./images/cross-compilation.png)
### Update the Dockerfile
To build the app using the cross-compilation technique, update the Dockerfile as
follows:
- Add `--platform=$BUILDPLATFORM` to the `FROM` instruction for the initial
`base` stage, pinning the platform of the `golang` image to match the
architecture of the host machine.
- Add `ARG` instructions for the Go compilation stages, making the `TARGETOS`
and `TARGETARCH` build arguments available to the commands in this stage.
- Set the `GOOS` and `GOARCH` environment variables to the values of `TARGETOS`
and `TARGETARCH`. The Go compiler uses these variables to do
cross-compilation.
```diff
# syntax=docker/dockerfile:1
ARG GO_VERSION={{% param "example_go_version" %}}
ARG GOLANGCI_LINT_VERSION={{% param "example_golangci_lint_version" %}}
- FROM golang:${GO_VERSION}-alpine AS base
+ FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS base
WORKDIR /src
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,source=go.mod,target=go.mod \
--mount=type=bind,source=go.sum,target=go.sum \
go mod download -x
FROM base AS build-client
+ ARG TARGETOS
+ ARG TARGETARCH
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,target=. \
- go build -o /bin/client ./cmd/client
+ GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /bin/client ./cmd/client
FROM base AS build-server
+ ARG TARGETOS
+ ARG TARGETARCH
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,target=. \
- go build -o /bin/server ./cmd/server
+ GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /bin/server ./cmd/server
FROM scratch AS client
COPY --from=build-client /bin/client /bin/
ENTRYPOINT [ "/bin/client" ]
FROM scratch AS server
COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
FROM scratch AS binaries
COPY --from=build-client /bin/client /
COPY --from=build-server /bin/server /
FROM golangci/golangci-lint:${GOLANGCI_LINT_VERSION} AS lint
WORKDIR /test
RUN --mount=type=bind,target=. \
golangci-lint run
```
The only thing left to do now is to run the actual build. To run a
multi-platform build, set the `--platform` option, and specify a comma-separated list of
the OS and architectures that you want to build for. The following command
illustrates how to build, and export, binaries for Mac (ARM64), Windows, and
Linux:
```console
$ docker build \
--target=binaries \
--output=bin \
--platform=darwin/arm64,windows/amd64,linux/amd64 .
```
When the build finishes, you'll find client and server binaries for all of the
selected platforms in the `bin` directory:
```text
bin
├── darwin_arm64
│   ├── client
│   └── server
├── linux_amd64
│   ├── client
│   └── server
└── windows_amd64
├── client
└── server
```
## Summary
This section has demonstrated how you can get started with multi-platform builds
using emulation and cross-compilation.
Related information:
- [Multi-platform images](../building/multi-platform.md)
- [containerd image store (Docker Desktop)](../../desktop/containerd.md)
- [containerd image store (Docker Engine)](/engine/storage/containerd.md)
You may also want to consider checking out
[xx - Dockerfile cross-compilation helpers](https://github.com/tonistiigi/xx).
`xx` is a Docker image containing utility scripts that make cross-compiling with Docker builds easier.
## Next steps
This section is the final part of the Build with Docker guide. The following
page contains some pointers for where to go next.
{{< button text="Next steps" url="next-steps.md" >}}

---
title: Multi-stage
description: Faster and smaller builds with multi-stage builds
keywords: build, buildkit, buildx, guide, tutorial, multi-stage
---
This section explores multi-stage builds. There are two main reasons why
you'd want to use multi-stage builds:
- They allow you to run build steps in parallel, making your build pipeline
faster and more efficient.
- They allow you to create a final image with a smaller footprint, containing
only what's needed to run your program.
In a Dockerfile, a build stage is represented by a `FROM` instruction. The
Dockerfile from the previous section doesn't leverage multi-stage builds. It's
all one build stage. That means that the final image is bloated with resources
used to compile the program.
```console
$ docker build --tag=buildme .
$ docker images buildme
REPOSITORY TAG IMAGE ID CREATED SIZE
buildme latest c021c8a7051f 5 seconds ago 150MB
```
The program compiles to executable binaries, so you don't need Go language
utilities to exist in the final image.
## Add stages
Using multi-stage builds, you can choose to use different base images for your
build and runtime environments. You can copy build artifacts from the build
stage over to the runtime stage.
Modify the Dockerfile as follows. This change creates another stage using a
minimal `scratch` image as a base. In the final `scratch` stage, the binaries
built in the previous stage are copied over to the filesystem of the new stage.
```diff
# syntax=docker/dockerfile:1
FROM golang:{{% param "example_go_version" %}}-alpine
WORKDIR /src
COPY go.mod go.sum .
RUN go mod download
COPY . .
RUN go build -o /bin/client ./cmd/client
RUN go build -o /bin/server ./cmd/server
+
+ FROM scratch
+ COPY --from=0 /bin/client /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
```
Now if you build the image and inspect it, you should see a significantly
smaller number:
```console
$ docker build --tag=buildme .
$ docker images buildme
REPOSITORY TAG IMAGE ID CREATED SIZE
buildme latest 436032454dd8 7 seconds ago 8.45MB
```
The image went from 150MB down to just 8.45MB. That's because the
resulting image only contains the binaries, and nothing else.
## Parallelism
You've reduced the footprint of the image. The following step shows how you can
improve build speed with multi-stage builds, using parallelism. The build
currently produces the binaries one after the other. There is no reason why you
need to build the client before building the server, or vice versa.
You can split the binary-building steps into separate stages. In the final
`scratch` stage, copy the binaries from each corresponding build stage. By
segmenting these builds into separate stages, Docker can run them in parallel.
The stages for building each binary both require the Go compilation tools and
application dependencies. Define these common steps as a reusable base stage.
You can do that by assigning a name to the stage using the pattern
`FROM image AS stage_name`. This allows you to reference the stage name in a
`FROM` instruction of another stage (`FROM stage_name`).
You can also assign a name to the binary-building stages, and reference the
stage name in the `COPY --from=stage_name` instruction when copying the binaries
to the final `scratch` image.
```diff
# syntax=docker/dockerfile:1
- FROM golang:{{% param "example_go_version" %}}-alpine
+ FROM golang:{{% param "example_go_version" %}}-alpine AS base
WORKDIR /src
COPY go.mod go.sum .
RUN go mod download
COPY . .
+
+ FROM base AS build-client
RUN go build -o /bin/client ./cmd/client
+
+ FROM base AS build-server
RUN go build -o /bin/server ./cmd/server
FROM scratch
- COPY --from=0 /bin/client /bin/server /bin/
+ COPY --from=build-client /bin/client /bin/
+ COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
```
Now, instead of first building the binaries one after the other, the
`build-client` and `build-server` stages are executed simultaneously.
![Stages executing in parallel](./images/parallelism.gif)
## Build targets
The final image is now small, and you're building it efficiently using
parallelism. But this image is slightly strange, in that it contains both the
client and the server binary in the same image. Shouldn't these be two different
images?
It's possible to create multiple different images using a single Dockerfile. You
can specify a target stage of a build using the `--target` flag. Replace the
unnamed `FROM scratch` stage with two separate stages named `client` and
`server`.
```diff
# syntax=docker/dockerfile:1
FROM golang:{{% param "example_go_version" %}}-alpine AS base
WORKDIR /src
COPY go.mod go.sum .
RUN go mod download
COPY . .
FROM base AS build-client
RUN go build -o /bin/client ./cmd/client
FROM base AS build-server
RUN go build -o /bin/server ./cmd/server
- FROM scratch
- COPY --from=build-client /bin/client /bin/
- COPY --from=build-server /bin/server /bin/
- ENTRYPOINT [ "/bin/server" ]
+ FROM scratch AS client
+ COPY --from=build-client /bin/client /bin/
+ ENTRYPOINT [ "/bin/client" ]
+ FROM scratch AS server
+ COPY --from=build-server /bin/server /bin/
+ ENTRYPOINT [ "/bin/server" ]
```
And now you can build the client and server programs as separate Docker images
(tags):
```console
$ docker build --tag=buildme-client --target=client .
$ docker build --tag=buildme-server --target=server .
$ docker images "buildme*"
REPOSITORY TAG IMAGE ID CREATED SIZE
buildme-client latest 659105f8e6d7 20 seconds ago 4.25MB
buildme-server latest 666d492d9f13 5 seconds ago 4.2MB
```
The images are now even smaller, about 4 MB each.
This change also avoids having to build both binaries each time. When you build
the `client` target, Docker only builds the stages leading up to that target.
The `build-server` and `server` stages are skipped if they're not
needed. Likewise, building the `server` target skips the `build-client` and
`client` stages.
## Summary
Multi-stage builds are useful for creating images with less bloat and a smaller
footprint, and they also help make builds run faster.
Related information:
- [Multi-stage builds](../building/multi-stage.md)
- [Base images](../building/base-images.md)
## Next steps
The next section describes how you can use file mounts to further improve build
speeds.
{{< button text="Mounts" url="mounts.md" >}}

---
title: Next steps
description: Next steps following the Docker Build guide
keywords: build, buildkit, buildx, guide, tutorial
---
This guide has demonstrated some of the build features and capabilities
that Docker provides.
If you would like to continue learning about Docker build, consider exploring
the following resources:
- [BuildKit](../buildkit/_index.md): deep-dive into the open source build engine
that powers your Docker builds
- [Drivers](../builders/drivers/_index.md): configure how and where your Docker builds
run
- [Exporters](../exporters/_index.md): save your build results to different
output formats
- [Bake](../bake/_index.md): orchestrate your build workflows
- [Attestations](../metadata/attestations/_index.md): annotate your build artifacts with
metadata
- [GitHub Actions](../ci/github-actions/_index.md): run Docker builds in CI
## Feedback
If you have suggestions for improving the content of this guide, you can use the
feedback widget to submit your feedback.
If you don't see the feedback widget, try turning off your content filtering
extension or ad blocker, if you use one.
You can also submit an issue on
[the docs GitHub repository](https://github.com/docker/docs/issues/new),
if you prefer.

---
title: Test
description: Running tests with Docker Build
keywords: build, buildkit, buildx, guide, tutorial, testing
---
This section focuses on testing. The example here uses linting, but the same
principles apply to other kinds of tests as well, such as unit tests. Linting
is a static analysis of code that helps you detect errors, style violations,
and anti-patterns.
The exact steps for how to test your code can vary a lot depending on the
programming language or framework that you use. The example application used in
this guide is written in Go. You will add a build step that uses
`golangci-lint`, a popular linter runner for Go.
## Run tests
The `golangci-lint` tool is available as an image on Docker Hub. Before you add
the lint step to the Dockerfile, you can try it out using a `docker run`
command.
```console
$ docker run -v $PWD:/test -w /test \
golangci/golangci-lint golangci-lint run
```
You will notice that `golangci-lint` works: it finds an issue in the code where
there's a missing error check.
```text
cmd/server/main.go:23:10: Error return value of `w.Write` is not checked (errcheck)
w.Write([]byte(translated))
^
```
Now you can add this as a step to the Dockerfile.
```diff
# syntax=docker/dockerfile:1
ARG GO_VERSION={{% param "example_go_version" %}}
+ ARG GOLANGCI_LINT_VERSION={{% param "example_golangci_lint_version" %}}
FROM golang:${GO_VERSION}-alpine AS base
WORKDIR /src
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,source=go.sum,target=go.sum \
--mount=type=bind,source=go.mod,target=go.mod \
go mod download -x
FROM base AS build-client
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,target=. \
go build -o /bin/client ./cmd/client
FROM base AS build-server
ARG APP_VERSION="0.0.0+unknown"
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=bind,target=. \
go build -ldflags "-X main.version=$APP_VERSION" -o /bin/server ./cmd/server
FROM scratch AS client
COPY --from=build-client /bin/client /bin/
ENTRYPOINT [ "/bin/client" ]
FROM scratch AS server
COPY --from=build-server /bin/server /bin/
ENTRYPOINT [ "/bin/server" ]
FROM scratch AS binaries
COPY --from=build-client /bin/client /
COPY --from=build-server /bin/server /
+
+ FROM golangci/golangci-lint:${GOLANGCI_LINT_VERSION} AS lint
+ WORKDIR /test
+ RUN --mount=type=bind,target=. \
+ golangci-lint run
```
The added `lint` stage uses the `golangci/golangci-lint` image from Docker Hub
to invoke the `golangci-lint run` command with a bind-mount for the build
context.
The lint stage is independent of any of the other stages in the Dockerfile.
Therefore, running a regular build wont cause the lint step to run. To lint the
code, you must specify the `lint` stage:
```console
$ docker build --target=lint .
```
## Export test results
In addition to running tests, it's sometimes useful to be able to export the
results of a test to a test report.
Exporting test results is no different from exporting binaries, as shown in the
previous section of this guide:
1. Save the test results to a file.
2. Create a new stage in your Dockerfile using the `scratch` base image.
3. Export that stage using the `local` exporter.
The exact steps are left as an exercise for the reader.
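As a sketch, the three steps might look like the following additions to the Dockerfile, reusing the `lint` image from above. The report path and stage names are illustrative, and `|| true` keeps the build going so the report is exported even when linting finds issues:

```dockerfile
# Hypothetical: write the lint report to a file instead of stdout
FROM golangci/golangci-lint:${GOLANGCI_LINT_VERSION} AS lint-report
WORKDIR /test
RUN --mount=type=bind,target=. \
    golangci-lint run > /lint-report.txt || true

# Empty stage holding only the report, for use with the local exporter
FROM scratch AS test-results
COPY --from=lint-report /lint-report.txt /
```

You could then export the report to the current directory with `docker build --target=test-results --output=. .`.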
## Summary
This section has shown an example of how you can use Docker builds to run tests
(in this case, a linter).
## Next steps
The next topic in this guide is multi-platform builds, using emulation and
cross-compilation.
{{< button text="Multi-platform" url="multi-platform.md" >}}
@@ -203,8 +203,7 @@ Now that you have a base image, you can extend that image to build additional im
 If youd like to dive deeper into the things you learned, check out the following resources:
-* [docker image history CLI reference](/reference/cli/docker/image/history/)
+* [`docker image history`](/reference/cli/docker/image/history/)
-* [docker image layers](/build/guide/layers/)
 * [`docker container commit`](/reference/cli/docker/container/commit/)
@@ -204,7 +204,6 @@ Related information:
 - [Dockerfile reference](/reference/dockerfile/)
 - [docker CLI reference](/reference/cli/docker/)
-- [Build with Docker guide](/build/guide/_index.md)
 ## Next steps
@@ -201,7 +201,6 @@ In this section, you learned a few image building best practices, including laye
 Related information:
 - [Dockerfile reference](/reference/dockerfile/)
-- [Build with Docker guide](/build/guide/_index.md)
 - [Dockerfile best practices](/build/building/best-practices.md)
 ## Next steps
@@ -13,10 +13,6 @@ dive-deeper:
   description: Walk through practical Docker applications for specific scenarios.
   link: /guides/use-case/
   icon: task
-- title: Build with Docker
-  description: Deep-dive into building software with Docker.
-  link: /build/guide/
-  icon: build
 - title: Deployment and Orchestration
   description: Deploy and manage Docker containers at scale.
   link: /guides/deployment-orchestration/orchestration/
@@ -18,7 +18,6 @@ In addition to the language-specific modules, Docker documentation also provides
 * [Building best practices](../build/building/best-practices.md)
 * [Build images with BuildKit](../build/buildkit/index.md#getting-started)
-* [Build with Docker](../build/guide/_index.md)
 ## Language-specific guides
@@ -79,7 +79,6 @@ In this section, you learned how you can containerize and run your C++
 application using Docker.
 Related information:
-- [Build with Docker guide](../../build/guide/index.md)
 - [Docker Compose overview](../../compose/_index.md)
 ## Next steps
@@ -120,7 +120,6 @@ application using Docker.
 Related information:
 - [Dockerfile reference](../../reference/dockerfile.md)
-- [Build with Docker guide](../../build/guide/index.md)
 - [.dockerignore file reference](../../reference/dockerfile.md#dockerignore-file)
 - [Docker Compose overview](../../compose/_index.md)
@@ -100,15 +100,12 @@ You should see output containing the following.
 #11 DONE 32.2s
 ```
-To learn more about building and running tests, see the [Build with Docker guide](../../build/guide/_index.md).
 ## Summary
 In this section, you learned how to run tests when developing locally using Compose and how to run tests when building your image.
 Related information:
 - [docker compose run](/reference/cli/docker/compose/run/)
-- [Build with Docker guide](../../build/guide/index.md)
 ## Next steps
@@ -86,8 +86,6 @@ You should see output containing the following.
 #13 4.915 PASS
 ```
-To learn more about building and running tests, see the [Build with Docker guide](../../build/guide/_index.md).
 ## Next steps
 In this section, you learned how to run tests when building your image. Next,
@@ -113,13 +113,6 @@ You should see output containing the following
 #15 DONE 101.4s
 ```
-## Summary
-In this section, you learned how to run tests when building your image.
-Related information:
-- [Build with Docker guide](../../build/guide/index.md)
 ## Next steps
 In the next section, youll take a look at how to set up a CI/CD pipeline using
@@ -275,7 +275,6 @@ application using Docker.
 Related information:
 - [Dockerfile reference](../../reference/dockerfile.md)
-- [Build with Docker guide](../../build/guide/index.md)
 - [.dockerignore file reference](../../reference/dockerfile.md#dockerignore-file)
 - [Docker Compose overview](../../compose/_index.md)
@@ -125,8 +125,6 @@ Run the following command to build a new image using the test stage as the targe
 $ docker build -t node-docker-image-test --progress=plain --no-cache --target test .
 ```
-To learn more about building and running tests, see the [Build with Docker guide](../../build/guide/_index.md).
 You should see output containing the following.
 ```console
@@ -164,7 +162,6 @@ In this section, you learned how to run tests when developing locally using Comp
 Related information:
 - [docker compose run](/reference/cli/docker/compose/run/)
-- [Build with Docker guide](../../build/guide/index.md)
 ## Next steps
@@ -417,7 +417,6 @@ In this section, you took a look at setting up your Compose file to add a local
 database and persist data. You also learned how to use Compose Watch to automatically sync your application when you update your code. And finally, you learned how to create a development container that contains the dependencies needed for development.
 Related information:
-- [Build with Docker guide](../../build/guide/index.md)
 - [Compose file reference](/reference/compose-file/)
 - [Compose file watch](../../compose/file-watch.md)
 - [Dockerfile reference](../../reference/dockerfile.md)
@@ -100,15 +100,12 @@ You should see output containing the following.
 #18 0.395 OK (1 test, 1 assertion)
 ```
-To learn more about building and running tests, see the [Build with Docker guide](../../build/guide/_index.md).
 ## Summary
 In this section, you learned how to run tests when developing locally using Compose and how to run tests when building your image.
 Related information:
 - [docker compose run](/reference/cli/docker/compose/run/)
-- [Build with Docker guide](../../build/guide/index.md)
 ## Next steps
@@ -364,7 +364,6 @@ In this section, you learned how you can containerize and run your Python
 application using Docker.
 Related information:
-- [Build with Docker guide](../../build/guide/index.md)
 - [Docker Compose overview](../../compose/_index.md)
 ## Next steps
@@ -86,7 +86,6 @@ In this section, you learned how you can containerize and run your R
 application using Docker.
 Related information:
-- [Build with Docker guide](../../build/guide/index.md)
 - [Docker Compose overview](../../compose/_index.md)
 ## Next steps
@@ -385,7 +385,6 @@ In this section, you learned how you can containerize and run your Ruby
 application using Docker.
 Related information:
-- [Build with Docker guide](../../build/guide/index.md)
 - [Docker Compose overview](../../compose/_index.md)
 ## Next steps
@@ -258,29 +258,6 @@ Guides:
 - path: /guides/use-case/databases/
   title: Use containerized databases
-- sectiontitle: Build with Docker
-  section:
-  - path: /build/guide/
-    title: Overview
-  - path: /build/guide/intro/
-    title: 1. Introduction
-  - path: /build/guide/layers/
-    title: 2. Layers
-  - path: /build/guide/multi-stage/
-    title: 3. Multi-stage
-  - path: /build/guide/mounts/
-    title: 4. Mounts
-  - path: /build/guide/build-args/
-    title: 5. Build arguments
-  - path: /build/guide/export/
-    title: 6. Export binaries
-  - path: /build/guide/test/
-    title: 7. Test
-  - path: /build/guide/multi-platform/
-    title: 8. Multi-platform
-  - path: /build/guide/next-steps/
-    title: Next steps
 - sectiontitle: Deployment and orchestration
   section:
   - title: "Overview"
@@ -1898,6 +1875,8 @@ Manuals:
     title: Multi-platform images
   - path: /build/building/secrets/
     title: Build secrets
+  - path: /build/building/export/
+    title: Export binaries
   - path: /build/building/best-practices/
     title: Best practices
   - path: /build/building/base-images/
@@ -1956,6 +1935,8 @@ Manuals:
     title: Overview
   - path: /build/cache/invalidation/
     title: Cache invalidation
+  - path: /build/cache/optimize/
+    title: Optimize cache usage
   - path: /build/cache/garbage-collection/
     title: Garbage collection
 - sectiontitle: Cache backends
@@ -15,6 +15,7 @@
   "Admin-Console",
   "After",
   "Angular",
+  "Apt",
   "Arch",
   "Arch-Linux",
   "Azure-Connect-OIDC",
@@ -33,6 +34,7 @@
   "Compliant",
   "Debian",
   "Debian-GNU/Linux",
+  "Diff",
   "DocSearch-content",
   "Docker-Compose",
   "Docker-Desktop",
@@ -84,6 +86,8 @@
   "Node",
   "Non-compliant",
   "Okta",
+  "Old-Dockerfile",
+  "PHP",
   "PowerShell",
   "PowerShell-CLI",
   "Python",
@@ -95,8 +99,10 @@
   "Regular-install",
   "Remote-file",
   "Rootless-mode",
+  "Ruby",
   "Run-Ollama-in-a-container",
   "Run-Ollama-outside-of-a-container",
+  "Rust",
   "Shell",
   "Shell-script",
   "Specific-version",
@@ -104,6 +110,7 @@
   "Travis-CI",
   "Ubuntu",
   "Unix-pipe",
+  "Updated-Dockerfile",
   "Use-Docker-Init",
   "Use-OpenAI",
   "Using-the-CLI",