Merge pull request #15618 from crazy-max/build-building-section

build: building images section
CrazyMax 2022-09-14 12:39:52 +02:00 committed by GitHub
commit 525d80a624
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
20 changed files with 621 additions and 547 deletions

View File

@ -168,7 +168,7 @@ fetch-remote:
- dest: "build/bake"
src:
- "docs/guides/bake/**"
- dest: "build/buildx/drivers"
- dest: "build/building/drivers"
src:
- "docs/guides/drivers/**"

View File

@ -1179,8 +1179,6 @@ manuals:
title: Known issues for Mac
- sectiontitle: Additional resources
section:
- path: /desktop/multi-arch/
title: Multi-arch support
- path: /desktop/kubernetes/
title: Deploy on Kubernetes
- path: /desktop/backup-and-restore/
@ -1389,30 +1387,30 @@ manuals:
section:
- path: /build/
title: Overview
- path: /build/hellobuild/
title: Hello Build
- sectiontitle: Building images
section:
- path: /build/building/packaging/
title: Packaging your software
- sectiontitle: Choosing a build driver
section:
- path: /build/building/drivers/
title: Overview
- path: /build/building/drivers/docker/
title: Docker driver
- path: /build/building/drivers/docker-container/
title: Docker container driver
- path: /build/building/drivers/kubernetes/
title: Kubernetes driver
- path: /build/building/drivers/remote/
title: Remote driver
- path: /build/building/multi-platform/
title: Multi-platform images
- sectiontitle: Buildx
section:
- path: /build/buildx/
title: Buildx overview
- path: /build/buildx/install/
title: Install Buildx
- sectiontitle: Drivers
section:
- path: /build/buildx/drivers/
title: Overview
- path: /build/buildx/drivers/docker/
title: Docker driver
- path: /build/buildx/drivers/docker-container/
title: Docker container driver
- path: /build/buildx/drivers/kubernetes/
title: Kubernetes driver
- path: /build/buildx/drivers/remote/
title: Remote driver
- path: /build/buildx/multiple-builders/
title: Using multiple builders
- path: /build/buildx/multiplatform-images/
title: Building multi-platform images
- sectiontitle: Bake
section:
- path: /build/bake/

View File

@ -1,5 +1,5 @@
---
title: "Buildx drivers overview"
title: "Drivers overview"
keywords: build, buildx, driver, builder, docker-container, kubernetes, remote
fetch_remote:
line_start: 2

View File

@ -0,0 +1,275 @@
---
title: Multi-platform images
description: Different strategies for building multi-platform images
keywords: build, buildx, buildkit, multi-platform images
redirect_from:
- /build/buildx/multiplatform-images/
- /docker-for-mac/multi-arch/
- /mackit/multi-arch/
---
Docker images can support multiple platforms, which means that a single image
may contain variants for different architectures, and sometimes for different
operating systems, such as Windows.
When running an image with multi-platform support, `docker` automatically
selects the image that matches your OS and architecture.
Most of the Docker Official Images on Docker Hub provide a [variety of architectures](https://github.com/docker-library/official-images#architectures-other-than-amd64){: target="_blank" rel="noopener" class="_" }.
For example, the `busybox` image supports `amd64`, `arm32v5`, `arm32v6`,
`arm32v7`, `arm64v8`, `i386`, `ppc64le`, and `s390x`. When running this image
on an `x86_64` / `amd64` machine, the `amd64` variant is pulled and run.
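You can check which variant was selected by printing the machine architecture
inside the container. For example, on an `amd64` host (output illustrative):
```console
$ docker run --rm busybox uname -m
x86_64
```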
## Building multi-platform images
Docker is now making it easier than ever to develop containers on, and for, Arm
servers and devices. Using the standard Docker tooling and processes, you can
start to build, push, pull, and run images seamlessly on different compute
architectures. In most cases, you don't have to make any changes to Dockerfiles
or source code to start building for Arm.
BuildKit with Buildx is designed to work well for building for multiple
platforms, not only for the architecture and operating system that the machine
invoking the build happens to run on.
When you invoke a build, you can set the `--platform` flag to specify the target
platform for the build output (for example, `linux/amd64`, `linux/arm64`, or
`darwin/amd64`).
When the current builder instance is backed by the `docker-container` driver,
you can specify multiple platforms together. In this case, it builds a manifest
list which contains images for all specified architectures. When you use this
image in [`docker run`](../../engine/reference/commandline/run.md) or
[`docker service`](../../engine/reference/commandline/service.md), Docker picks
the correct image based on the node's platform.
You can build multi-platform images using three different strategies that are
supported by Buildx and Dockerfiles:
1. Using the QEMU emulation support in the kernel
2. Building on multiple native nodes using the same builder instance
3. Using a stage in Dockerfile to cross-compile to different architectures
QEMU is the easiest way to get started if your node already supports it (for
example, if you are using Docker Desktop). It requires no changes to your
Dockerfile, and BuildKit automatically detects the secondary architectures that
are available. When BuildKit needs to run a binary for a different architecture,
it automatically loads it through a binary registered in the `binfmt_misc`
handler.
For QEMU binaries registered with `binfmt_misc` on the host OS to work
transparently inside containers, they must be statically compiled and registered
with the `fix_binary` flag. This requires a kernel >= 4.8 and
binfmt-support >= 2.1.7. You can check for proper registration by checking if
`F` is among the flags in `/proc/sys/fs/binfmt_misc/qemu-*`. While Docker
Desktop comes preconfigured with `binfmt_misc` support for additional platforms,
for other installations it likely needs to be installed using the
[`tonistiigi/binfmt`](https://github.com/tonistiigi/binfmt){:target="_blank" rel="noopener" class="_"}
image.
```console
$ docker run --privileged --rm tonistiigi/binfmt --install all
```
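To verify the registration, you can read the handler entry directly. A sketch
assuming the `aarch64` handler is installed; the `F` flag indicates a
`fix_binary` registration (output abbreviated and illustrative):
```console
$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64
enabled
interpreter /usr/bin/qemu-aarch64
flags: OCF
```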
Using multiple native nodes provides better support for more complicated cases
that QEMU can't handle, and generally offers better performance. You can
add additional nodes to the builder instance using the `--append` flag.
Assuming contexts `node-amd64` and `node-arm64` exist in `docker context ls`:
```console
$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .
```
Finally, depending on your project, the language that you use may have good
support for cross-compilation. In that case, multi-stage builds in Dockerfiles
can be effectively used to build binaries for the platform specified with
`--platform` using the native architecture of the build node. A list of build
arguments like `BUILDPLATFORM` and `TARGETPLATFORM` is available automatically
inside your Dockerfile and can be leveraged by the processes running as part
of your build.
```dockerfile
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM alpine
COPY --from=build /log /log
```
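The same pattern extends to real cross-compilation. Here's a minimal sketch
for a hypothetical Go project, relying on the automatic `TARGETOS` and
`TARGETARCH` build arguments and on Go's native cross-compilation support:
```dockerfile
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
# TARGETOS and TARGETARCH are populated automatically from --platform
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
# This stage always runs on the native build platform;
# only the compiled output targets the requested platform.
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .
FROM alpine
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
```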
## Getting started
Run the [`docker buildx ls` command](../../engine/reference/commandline/buildx_ls.md)
to list the existing builders:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default * docker
default default running 20.10.17 linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
```
This displays the default built-in driver, which uses the BuildKit server
components built directly into the Docker Engine, also known as the [`docker` driver](../building/drivers/docker.md).
Create a new builder using the [`docker-container` driver](../building/drivers/docker-container.md),
which gives you access to more complex features like multi-platform builds
and the more advanced cache exporters that are currently unsupported in the
default `docker` driver:
```console
$ docker buildx create --name mybuilder --driver docker-container --bootstrap
mybuilder
```
Switch to the new builder:
```console
$ docker buildx use mybuilder
```
> **Note**
>
> Alternatively, run `docker buildx create --name mybuilder --driver docker-container --bootstrap --use`
> to create a new builder and switch to it using a single command.
And inspect it:
```console
$ docker buildx inspect
Name: mybuilder
Driver: docker-container
Nodes:
Name: mybuilder0
Endpoint: unix:///var/run/docker.sock
Status: running
Buildkit: v0.10.4
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
```
Listing the existing builders again, you can see that the new builder is
registered:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
mybuilder docker-container
mybuilder0 unix:///var/run/docker.sock running v0.10.4 linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
default * docker
default default running 20.10.17 linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
```
## Example
Test the workflow to ensure you can build, push, and run multi-platform images.
Create a simple example Dockerfile, build a couple of image variants, and push
them to Docker Hub.
The following example uses a single `Dockerfile` to build an Alpine image with
cURL installed for multiple architectures:
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.16
RUN apk add curl
```
Build the Dockerfile with buildx, passing the list of architectures to
build for:
```console
$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t <username>/<image>:latest --push .
...
#16 exporting to image
#16 exporting layers
#16 exporting layers 0.5s done
#16 exporting manifest sha256:71d7ecf3cd12d9a99e73ef448bf63ae12751fe3a436a007cb0969f0dc4184c8c 0.0s done
#16 exporting config sha256:a26f329a501da9e07dd9cffd9623e49229c3bb67939775f936a0eb3059a3d045 0.0s done
#16 exporting manifest sha256:5ba4ceea65579fdd1181dfa103cc437d8e19d87239683cf5040e633211387ccf 0.0s done
#16 exporting config sha256:9fcc6de03066ac1482b830d5dd7395da781bb69fe8f9873e7f9b456d29a9517c 0.0s done
#16 exporting manifest sha256:29666fb23261b1f77ca284b69f9212d69fe5b517392dbdd4870391b7defcc116 0.0s done
#16 exporting config sha256:92cbd688027227473d76e705c32f2abc18569c5cfabd00addd2071e91473b2e4 0.0s done
#16 exporting manifest list sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48 0.0s done
#16 ...
#17 [auth] <username>/<image>:pull,push token for registry-1.docker.io
#17 DONE 0.0s
#16 exporting to image
#16 pushing layers
#16 pushing layers 3.6s done
#16 pushing manifest for docker.io/<username>/<image>:latest@sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48
#16 pushing manifest for docker.io/<username>/<image>:latest@sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48 1.4s done
#16 DONE 5.6s
```
> **Note**
>
> * `<username>` must be a valid Docker ID and `<image>` a valid repository on
> Docker Hub.
> * The `--platform` flag informs buildx to create Linux images for AMD 64-bit,
> Arm 64-bit, and Armv7 architectures.
> * The `--push` flag generates a multi-arch manifest and pushes all the images
> to Docker Hub.
Inspect the image using the [`docker buildx imagetools` command](../../engine/reference/commandline/buildx_imagetools.md):
```console
$ docker buildx imagetools inspect <username>/<image>:latest
Name: docker.io/<username>/<image>:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest: sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48
Manifests:
Name: docker.io/<username>/<image>:latest@sha256:71d7ecf3cd12d9a99e73ef448bf63ae12751fe3a436a007cb0969f0dc4184c8c
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/amd64
Name: docker.io/<username>/<image>:latest@sha256:5ba4ceea65579fdd1181dfa103cc437d8e19d87239683cf5040e633211387ccf
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/arm64
Name: docker.io/<username>/<image>:latest@sha256:29666fb23261b1f77ca284b69f9212d69fe5b517392dbdd4870391b7defcc116
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/arm/v7
```
The image is now available on Docker Hub with the tag `<username>/<image>:latest`.
You can use this image to run a container on Intel laptops, Amazon EC2 Graviton
instances, Raspberry Pis, and on other architectures. Docker pulls the correct
image for the current architecture, so Raspberry Pis run the 32-bit Arm version
and EC2 Graviton instances run 64-bit Arm.
The digest identifies a fully qualified image variant. You can also run images
targeted for a different architecture on Docker Desktop. For example, when
you run the following on macOS:
```console
$ docker run --rm docker.io/<username>/<image>:latest@sha256:2b77acdfea5dc5baa489ffab2a0b4a387666d1d526490e31845eb64e3e73ed20 uname -m
aarch64
```
```console
$ docker run --rm docker.io/<username>/<image>:latest@sha256:723c22f366ae44e419d12706453a544ae92711ae52f510e226f6467d8228d191 uname -m
armv7l
```
In the above example, `uname -m` returns `aarch64` and `armv7l` as expected,
even when running the commands on a native macOS or Windows developer machine.
## Support on Docker Desktop
[Docker Desktop](../../desktop/index.md) provides `binfmt_misc`
multi-architecture support, which means you can run containers for different
Linux architectures such as `arm`, `mips`, `ppc64le`, and even `s390x`.
This does not require any special configuration in the container itself, as it
uses [qemu-static](https://wiki.qemu.org/Main_Page){: target="_blank" rel="noopener" class="_" }
from the **Docker for Mac VM**. Because of this, you can run containers for
other architectures, like the `arm32v7` or `ppc64le` variants of the busybox
image.
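For example, running the 32-bit Arm variant of `busybox` on an Intel machine
reports the emulated architecture (output illustrative):
```console
$ docker run --rm arm32v7/busybox uname -m
armv7l
```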

build/building/packaging.md Normal file
View File

@ -0,0 +1,213 @@
---
title: Packaging your software
keywords: build, buildx, buildkit, getting started, dockerfile
redirect_from:
- /build/hellobuild/
---
## Dockerfile
It all starts with a Dockerfile.
Docker builds images by reading the instructions from a Dockerfile. A
Dockerfile is a text file containing instructions that adhere to a specific
format and describe how to assemble your application into a container image.
You can find the full specification in the [Dockerfile reference](../../engine/reference/builder.md).
Here are the most common types of instructions:
| Instruction | Description |
|--------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [`FROM <image>`](../../engine/reference/builder.md#from) | Defines a base for your image. |
| [`RUN <command>`](../../engine/reference/builder.md#run) | Executes any commands in a new layer on top of the current image and commits the result. `RUN` also has a shell form for running commands. |
| [`WORKDIR <directory>`](../../engine/reference/builder.md#workdir) | Sets the working directory for any `RUN`, `CMD`, `ENTRYPOINT`, `COPY`, and `ADD` instructions that follow it in the Dockerfile. |
| [`COPY <src> <dest>`](../../engine/reference/builder.md#copy) | Copies new files or directories from `<src>` and adds them to the filesystem of the container at the path `<dest>`. |
| [`CMD <command>`](../../engine/reference/builder.md#cmd) | Lets you define the default program that is run once you start the container based on this image. Each Dockerfile only has one `CMD`, and only the last `CMD` instance is respected when multiple exist. |
Dockerfiles are crucial inputs for image builds and can facilitate automated,
multi-layer image builds based on your unique configurations. Dockerfiles can
start simple and grow with your needs and support images that require complex
instructions. For all the possible instructions, see the [Dockerfile reference](../../engine/reference/builder.md).
Docker images consist of **read-only layers**, each resulting from an
instruction in the Dockerfile. Layers are stacked sequentially and each one is
a delta representing the changes applied to the previous layer.
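Once you've built an image, such as the `test:latest` image from the example
below, you can see this layer structure for yourself; each row of the output
corresponds to an instruction in the Dockerfile:
```console
$ docker history test:latest
```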
## Example
Here's a simple Dockerfile example to get you started with building images.
We'll take a simple "Hello World" Python Flask application, and bundle it into
a Docker image that you can test locally or deploy anywhere!
Let's say we have a `hello.py` file with the following content:
```python
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
```
Don't worry about understanding the full example if you're not familiar with
Python; it's just a simple web server that serves a single page that says
"Hello World".
> **Note**
>
> If you test the example, make sure to copy over the indentation as well! For
> more information about this sample Flask application, check the
> [Flask Quickstart](https://flask.palletsprojects.com/en/2.1.x/quickstart/){:target="_blank" rel="noopener" class="_"}
> page.
Here's the Dockerfile that will be used to create an image for our application:
```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
# install app dependencies
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip install flask==2.1.*
# install app
COPY hello.py /
# final configuration
ENV FLASK_APP=hello
EXPOSE 8000
CMD flask run --host 0.0.0.0 --port 8000
```
We start by specifying the [syntax directive](../../engine/reference/builder.md#syntax).
It pins the exact version of the Dockerfile syntax we're using:
```dockerfile
# syntax=docker/dockerfile:1
```
As a [best practice](../../develop/dev-best-practices.md), this should be the
very first line in all our Dockerfiles, as it tells BuildKit which version of
the Dockerfile syntax to use.
Next we define the first instruction:
```dockerfile
FROM ubuntu:22.04
```
Here the [`FROM` instruction](../../engine/reference/builder.md#from) sets our
base image to the 22.04 release of Ubuntu. All following instructions are
executed on this base image, in this case, an Ubuntu environment. The notation
`ubuntu:22.04` follows the `name:tag` standard for naming Docker images. When
you build your image, you use this notation to name your images, and you can
use it to specify any existing Docker image. There are many public images you
can leverage in your projects. Explore [Docker Hub](https://hub.docker.com/search?image_filter=official&q=&type=image){:target="_blank" rel="noopener" class="_"}
to find them.
```dockerfile
# install app dependencies
RUN apt-get update && apt-get install -y python3 python3-pip
```
This [`RUN` instruction](../../engine/reference/builder.md#run) executes a shell
command in a new layer on top of the current image.
In this example, our base image is a full Ubuntu operating system, so we have
access to its package manager, apt. The provided commands update our package
lists and then, after that succeeds, install `python3` and `pip`, the package
manager for Python.
Also note the `# install app dependencies` line. This is a comment. Comments in
Dockerfiles begin with the `#` symbol. As your Dockerfile evolves, comments can
be instrumental in documenting how your Dockerfile works for any future readers
and editors of the file.
> **Note**
>
> A line starting with `#`, like the syntax directive above, can be treated as
> a directive when you are using BuildKit (the default); otherwise it is
> treated as a regular comment and ignored.
```dockerfile
RUN pip install flask==2.1.*
```
This second `RUN` instruction requires that we've installed pip in the layer
before. After applying the previous directive, we can use the pip command to
install the flask web framework. This is the framework we've used to write
our basic "Hello World" application from above, so to run it in Docker, we'll
need to make sure it's installed.
```dockerfile
COPY hello.py /
```
Now we use the [`COPY` instruction](../../engine/reference/builder.md#copy) to
copy our `hello.py` file from the local build context into the root directory
of our image. After being executed, we'll end up with a file called `/hello.py`
inside the image.
```dockerfile
ENV FLASK_APP=hello
```
This [`ENV` instruction](../../engine/reference/builder.md#env) sets a Linux
environment variable we'll need later. This is a Flask-specific variable that
configures the command later used to run our `hello.py` application. Without
it, Flask wouldn't know where to find our application in order to run it.
```dockerfile
EXPOSE 8000
```
This [`EXPOSE` instruction](../../engine/reference/builder.md#expose) marks that
our final image has a service listening on port `8000`. This isn't required,
but it is a good practice, as users and tools can use this to understand what
your image does.
```dockerfile
CMD flask run --host 0.0.0.0 --port 8000
```
Finally, the [`CMD` instruction](../../engine/reference/builder.md#cmd) sets the
command that is run when the user starts a container based on this image. In
this case we'll start the Flask development server listening on all addresses
on port `8000`.
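As an aside, `CMD` also has an exec form written as a JSON array, which runs
the program directly without wrapping it in a shell; an equivalent sketch for
our case:
```dockerfile
CMD ["flask", "run", "--host", "0.0.0.0", "--port", "8000"]
```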
## Testing
To test our Dockerfile, we'll first build it using the [`docker build` command](../../engine/reference/commandline/build.md):
```console
$ docker build -t test:latest .
```
* The `-t test:latest` option specifies the name (required) and tag (optional)
of the image we're building.
* The `.` specifies the build context as the current directory. In this example,
this is where build expects to find the Dockerfile and the local files the
Dockerfile needs to access, in this case your Python application.
So, in accordance with the build command issued and how build context works,
your Dockerfile and Python app need to be in the same directory.
Now run your newly built image:
```console
$ docker run -p 8000:8000 test:latest
```
From your computer, open a browser and navigate to `http://localhost:8000`.
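You can also check the response from a second terminal. Given the application
above, you'd expect:
```console
$ curl http://localhost:8000
Hello World!
```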
> **Note**
>
> You can also build and run using [Play with Docker](https://labs.play-with-docker.com){:target="_blank" rel="noopener" class="_"},
> which provides you with a temporary Docker instance in the cloud.
## Other resources
If you are interested in examples in other languages, such as Go, check out
our [language-specific guides](../../language/index.md) in the Guides section.

View File

@ -1,53 +0,0 @@
---
title: Working with Buildx
description: Working with Docker Buildx
keywords: build, buildx, buildkit
redirect_from:
- /buildx/working-with-buildx/
---
## Overview
Docker Buildx is a CLI plugin that extends the docker command with full
support for the features provided by the [Moby BuildKit](https://github.com/moby/buildkit){:target="_blank" rel="noopener" class="_"}
builder toolkit. It provides the same user experience as `docker build` with many
new features like creating scoped builder instances and building against
multiple nodes concurrently.
## Build with Buildx
To start a new build, run the command `docker buildx build .`
```console
$ docker buildx build .
[+] Building 8.4s (23/32)
=> ...
```
Buildx builds using the BuildKit engine and does not require the
`DOCKER_BUILDKIT=1` environment variable to start the builds.
The [`docker buildx build` command](../../engine/reference/commandline/buildx_build.md)
supports features available for `docker build`, including features such as
outputs configuration, inline build caching, and specifying target platform.
In addition, Buildx also supports new features that are not yet available for
regular `docker build` like building manifest lists, distributed caching, and
exporting build results to OCI image tarballs.
Buildx is flexible and can be run in different configurations that are exposed
through various "drivers". Each driver defines how and where a build should
run, and each has a different feature set.
We currently support the following drivers:
* The `docker` driver ([guide](drivers/docker.md), [reference](/engine/reference/commandline/buildx_create/#driver))
* The `docker-container` driver ([guide](drivers/docker-container.md), [reference](/engine/reference/commandline/buildx_create/#driver))
* The `kubernetes` driver ([guide](drivers/kubernetes.md), [reference](/engine/reference/commandline/buildx_create/#driver))
* The `remote` driver ([guide](drivers/remote.md))
For more information on drivers, see the [drivers guide](drivers/index.md).
## High-level build options with Bake
Check out our guide about [Bake](../bake/index.md) to get started with the
[`docker buildx bake` command](../../engine/reference/commandline/buildx_bake.md).

View File

@ -1,79 +0,0 @@
---
title: Building multi-platform images
description: Different strategies for building multi-platform images
keywords: build, buildx, buildkit, multi-platform images
---
BuildKit is designed to work well for building for multiple platforms and not
only for the architecture and operating system that the user invoking the build
happens to run.
When you invoke a build, you can set the `--platform` flag to specify the target
platform for the build output, (for example, `linux/amd64`, `linux/arm64`, or
`darwin/amd64`).
When the current builder instance is backed by the `docker-container` driver,
you can specify multiple platforms together. In this case, it builds a manifest
list which contains images for all specified architectures. When you use this
image in [`docker run`](../../engine/reference/commandline/run.md) or
[`docker service`](../../engine/reference/commandline/service.md), Docker picks
the correct image based on the node's platform.
You can build multi-platform images using three different strategies that are
supported by Buildx and Dockerfiles:
1. Using the QEMU emulation support in the kernel
2. Building on multiple native nodes using the same builder instance
3. Using a stage in Dockerfile to cross-compile to different architectures
QEMU is the easiest way to get started if your node already supports it (for
example. if you are using Docker Desktop). It requires no changes to your
Dockerfile and BuildKit automatically detects the secondary architectures that
are available. When BuildKit needs to run a binary for a different architecture,
it automatically loads it through a binary registered in the `binfmt_misc`
handler.
For QEMU binaries registered with `binfmt_misc` on the host OS to work
transparently inside containers, they must be statically compiled and registered
with the `fix_binary` flag. This requires a kernel >= 4.8 and
binfmt-support >= 2.1.7. You can check for proper registration by checking if
`F` is among the flags in `/proc/sys/fs/binfmt_misc/qemu-*`. While Docker
Desktop comes preconfigured with `binfmt_misc` support for additional platforms,
for other installations it likely needs to be installed using
[`tonistiigi/binfmt`](https://github.com/tonistiigi/binfmt){:target="_blank" rel="noopener" class="_"}
image.
```console
$ docker run --privileged --rm tonistiigi/binfmt --install all
```
Using multiple native nodes provide better support for more complicated cases
that are not handled by QEMU and generally have better performance. You can
add additional nodes to the builder instance using the `--append` flag.
Assuming contexts `node-amd64` and `node-arm64` exist in `docker context ls`;
```console
$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .
```
Finally, depending on your project, the language that you use may have good
support for cross-compilation. In that case, multi-stage builds in Dockerfiles
can be effectively used to build binaries for the platform specified with
`--platform` using the native architecture of the build node. A list of build
arguments like `BUILDPLATFORM` and `TARGETPLATFORM` is available automatically
inside your Dockerfile and can be leveraged by the processes running as part
of your build.
```dockerfile
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM alpine
COPY --from=build /log /log
```

View File

@ -1,155 +0,0 @@
---
title: Hello Build
description: Build Hello World
keywords: build, buildx, buildkit, getting started, dockerfile, image layers, build instructions, build context
---
## Hello Build!
It all starts with a Dockerfile.
Dockerfiles are text files containing instructions. Dockerfiles adhere to a specific format and contain a **set of instructions** for which you can find a full reference in the [Dockerfile reference](../../engine/reference/builder).
Docker builds images by reading the instructions from a Dockerfile.
Docker images consist of **read-only layers**, each resulting from an instruction in the Dockerfile. Layers are stacked sequentially and each one is a delta representing the changes applied to the previous layer.
## Dockerfile basics
A Dockerfile is a text file containing all necessary instructions needed to assemble your application into a Docker container image.
Here are the most common types of instructions:
* [**FROM \<image\>**](../../engine/reference/builder/#from) - defines a base for your image.
* [**RUN \<command\>**](../../engine/reference/builder/#run) - executes any commands in a new layer on top of the current image and commits the result.
RUN also has a shell form for running commands.
* [**WORKDIR \<directory\>**](../../engine/reference/builder/#workdir) - sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile.
* [**COPY \<src\> \<dest\>**](../../engine/reference/builder/#copy) - copies new files or directories from `<src>` and adds them to the filesystem of the container at the path `<dest>`.
* [**CMD \<command\>**](../../engine/reference/builder/#cmd) - lets you define the default program that is run once you start the container based on this image.
Each Dockerfile only has one CMD, and only the last CMD instance is respected when multiple exist.
Dockerfiles are crucial inputs for image builds and can facilitate automated, multi-layer image builds based on your unique configurations. Dockerfiles can start simple and grow with your needs and support images that require complex instructions.
For all the possible instructions, see the [Dockerfile reference](../../engine/reference/builder/).
## Example
Here's a simple Dockerfile example to get you started with building images. We'll take a simple "Hello World" Python Flask application, and bundle it into a Docker image that we can test locally or deploy anywhere!
**Sample A**
Let's say we have the following in a `hello.py` file in our local directory:
```python
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
```
Don't worry about understanding the full example if you're not familiar with Python - it's just a simple web server that will contain a single page that says “Hello World”.
> **Note**
>
> If you test the example, make sure to copy over the indentation as well! For more information about this sample Flask application, check the [Flask Quickstart](https://flask.palletsprojects.com/en/2.1.x/quickstart/){:target="_blank" rel="noopener" class="_"} page.
**Sample B**
Here's a Dockerfile that Docker Build can use to create an image for our application:
```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
# install app dependencies
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip install flask==2.1.*
# install app
COPY hello.py /
# final configuration
ENV FLASK_APP=hello
EXPOSE 8000
CMD flask run --host 0.0.0.0 --port 8000
```
* `# syntax=docker/dockerfile:1`
This is our syntax directive. It pins the exact version of the Dockerfile syntax we're using. As a [best practice](../../develop/dev-best-practices/), this should be the very first line in all our Dockerfiles, as it tells BuildKit which version of the Dockerfile syntax to use.
See also [Syntax](../../engine/reference/builder/#syntax).
> **Note**
>
> Initiated by a `#` like regular comments, this line is treated as a directive when you are using BuildKit (the default); otherwise it is ignored.
* `FROM ubuntu:22.04`
Here the `FROM` instruction sets our base image to the 22.04 release of Ubuntu. All following instructions are executed on this base image, in this case, an Ubuntu environment.
The notation `ubuntu:22.04` follows the `name:tag` standard for naming Docker images.
When you build your image you use this notation to name your images and use it to specify any existing Docker image.
There are many public images you can leverage in your projects.
Explore [Docker Hub](https://hub.docker.com/search?image_filter=official&q=&type=image){:target="_blank" rel="noopener" class="_"} to find them.
* `# install app dependencies`
Comments in Dockerfiles begin with the `#` symbol.
As your Dockerfile evolves, comments can be instrumental in documenting how your Dockerfile works for any future readers and editors of the file.
See also the [FROM instruction](../../engine/reference/builder/#from) page in the Dockerfile reference.
* `RUN apt-get update && apt-get install -y python3 python3-pip`
This `RUN` instruction executes a shell command in a new layer on top of the current image. In this example, our base image is a full Ubuntu operating system, so we have access to its package manager, apt. The provided commands update our package lists and then, after that succeeds, install python3 and pip, the package manager for Python.
See also the [RUN instruction](../../engine/reference/builder/#run) page in the Dockerfile reference.
* `RUN pip install flask`
This second `RUN` instruction requires that we've installed pip in the layer before. After applying the previous directive, we can use the pip command to install the flask web framework. This is the framework we've used to write our basic “Hello World” application from above, so to run it in Docker, we'll need to make sure it's installed.
See also the [RUN instruction](../../engine/reference/builder/#run) page in the Dockerfile reference.
* `COPY hello.py /`
This COPY instruction copies our `hello.py` file from the build's context local directory into the root directory of our image. After this executes, we'll end up with a file called `/hello.py` inside the image, with all the content of our local copy!
See also the [COPY instruction](../../engine/reference/builder/#copy) page in the Dockerfile reference.
* `ENV FLASK_APP=hello`
This ENV instruction sets a Linux environment variable we'll need later. This is a Flask-specific variable that configures the command later used to run our `hello.py` application. Without this, Flask wouldn't know where to find our application to be able to run it.
See also the [ENV instruction](../../engine/reference/builder/#env) page in the Dockerfile reference.
* `EXPOSE 8000`
This EXPOSE instruction marks that our final image has a service listening on port 8000. This isn't required, but it is a good practice, as users and tools can use this to understand what your image does.
See also the [EXPOSE instruction](../../engine/reference/builder/#expose) page in the Dockerfile reference.
* `CMD flask run --host 0.0.0.0 --port 8000`
This CMD instruction sets the command that is run when the user starts a container based on this image. In this case we'll start the Flask development server listening on all hosts on port 8000.
See also the [CMD instruction](../../engine/reference/builder/#cmd) page in the Dockerfile reference.
## Test the example
Go ahead and try this example in your local Docker installation, or you can use [Play with Docker](https://labs.play-with-docker.com){:target="_blank" rel="noopener" class="_"}, which provides you with a temporary Docker instance in the cloud.
To test this example:
1. Create a file hello.py with the content of sample A.
2. Create a file named Dockerfile without an extension with the contents of sample B.
3. From your Docker instance build it with `docker build -t test:latest .`
Breaking down the docker build command:
* **`-t test:latest`** option specifies the name (required) and tag (optional) of the image we're building.
* **`.`** specifies the build context as the current directory. In this example, this is where build expects to find the Dockerfile and the local files the Dockerfile needs to access, in this case your Python application.
So, in accordance with the build command issued and how build context works, your Dockerfile and Python app need to be in the same directory.
4. Run your newly built image with `docker run -p 8000:8000 test:latest`
From your computer, open a browser and navigate to `http://localhost:8000` or, if you're using [Play with Docker](https://labs.play-with-docker.com){:target="_blank" rel="noopener" class="_"}, click on Open Port.
## Other resources
If you are interested in examples in other languages, such as Go, check out our [language-specific guides](../../language) in the Guides section.

View File

@ -2,82 +2,110 @@
title: Overview of Docker Build
description: Introduction and overview of Docker Build
keywords: build, buildx, buildkit
redirect_from:
- /build/buildx/
---
Docker Build is one of Docker Engine's most used features. Whenever you are
## Overview
Docker Build is one of Docker Engine's most used features. Whenever you are
creating an image you are using Docker Build. Build is a key part of your
software development life cycle allowing you to package and bundle your code
and ship it anywhere.
Engine uses a client-server architecture and is composed of multiple components
and tools. The most common method of executing a build is by issuing a
`docker build` command from the Docker CLI. The CLI sends the request to Docker
Engine which, in turn, executes your build.
[`docker build` command](../engine/reference/commandline/build.md). The CLI
sends the request to Docker Engine which, in turn, executes your build.
There are now two components in Engine that can be used to create the build.
Starting with the 18.09 release, Engine is shipped with [Moby BuildKit](https://github.com/moby/buildkit){:target="_blank" rel="noopener" class="_"},
There are now two components in Engine that can be used to build an image.
Starting with the [18.09 release](../engine/release-notes/18.09.md#18090), Engine is
shipped with Moby [BuildKit](https://github.com/moby/buildkit){:target="_blank" rel="noopener" class="_"},
the new component for executing your builds by default.
With BuildKit, the new client [Docker Buildx](buildx/index.md), becomes
available as a CLI plugin. Docker Buildx extends the docker build command -
namely through the additional `docker buildx build` command - and fully
supports the new features BuildKit offers.
BuildKit is the backend evolution of the Legacy Builder; it comes with new
and much-improved functionality that can be a powerful tool for improving your
builds' performance or the reusability of your Dockerfiles, and it also
introduces support for complex scenarios.
## Docker Build features
The new client [Docker Buildx](https://github.com/docker/buildx){:target="_blank" rel="noopener" class="_"},
is a CLI plugin that extends the docker command with the full support of the
features provided by BuildKit builder toolkit. `docker buildx build` provides
the same user experience as `docker build` with many new features like creating
scoped builder instances, building against multiple nodes concurrently, outputs
configuration, inline build caching, and specifying target platform. In
addition, Buildx also supports new features that are not yet available for
regular `docker build` like building manifest lists, distributed caching, and
exporting build results to OCI image tarballs.
Docker Build is way more than your `docker build` command and is not only about packaging your code, it's a whole ecosystem of tools and features that support you not only with common workflow tasks but also provides you with support for more complex and advanced scenarios.
Here's an overview of all the use cases with which Build can support you:
Docker Build is way more than a simple build command, and it's not only about
packaging your code. It's a whole ecosystem of tools and features that support
not only common workflow tasks but also more complex and advanced scenarios:
### Building your images
## Building your images
* **Packaging your software**
Bundle and package your code to run anywhere, from your local Docker Desktop, to Docker Engine and Kubernetes on the cloud.
To get started with Build, see the [Hello Build](hellobuild.md) page.
### Packaging your software
Bundle and package your code to run anywhere, from your local Docker Desktop,
to Docker Engine and Kubernetes on the cloud. To get started with Build,
see the [Packaging your software](building/packaging.md) page.
### Choosing a build driver
* **Choosing a build driver**
Run Buildx with different configurations depending on the scenario you are
working on, regardless of whether you are using your local machine or a remote
compute cluster, all from the comfort of your local working environment.
For more information on drivers, see the [drivers guide](buildx/drivers/index.md).
For more information on drivers, see the [drivers guide](building/drivers/index.md).
* **Optimizing builds with cache management**
Improve build performance by using a persistent shared build cache to avoid repeating costly operations such as package installations, downloading files from the internet, or code build steps.
### Optimizing builds with cache management
* **Creating build-once, run-anywhere with multi-platform builds**
Collaborate across platforms with one build artifact.
See [Build multi-platform images](buildx/multiplatform-images.md).
Improve build performance by using a persistent shared build cache to avoid
repeating costly operations such as package installations, downloading files
from the internet, or code build steps.
### Automating your builds
### Creating build-once, run-anywhere with multi-platform builds
* **Integrating with GitHub**
Automate your image builds to run in GitHub Actions using the official Docker build actions. See:
* [GitHub Action to build and push Docker images with Buildx](https://github.com/docker/build-push-action).
* [GitHub Action to extract metadata from Git reference and GitHub events](https://github.com/docker/metadata-action/).
Collaborate across platforms with one build artifact. See the
[Multi-platform images](building/multi-platform.md) page.
* **Orchestrating builds across complex projects together**
Connect your builds together and easily parameterize your images using buildx bake.
## Automating your builds
### Integrating with GitHub
Automate your image builds to run in GitHub Actions using the official Docker
build actions:
* [GitHub Action to build and push Docker images with Buildx](https://github.com/docker/build-push-action).
* [GitHub Action to extract metadata from Git reference and GitHub events](https://github.com/docker/metadata-action/).
### Orchestrating builds across complex projects together
Connect your builds together and easily parameterize your images using buildx bake.
See [High-level build options with Bake](bake/index.md).
### Customizing your builds
## Customizing your builds
* **Select your build output format**
Choose from a variety of available output formats to export any artifact you like from BuildKit, not just Docker images.
See [Set the export action for the build result](../engine/reference/commandline/buildx_build.md/#output).
### Select your build output format
* **Managing build secrets**
Securely access protected repositories and resources at build time without leaking data into the final build or the cache.
Choose from a variety of available output formats to export any artifact you
like from BuildKit, not just Docker images. See [Set the export action for the build result](../engine/reference/commandline/buildx_build.md#output).
### Extending BuildKit
### Managing build secrets
* **Custom syntax on Dockerfile**
Use experimental versions of the Dockerfile frontend, or even just bring your own to BuildKit using the power of custom frontends.
See also the [Syntax directive](../engine/reference/builder/#syntax).
Securely access protected repositories and resources at build time without
leaking data into the final build or the cache.
* **Configure BuildKit**
Take a deep dive into the internal BuildKit configuration to get the most out of your builds.
See also [`buildkitd.toml`](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md), the configuration file for `buildkitd`.
## Extending BuildKit
### Custom syntax on Dockerfile
Use experimental versions of the Dockerfile frontend, or even just bring your
own to BuildKit using the power of custom frontends. See also the
[Syntax directive](../engine/reference/builder.md#syntax).
### Configure BuildKit
Take a deep dive into the internal BuildKit configuration to get the most out
of your builds. See also [`buildkitd.toml`](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md),
the configuration file for `buildkitd`.

View File

@ -6,7 +6,7 @@ toc_max: 2
---
This page contains information about the new features, improvements, and bug
fixes in [Buildx](buildx/index.md).
fixes in [Docker Buildx](https://github.com/docker/buildx){:target="_blank" rel="noopener" class="_"}.
## 0.9.1
@ -29,7 +29,7 @@ For more details, see the complete release notes in the [Buildx GitHub repositor
### New features
* Support for new [driver `remote`](buildx/drivers/remote.md) that you can use
* Support for new [driver `remote`](building/drivers/remote.md) that you can use
to connect to any already running BuildKit instance {% include github_issue.md repo="docker/buildx" number="1078" %}
{% include github_issue.md repo="docker/buildx" number="1093" %} {% include github_issue.md repo="docker/buildx" number="1094" %}
{% include github_issue.md repo="docker/buildx" number="1103" %} {% include github_issue.md repo="docker/buildx" number="1134" %}

Binary file not shown.


View File

@ -13,25 +13,48 @@ Docker Desktop retrieves the extension image according to the user's system architecture.
### Build and push for multiple architectures
If you created an extension from the `docker extension init` command, the `Makefile` at the root of the directory includes a target with name `push-extension`.
If you created an extension from the `docker extension init` command, the
`Makefile` at the root of the directory includes a target with name
`push-extension`.
You can do `make push-extension` to build your extension against both `linux/amd64` and `linux/arm64` platforms, and push them to Docker Hub. For example:
You can do `make push-extension` to build your extension against both
`linux/amd64` and `linux/arm64` platforms, and push them to Docker Hub.
`docker buildx build --platform=linux/amd64,linux/arm64 -t <name-of-your-extension> .`
Alternatively, if you started from an empty directory, use the command below to build your extension for multiple architectures:
```
docker buildx build \
--push \
--platform=linux/amd64,linux/arm64 \
--tag=my-extension:0.0.1 .
For example:
```console
$ make push-extension
```
The information above serves as a guide to help you get started. It's up to you to define the CI/CD process to build and push the extension.
Alternatively, if you started from an empty directory, use the command below
to build your extension for multiple architectures:
![hub-multi-arch-extension](images/hub-multi-arch-extension.png)
```console
$ docker buildx build --push --platform=linux/amd64,linux/arm64 --tag=username/my-extension:0.0.1 .
```
You can then check the image manifest to see if the image is available for both
architectures using the [`docker buildx imagetools` command](../../../engine/reference/commandline/buildx_imagetools.md):
```console
$ docker buildx imagetools inspect username/my-extension:0.0.1
Name: docker.io/username/my-extension:0.0.1
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest: sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48
Manifests:
Name: docker.io/username/my-extension:0.0.1@sha256:71d7ecf3cd12d9a99e73ef448bf63ae12751fe3a436a007cb0969f0dc4184c8c
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/amd64
Name: docker.io/username/my-extension:0.0.1@sha256:5ba4ceea65579fdd1181dfa103cc437d8e19d87239683cf5040e633211387ccf
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/arm64
```
> **Note**
>
> For more information, see the [Multi-platform images](../../../build/building/multi-platform.md) page.
### Adding multi-arch binaries

View File

@ -45,7 +45,7 @@ It provides a simple interface that enables you to manage your containers, appli
- [Docker Engine](../engine/index.md)
- Docker CLI client
- [Docker Buildx](../build/buildx/index.md)
- [Docker Buildx](../build/index.md)
- [Docker Compose](../compose/index.md)
- [Docker Content Trust](../engine/security/trust/index.md)
- [Kubernetes](https://github.com/kubernetes/kubernetes/)

View File

@ -1,176 +0,0 @@
---
description: Multi-CPU Architecture Support
keywords: mac, windows, Multi-CPU architecture support
redirect_from:
- /docker-for-mac/multi-arch/
- /mackit/multi-arch/
title: Leverage multi-CPU architecture support
---
Docker images can support multiple architectures, which means that a single
image may contain variants for different architectures, and sometimes for different
operating systems, such as Windows.
When running an image with multi-architecture support, `docker` automatically
selects the image variant that matches your OS and architecture.
Most of the Docker Official Images on Docker Hub provide a [variety of architectures](https://github.com/docker-library/official-images#architectures-other-than-amd64){: target="_blank" rel="noopener" class="_" }.
For example, the `busybox` image supports `amd64`, `arm32v5`, `arm32v6`,
`arm32v7`, `arm64v8`, `i386`, `ppc64le`, and `s390x`. When running this image
on an `x86_64` / `amd64` machine, the `amd64` variant is pulled and run.
## Multi-arch support on Docker Desktop
**Docker Desktop** provides `binfmt_misc` multi-architecture support,
which means you can run containers for different Linux architectures
such as `arm`, `mips`, `ppc64le`, and even `s390x`.
This does not require any special configuration in the container itself as it uses
[qemu-static](https://wiki.qemu.org/Main_Page){: target="_blank" rel="noopener" class="_" }
from the **Docker for Mac VM**. Because of this, you can run containers for
other architectures, like the `arm32v7` or `ppc64le` variants of the busybox
image.
## Build multi-arch images with Buildx
Docker is now making it easier than ever to develop containers on, and for Arm
servers and devices. Using the standard Docker tooling and processes, you can
start to build, push, pull, and run images seamlessly on different compute
architectures. In most cases, you don't have to make any changes to Dockerfiles
or source code to start building for Arm.
Docker introduces a new CLI command called `buildx`. You can use the `buildx`
command on Docker Desktop for Mac and Windows to build multi-arch images, link
them together with a manifest file, and push them all to a registry using a
single command. With the included emulation, you can transparently build more
than just native images. Buildx accomplishes this by adding new builder
instances based on BuildKit, and leveraging Docker Desktop's technology stack
to run non-native binaries.
For more information about the Buildx CLI command, see [Buildx](../build/buildx/index.md)
and the [`docker buildx` command line reference](../engine/reference/commandline/buildx.md).
### Build and run multi-architecture images
Run the `docker buildx ls` command to list the existing builders. This displays
the default builder, which is our old builder.
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
default * docker
default default running linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
```
Create a new builder which gives access to the new multi-architecture features.
```console
$ docker buildx create --name mybuilder
mybuilder
```
Alternatively, run `docker buildx create --name mybuilder --use` to create a new
builder and switch to it using a single command.
Switch to the new builder and inspect it.
```console
$ docker buildx use mybuilder
$ docker buildx inspect --bootstrap
[+] Building 2.5s (1/1) FINISHED
=> [internal] booting buildkit 2.5s
=> => pulling image moby/buildkit:master 1.3s
=> => creating container buildx_buildkit_mybuilder0 1.2s
Name: mybuilder
Driver: docker-container
Nodes:
Name: mybuilder0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
```
Test the workflow to ensure you can build, push, and run multi-architecture
images. Create a simple example Dockerfile, build a couple of image variants,
and push them to Docker Hub.
The following example uses a single `Dockerfile` to build an Ubuntu image with cURL
installed for multiple architectures.
Create a `Dockerfile` with the following:
```dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl
```
Build the Dockerfile with buildx, passing the list of architectures to build for:
```console
$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
[+] Building 6.9s (19/19) FINISHED
...
=> => pushing layers 2.7s
=> => pushing manifest for docker.io/username/demo:latest 2.2
```
Where `username` is a valid Docker username.
> **Notes:**
>
> - The `--platform` flag informs buildx to generate Linux images for AMD 64-bit,
> Arm 64-bit, and Armv7 architectures.
> - The `--push` flag generates a multi-arch manifest and pushes all the images
> to Docker Hub.
Inspect the image using `docker buildx imagetools`.
```console
$ docker buildx imagetools inspect username/demo:latest
Name: docker.io/username/demo:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest: sha256:2a2769e4a50db6ac4fa39cf7fb300fa26680aba6ae30f241bb3b6225858eab76
Manifests:
Name: docker.io/username/demo:latest@sha256:8f77afbf7c1268aab1ee7f6ce169bb0d96b86f585587d259583a10d5cd56edca
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/amd64
Name: docker.io/username/demo:latest@sha256:2b77acdfea5dc5baa489ffab2a0b4a387666d1d526490e31845eb64e3e73ed20
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/arm64
Name: docker.io/username/demo:latest@sha256:723c22f366ae44e419d12706453a544ae92711ae52f510e226f6467d8228d191
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/arm/v7
```
The image is now available on Docker Hub with the tag `username/demo:latest`. You
can use this image to run a container on Intel laptops, Amazon EC2 Graviton instances,
Raspberry Pis, and on other architectures. Docker pulls the correct image for the
current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 Graviton
instances run 64-bit Arm. The SHA tags identify a fully qualified image variant.
You can also run images targeted for a different architecture on Docker Desktop.
You can run the images using the SHA tag, and verify the architecture. For
example, when you run the following on a macOS:
```console
$ docker run --rm docker.io/username/demo:latest@sha256:2b77acdfea5dc5baa489ffab2a0b4a387666d1d526490e31845eb64e3e73ed20 uname -m
aarch64
```
```console
$ docker run --rm docker.io/username/demo:latest@sha256:723c22f366ae44e419d12706453a544ae92711ae52f510e226f6467d8228d191 uname -m
armv7l
```
In the above example, `uname -m` returns `aarch64` and `armv7l` as expected,
even when running the commands on a native macOS or Windows developer machine.

View File

@ -435,7 +435,7 @@ Note that you must sign in and create a Docker ID in order to download Docker De
Docker Desktop Community 2.1.0.0 contains the following experimental features.
* Docker App: Docker App is a CLI plugin that helps configure, share, and install applications. For more information, see [Working with Docker App](/app/working-with-app/).
* Docker Buildx: Docker Buildx is a CLI plugin for extended build capabilities with BuildKit. For more information, see [Buildx component](../../build/buildx/index.md).
* Docker Buildx: Docker Buildx is a CLI plugin for extended build capabilities with BuildKit. For more information, see the [Build page](../../build/index.md).
### Bug fixes and minor changes

View File

@ -557,7 +557,7 @@ Note that you must sign in and create a Docker ID in order to download Docker De
Docker Desktop Community 2.1.0.0 contains the following experimental features:
* Docker App: Docker App is a CLI plugin that helps configure, share, and install applications. For more information, see [Working with Docker App](/app/working-with-app/).
* Docker Buildx: Docker Buildx is a CLI plugin for extended build capabilities with BuildKit. For more information, see [Buildx component](../../build/buildx/index.md).
* Docker Buildx: Docker Buildx is a CLI plugin for extended build capabilities with BuildKit. For more information, see the [Build page](../../build/index.md).
### Bug fixes and minor changes