build: restructured manuals toc

Signed-off-by: David Karlsson <david.karlsson@docker.com>
David Karlsson 2022-12-13 11:25:19 +01:00
parent 76c26baec2
commit 46901bb961
30 changed files with 212 additions and 199 deletions

View File

@@ -1561,66 +1561,78 @@ manuals:
section:
- path: /build/
title: Overview
- path: /build/install-buildx/
title: Install Buildx
- sectiontitle: Building images
section:
- path: /build/building/packaging/
title: Packaging your software
- path: /build/building/context/
title: Build context
- sectiontitle: Build drivers
section:
- path: /build/building/drivers/
title: Overview
- path: /build/building/drivers/docker/
title: Docker driver
- path: /build/building/drivers/docker-container/
title: Docker container driver
- path: /build/building/drivers/kubernetes/
title: Kubernetes driver
- path: /build/building/drivers/remote/
title: Remote driver
- sectiontitle: Cache
section:
- path: /build/building/cache/
title: Optimizing builds with cache
- path: /build/building/cache/garbage-collection/
title: Garbage collection
- sectiontitle: Cache backends
section:
- path: /build/building/cache/backends/
title: Overview
- path: /build/building/cache/backends/inline/
title: Inline
- path: /build/building/cache/backends/local/
title: Local
- path: /build/building/cache/backends/registry/
title: Registry
- path: /build/building/cache/backends/gha/
title: GitHub Actions
- path: /build/building/cache/backends/azblob/
title: Azure Blob Storage
- path: /build/building/cache/backends/s3/
title: Amazon S3
- sectiontitle: Exporters
section:
- path: /build/building/exporters/
title: Overview
- path: /build/building/exporters/image-registry/
title: Image and registry exporters
- path: /build/building/exporters/local-tar/
title: Local and tar exporters
- path: /build/building/exporters/oci-docker/
title: OCI and Docker exporters
- path: /build/building/multi-stage/
title: Multi-stage builds
- path: /build/building/multi-platform/
title: Multi-platform images
- path: /build/building/base-images/
title: Create your own base image
- sectiontitle: Customizing builds
- path: /build/building/multiple-builders/
title: Using multiple builders
- sectiontitle: Drivers
section:
- path: /build/customize/color-output-controls/
title: Color output controls
- path: /build/drivers/
title: Drivers overview
- path: /build/drivers/docker/
title: Docker driver
- path: /build/drivers/docker-container/
title: Docker container driver
- path: /build/drivers/kubernetes/
title: Kubernetes driver
- path: /build/drivers/remote/
title: Remote driver
- sectiontitle: Cache
section:
- path: /build/cache/
title: Optimizing builds with cache
- path: /build/cache/garbage-collection/
title: Garbage collection
- sectiontitle: Cache backends
section:
- path: /build/cache/backends/
title: Overview
- path: /build/cache/backends/inline/
title: Inline
- path: /build/cache/backends/local/
title: Local
- path: /build/cache/backends/registry/
title: Registry
- path: /build/cache/backends/gha/
title: GitHub Actions
- path: /build/cache/backends/azblob/
title: Azure Blob Storage
- path: /build/cache/backends/s3/
title: Amazon S3
- sectiontitle: Exporters
section:
- path: /build/exporters/
title: Overview
- path: /build/exporters/image-registry/
title: Image and registry exporters
- path: /build/exporters/local-tar/
title: Local and tar exporters
- path: /build/exporters/oci-docker/
title: OCI and Docker exporters
- sectiontitle: Continuous integration
section:
- path: /build/ci/
title: CI with Docker
- sectiontitle: GitHub Actions
section:
- path: /build/ci/github-actions/
title: Introduction
- path: /build/ci/github-actions/examples/
title: Examples
- path: /build/ci/github-actions/configure-builder/
title: Configuring your builder
- sectiontitle: Bake
section:
- path: /build/bake/
@@ -1645,24 +1657,8 @@ manuals:
title: Configure
- path: /build/buildkit/toml-configuration/
title: TOML configuration
- sectiontitle: Buildx
section:
- path: /build/buildx/install/
title: Install Buildx
- path: /build/buildx/multiple-builders/
title: Using multiple builders
- sectiontitle: Continuous integration
section:
- path: /build/ci/
title: CI with Docker
- sectiontitle: GitHub Actions
section:
- path: /build/ci/github-actions/
title: Introduction
- path: /build/ci/github-actions/examples/
title: Examples
- path: /build/ci/github-actions/configure-builder/
title: Configuring your builder
- path: /build/buildkit/color-output-controls/
title: Color output controls
- path: /build/release-notes/
title: Release notes
- sectiontitle: Docker Compose

View File

@@ -92,7 +92,7 @@
- /go/attack-surface/
"/build/buildkit/#getting-started":
- /go/buildkit/
"/build/buildx/install/":
"/build/install-buildx/":
# Instructions on installing Buildx. Redirect used in Docker CLI.
- /go/buildx/
"/engine/reference/commandline/compose_build/":

View File

@@ -1,4 +1,5 @@
---
title: Create a base image
description: How to create base images
keywords: images, base image, examples
redirect_from:
@@ -6,7 +7,6 @@ redirect_from:
- /engine/articles/baseimages/
- /engine/userguide/eng-image/baseimages/
- /develop/develop-images/baseimages/
title: Create a base image
---
Most Dockerfiles start from a parent image. If you need to completely control

View File

@@ -116,9 +116,9 @@ default * docker
```
This displays the default built-in driver, which uses the BuildKit server
components built directly into the docker engine, also known as the [`docker` driver](../building/drivers/docker.md).
components built directly into the docker engine, also known as the [`docker` driver](../drivers/docker.md).
Create a new builder using the [`docker-container` driver](../building/drivers/docker-container.md)
Create a new builder using the [`docker-container` driver](../drivers/docker-container.md)
which gives you access to more complex features like multi-platform builds
and the more advanced cache exporters, which are currently unsupported in the
default `docker` driver:
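The create command itself is trimmed from this hunk. As a rough sketch (the builder name `mybuilder` is a placeholder, not taken from the diff):

```console
$ docker buildx create --name mybuilder --driver docker-container --use
$ docker buildx inspect mybuilder --bootstrap
```

The `--use` flag makes the new builder the default, and `--bootstrap` starts its BuildKit container immediately instead of on the first build.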

View File

@@ -2,6 +2,8 @@
title: Color output controls
description: Learn how to control the color of BuildKit progress output.
keywords: build, buildx buildkit, color, terminal
redirect_from:
- /build/customize/color-output-controls/
---
BuildKit and Buildx have support for modifying the colors that are used to

View File

@@ -1,37 +0,0 @@
---
title: Using multiple builders
description: How to instantiate and work with multiple builders
keywords: build, buildx, buildkit, builders, build drivers
---
By default, Buildx uses the `docker` driver if it is supported, providing a user
experience very similar to the native `docker build`. Note that you must use a
local shared daemon to build your applications.
Buildx allows you to create new instances of isolated builders. You can use this
to get a scoped environment for your CI builds that does not change the state of
the shared daemon, or for isolating builds for different projects. You can create
a new instance for a set of remote nodes, forming a build farm, and quickly
switch between them.
You can create new instances using the [`docker buildx create`](../../engine/reference/commandline/buildx_create.md)
command. This creates a new builder instance with a single node based on your
current configuration.
To use a remote node you can specify the `DOCKER_HOST` or the remote context name
while creating the new builder. After creating a new instance, you can manage its
lifecycle using the [`docker buildx inspect`](../../engine/reference/commandline/buildx_inspect.md),
[`docker buildx stop`](../../engine/reference/commandline/buildx_stop.md), and
[`docker buildx rm`](../../engine/reference/commandline/buildx_rm.md) commands.
To list all available builders, use [`docker buildx ls`](../../engine/reference/commandline/buildx_ls.md).
After creating a new builder you can also append new nodes to it.
To switch between different builders, use [`docker buildx use <name>`](../../engine/reference/commandline/buildx_use.md).
After running this command, the build commands will automatically use this
builder.
Docker also features a [`docker context`](../../engine/reference/commandline/context.md)
command that you can use to provide names for remote Docker API endpoints. Buildx
integrates with `docker context` to ensure all the contexts automatically get a
default builder instance. You can also set the context name as the target when
you create a new builder instance or when you add a node to it.
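Taken together, the lifecycle described above can be sketched as the following sequence (the builder name `ci-builder` is illustrative):

```console
$ docker buildx create --name ci-builder        # create an isolated builder instance
$ docker buildx ls                              # list all available builders
$ docker buildx use ci-builder                  # make it the default for builds
$ docker buildx inspect ci-builder --bootstrap  # boot the builder and show its status
$ docker buildx stop ci-builder                 # stop the running BuildKit instance
$ docker buildx rm ci-builder                   # remove the builder entirely
```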

View File

@@ -1,6 +1,8 @@
---
title: "Azure Blob Storage cache"
keywords: build, buildx, cache, backend, azblob, azure
redirect_from:
- /build/building/cache/backends/azblob/
---
> **Warning**

View File

@@ -1,6 +1,8 @@
---
title: "GitHub Actions cache"
keywords: build, buildx, cache, backend, gha, github, actions
redirect_from:
- /build/building/cache/backends/gha/
---
> **Warning**
@@ -110,4 +112,4 @@ For more information on the `gha` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#github-actions-cache-experimental){:target="blank" rel="noopener" class=""}.
For more information about using GitHub Actions with Docker, see
[Introduction to GitHub Actions](../../../ci/github-actions/index.md)
[Introduction to GitHub Actions](../../ci/github-actions/index.md)

View File

@@ -1,6 +1,8 @@
---
title: "Cache storage backends"
keywords: build, buildx, cache, backend, gha, azblob, s3, registry, local
redirect_from:
- /build/building/cache/backends/
---
To ensure fast builds, BuildKit automatically caches the build result in its own
@@ -15,7 +17,7 @@ important to keep the runtime of image builds as low as possible.
>
> If you use secrets or credentials inside your build process, ensure you
> manipulate them using the dedicated
> [`--secret` option](../../../../engine/reference/commandline/buildx_build/#secret).
> [`--secret` option](../../../engine/reference/commandline/buildx_build/#secret).
> Manually managing secrets using `COPY` or `ARG` could result in leaked
> credentials.
@@ -46,9 +48,9 @@ Buildx supports the following cache storage backends:
## Command syntax
To use any of the cache backends, you first need to specify it on build with the
[`--cache-to` option](../../../../engine/reference/commandline/buildx_build/#cache-to)
[`--cache-to` option](../../../engine/reference/commandline/buildx_build/#cache-to)
to export the cache to your storage backend of choice. Then, use the
[`--cache-from` option](../../../../engine/reference/commandline/buildx_build/#cache-from)
[`--cache-from` option](../../../engine/reference/commandline/buildx_build/#cache-from)
to import the cache from the storage backend into the current build. Unlike the
local BuildKit cache (which is always enabled), all of the cache storage
backends must be explicitly exported to, and explicitly imported from. All cache
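As a combined sketch of that syntax, using the `registry` backend (registry and image names are placeholders):

```console
$ docker buildx build \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  --tag registry.example.com/myapp:latest \
  --push .
```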

View File

@@ -1,6 +1,8 @@
---
title: "Inline cache"
keywords: build, buildx, cache, backend, inline
redirect_from:
- /build/building/cache/backends/inline/
---
The `inline` cache storage backend is the simplest way to get an external cache

View File

@@ -1,6 +1,8 @@
---
title: "Local cache"
keywords: build, buildx, cache, backend, local
redirect_from:
- /build/building/cache/backends/local/
---
The `local` cache store is a simple cache option that stores your cache as files

View File

@@ -1,6 +1,8 @@
---
title: "Registry cache"
keywords: build, buildx, cache, backend, registry
redirect_from:
- /build/building/cache/backends/registry/
---
The `registry` cache storage can be thought of as an extension to the `inline`

View File

@@ -1,6 +1,8 @@
---
title: "Amazon S3 cache"
keywords: build, buildx, cache, backend, s3
redirect_from:
- /build/building/cache/backends/s3/
---
> **Warning**

View File

@@ -1,10 +1,12 @@
---
title: Garbage collection
keywords: build, buildx, buildkit, garbage collection, prune
redirect_from:
- /build/building/cache/garbage-collection/
---
While [`docker builder prune`](../../../engine/reference/commandline/builder_prune.md)
or [`docker buildx prune`](../../../engine/reference/commandline/buildx_prune.md)
While [`docker builder prune`](../../engine/reference/commandline/builder_prune.md)
or [`docker buildx prune`](../../engine/reference/commandline/buildx_prune.md)
commands run at once, garbage collection runs periodically and follows an
ordered list of prune policies.
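You can also reclaim space on demand instead of waiting for the periodic garbage collection. A minimal sketch, where the flag values (48 hours, 10 GB) are illustrative:

```console
$ docker builder prune --filter until=48h --keep-storage 10GB
```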
@@ -19,7 +21,7 @@ Depending on the [driver](../drivers/index.md) used by your builder instance,
the garbage collection will use a different configuration file.
If you're using the [`docker` driver](../drivers/docker.md), garbage collection
can be configured in the [Docker Daemon configuration](../../../engine/reference/commandline/dockerd.md#daemon-configuration-file)
can be configured in the [Docker Daemon configuration](../../engine/reference/commandline/dockerd.md#daemon-configuration-file)
file:
```json
@@ -39,7 +41,7 @@ file:
```
For other drivers, garbage collection can be configured using the
[BuildKit configuration](../../buildkit/toml-configuration.md) file:
[BuildKit configuration](../buildkit/toml-configuration.md) file:
```toml
[worker.oci]

View File

@@ -4,6 +4,8 @@ description: Improve your build speeds by taking advantage of the builtin cache
keywords: >
build, buildx, buildkit, dockerfile, image layers, build instructions, build
context
redirect_from:
- /build/building/cache/
---
You will likely find yourself rebuilding the same Docker image over and over
@@ -35,7 +37,7 @@ Each instruction in this Dockerfile translates (roughly) to a layer in your
final image. You can think of image layers as a stack, with each layer adding
more content on top of the layers that came before it:
![Image layer diagram showing the above commands chained together one after the other](../../images/cache-stack.svg){:.invertible}
![Image layer diagram showing the above commands chained together one after the other](../images/cache-stack.svg){:.invertible}
Whenever a layer changes, that layer will need to be rebuilt. For example,
suppose you make a change to your program in the `main.c` file. After this
@@ -43,13 +45,13 @@ change, the `COPY` command will have to run again in order for those changes to
appear in the image. In other words, Docker will invalidate the cache for this
layer.
![Image layer diagram, but now with the link between COPY and WORKDIR marked as invalid](../../images/cache-stack-invalidate-copy.svg){:.invertible}
![Image layer diagram, but now with the link between COPY and WORKDIR marked as invalid](../images/cache-stack-invalidate-copy.svg){:.invertible}
If a layer changes, all other layers that come after it are also affected. When
the layer with the `COPY` command gets invalidated, all layers that follow will
need to run again, too:
![Image layer diagram, but now with all links after COPY marked as invalid](../../images/cache-stack-invalidate-rest.svg){:.invertible}
![Image layer diagram, but now with all links after COPY marked as invalid](../images/cache-stack-invalidate-rest.svg){:.invertible}
And that's the Docker build cache in a nutshell. Once a layer changes, all
downstream layers need to be rebuilt as well. Even if they wouldn't build
@@ -65,7 +67,7 @@ anything differently, they still need to re-run.
> the image on the same host one week later will still get you the same packages
> as before. The only way to force a rebuild is by making sure that a layer
> before it has changed, or by clearing the build cache using
> [`docker builder prune`](/engine/reference/commandline/builder_build/).
> [`docker builder prune`](../../engine/reference/commandline/builder_prune/).
## How can I use the cache efficiently?
@@ -129,7 +131,7 @@ To get started, here are a few tips and tricks:
Be considerate of what files you add to the image.
Running a command like `COPY . /src` will `COPY` your entire [build context](../context.md)
Running a command like `COPY . /src` will `COPY` your entire [build context](../building/context.md)
into the image. If you've got logs, package manager artifacts, or even previous
build results in your current directory, those will also be copied over. This
could make your image larger than it needs to be, especially as those files are
@@ -151,7 +153,7 @@ COPY . /src
```
You can also create a
[`.dockerignore` file](../../../engine/reference/builder.md#dockerignore-file),
[`.dockerignore` file](../../engine/reference/builder.md#dockerignore-file),
and use that to specify which files and directories to exclude from the build
context.
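A minimal `.dockerignore` sketch, created from the shell (the entries are assumptions about a typical project, not taken from the diff):

```console
$ cat > .dockerignore <<'EOF'
# Keep local artifacts and logs out of the build context
node_modules/
build/
*.log
.git/
EOF
```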

View File

@@ -45,7 +45,7 @@ Logs will be available at the end of a job:
## Daemon configuration
You can provide a [BuildKit configuration](../../buildkit/toml-configuration.md)
to your builder if you're using the [`docker-container` driver](../../building/drivers/docker-container.md)
to your builder if you're using the [`docker-container` driver](../../drivers/docker-container.md)
(default) with the `config` or `config-inline` inputs:
### Registry mirror
@@ -133,7 +133,7 @@ fields:
| `buildkitd-flags` | String | [Flags for buildkitd](../../../engine/reference/commandline/buildx_create.md#buildkitd-flags) daemon |
| `platforms` | String | Fixed [platforms](../../../engine/reference/commandline/buildx_create.md#platform) for the node. If not empty, values take priority over the detected ones. |
Here is an example using remote nodes with the [`remote` driver](../../building/drivers/remote.md)
Here is an example using remote nodes with the [`remote` driver](../../drivers/remote.md)
and [TLS authentication](#tls-authentication):
{% raw %}
@@ -178,7 +178,7 @@ using SSH or TLS.
### SSH authentication
To be able to connect to an SSH endpoint using the [`docker-container` driver](../../building/drivers/docker-container.md),
To be able to connect to an SSH endpoint using the [`docker-container` driver](../../drivers/docker-container.md),
you have to set up the SSH private key and configuration on the GitHub Runner:
{% raw %}
@@ -209,7 +209,7 @@ jobs:
### TLS authentication
You can also [set up a remote BuildKit instance](../../building/drivers/remote.md#example-remote-buildkit-in-docker-container)
You can also [set up a remote BuildKit instance](../../drivers/remote.md#example-remote-buildkit-in-docker-container)
using the remote driver. To ease the integration in your workflow, you can use
environment variables that set up authentication using the BuildKit client
certificates for the `tcp://` endpoint:
@@ -286,7 +286,7 @@ some packages may be particularly resource-intensive to build and require more
compute. Or they require a builder equipped with a particular capability or
hardware.
For more information about remote builder, see [`remote` driver](../../building/drivers/remote.md)
For more information about remote builder, see [`remote` driver](../../drivers/remote.md)
and the [append builder nodes example](#append-additional-nodes-to-the-builder).
{% raw %}

View File

@@ -201,12 +201,12 @@ actions.
> **Note**
>
> See [Cache storage backends](../../building/cache/backends/index.md) for more
> See [Cache storage backends](../../cache/backends/index.md) for more
> details about cache storage backends.
### Inline cache
In most cases you want to use the [inline cache exporter](../../building/cache/backends/inline.md).
In most cases you want to use the [inline cache exporter](../../cache/backends/inline.md).
However, note that the `inline` cache exporter only supports `min` cache mode.
To use `max` cache mode, push the image and the cache separately using the
registry cache exporter with the `cache-to` option, as shown in the [registry cache example](#registry-cache).
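A plain-CLI equivalent of the inline cache setup, for reference (the image name is a placeholder):

```console
$ docker buildx build \
  --cache-to type=inline \
  --cache-from type=registry,ref=user/app:latest \
  --tag user/app:latest \
  --push .
```

With `type=inline`, the cache metadata is embedded in the pushed image itself, which is why only `min` cache mode is available.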
@@ -251,7 +251,7 @@ jobs:
### Registry cache
You can import/export cache from a cache manifest or (special) image
configuration on the registry with the [registry cache exporter](../../building/cache/backends/registry.md).
configuration on the registry with the [registry cache exporter](../../cache/backends/registry.md).
{% raw %}
```yaml
@@ -300,7 +300,7 @@ jobs:
> if you experience any issues.
{: .warning }
The [GitHub Actions cache exporter](../../building/cache/backends/gha.md)
The [GitHub Actions cache exporter](../../cache/backends/gha.md)
backend uses the [GitHub Cache API](https://github.com/tonistiigi/go-actions-cache/blob/master/api.md)
to fetch and upload cache blobs. That's why you should only use this cache
backend in a GitHub Action workflow, as the `url` (`$ACTIONS_CACHE_URL`) and
@@ -354,7 +354,7 @@ jobs:
{: .warning }
You can also leverage [GitHub cache](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows){:target="blank" rel="noopener" class=""}
using the [actions/cache](https://github.com/actions/cache) and [local cache exporter](../../building/cache/backends/local.md)
using the [actions/cache](https://github.com/actions/cache) and [local cache exporter](../../cache/backends/local.md)
with this action:
{% raw %}

View File

@@ -3,6 +3,7 @@ title: "Docker container driver"
keywords: build, buildx, driver, builder, docker-container
redirect_from:
- /build/buildx/drivers/docker-container/
- /build/building/drivers/docker-container/
---
The buildx Docker container driver allows creation of a managed and customizable
@@ -144,14 +145,14 @@ $ docker buildx build \
You can customize the network that the builder container uses. This is useful
if you need to use a specific network for your builds.
For example, let's [create a network](../../../engine/reference/commandline/network_create.md)
For example, let's [create a network](../../engine/reference/commandline/network_create.md)
named `foonet`:
```console
$ docker network create foonet
```
Now create a [`docker-container` builder](../../../engine/reference/commandline/buildx_create.md)
Now create a [`docker-container` builder](../../engine/reference/commandline/buildx_create.md)
that will use this network:
```console
@@ -161,13 +162,13 @@ $ docker buildx create --use \
--driver-opt "network=foonet"
```
Boot and [inspect `mybuilder`](../../../engine/reference/commandline/buildx_inspect.md):
Boot and [inspect `mybuilder`](../../engine/reference/commandline/buildx_inspect.md):
```console
$ docker buildx inspect --bootstrap
```
[Inspect the builder container](../../../engine/reference/commandline/inspect.md)
[Inspect the builder container](../../engine/reference/commandline/inspect.md)
and see what network is being used:
{% raw %}
@@ -180,4 +181,4 @@ map[foonet:0xc00018c0c0]
## Further reading
For more information on the Docker container driver, see the
[buildx reference](../../../engine/reference/commandline/buildx_create.md#driver).
[buildx reference](../../engine/reference/commandline/buildx_create.md#driver).

View File

@@ -3,6 +3,7 @@ title: "Docker driver"
keywords: build, buildx, driver, builder, docker
redirect_from:
- /build/buildx/drivers/docker/
- /build/building/drivers/docker/
---
The Buildx Docker driver is the default driver. It uses the BuildKit server
@@ -32,4 +33,4 @@ If you need additional configuration and flexibility, consider using the
## Further reading
For more information on the Docker driver, see the
[buildx reference](../../../engine/reference/commandline/buildx_create.md#driver).
[buildx reference](../../engine/reference/commandline/buildx_create.md#driver).

View File

@@ -3,6 +3,8 @@ title: "Drivers overview"
keywords: build, buildx, driver, builder, docker-container, kubernetes, remote
redirect_from:
- /build/buildx/drivers/
- /build/building/drivers/
- /build/buildx/multiple-builders/
---
Buildx drivers are configurations for how and where the BuildKit backend runs.
@@ -22,12 +24,12 @@ provide more flexibility and are better at handling advanced scenarios.
The following table outlines some differences between drivers.
| Feature | `docker` | `docker-container` | `kubernetes` | `remote` |
|:-----------------------------|:-----------:|:------------------:|:------------:|:------------------:|
| **Automatically load image** | ✅ | | | |
| **Cache export** | Inline only | ✅ | ✅ | ✅ |
| **Tarball output** | | ✅ | ✅ | ✅ |
| **Multi-arch images** | | ✅ | ✅ | ✅ |
| **BuildKit configuration** | | ✅ | ✅ | Managed externally |
| :--------------------------- | :---------: | :----------------: | :----------: | :----------------: |
| **Automatically load image** | ✅ | | | |
| **Cache export** | Inline only | ✅ | ✅ | ✅ |
| **Tarball output** | | ✅ | ✅ | ✅ |
| **Multi-arch images** | | ✅ | ✅ | ✅ |
| **BuildKit configuration** | | ✅ | ✅ | Managed externally |
## List available builders
@@ -55,19 +57,37 @@ desktop-linux * docker
```
This is because the Docker driver builders are automatically pulled from the
available [Docker Contexts](../../../engine/context/working-with-contexts.md).
When you add new contexts using `docker context create`, these will appear in
your list of buildx builders.
available [Docker Contexts](../../engine/context/working-with-contexts.md). When
you add new contexts using `docker context create`, these will appear in your
list of buildx builders.
The asterisk (`*`) next to the builder name indicates that this is the selected
builder which gets used by default, unless you specify a builder using the
`--builder` option.
## Create a new builder
Use the [`docker buildx create`](../../../engine/reference/commandline/buildx_create.md)
Use the
[`docker buildx create`](../../engine/reference/commandline/buildx_create.md)
command to create a builder, and specify the driver using the `--driver` option.
```console
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
```
This creates a new builder instance with a single build node. After creating a
new builder you can also
[append new nodes to it](../../engine/reference/commandline/buildx_create/#append).
To use a remote node for your builders, you can set the `DOCKER_HOST`
environment variable or provide a remote context name when creating the builder.
## Switch between builders
To switch between different builders, use the `docker buildx use <name>`
command. After running this command, the build commands will automatically use
this builder.
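As a rough end-to-end sketch of the two commands above (builder name and driver choice are placeholders):

```console
$ docker buildx create --name mybuilder --driver docker-container
$ docker buildx use mybuilder
$ docker buildx ls    # the asterisk should now mark mybuilder as selected
```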
## What's next
Read about each of the Buildx drivers to learn about how they work and how to

View File

@@ -3,6 +3,7 @@ title: "Kubernetes driver"
keywords: build, buildx, driver, builder, kubernetes
redirect_from:
- /build/buildx/drivers/kubernetes/
- /build/building/drivers/kubernetes/
---
The Buildx Kubernetes driver allows connecting your local development or CI
@@ -26,7 +27,7 @@ The following table describes the available driver-specific options that you can
pass to `--driver-opt`:
| Parameter | Type | Default | Description |
|-------------------|-------------------|-----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|
| ----------------- | ----------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `image` | String | | Sets the image to use for running BuildKit. |
| `namespace` | String | Namespace in current Kubernetes context | Sets the Kubernetes namespace. |
| `replicas` | Integer | 1 | Sets the number of Pod replicas to create. See [scaling BuildKit][1] |
@@ -96,7 +97,7 @@ replicas. `sticky` (the default) attempts to connect the same build performed
multiple times to the same node each time, ensuring better use of local cache.
For more information on scalability, see the options for
[buildx create](../../../engine/reference/commandline/buildx_create.md#driver-opt).
[buildx create](../../engine/reference/commandline/buildx_create.md#driver-opt).
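A hedged example of a scaled builder, assuming a `buildkit` namespace already exists in your cluster (the option values are illustrative):

```console
$ docker buildx create \
  --name kube-builder \
  --driver kubernetes \
  --driver-opt namespace=buildkit,replicas=2,loadbalance=sticky \
  --use
```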
## Node assignment
@@ -127,15 +128,15 @@ $ docker buildx create \
## Multi-platform builds
The Buildx Kubernetes driver has support for creating
[multi-platform images](../multi-platform.md), either using QEMU or by
[multi-platform images](../building/multi-platform.md), either using QEMU or by
leveraging the native architecture of nodes.
### QEMU
Like the `docker-container` driver, the Kubernetes driver also supports using
[QEMU](https://www.qemu.org/){:target="blank" rel="noopener" class=""} (user mode)
to build images for non-native platforms. Include the `--platform` flag and
specify which platforms you want to output to.
[QEMU](https://www.qemu.org/){:target="blank" rel="noopener" class=""} (user
mode) to build images for non-native platforms. Include the `--platform` flag
and specify which platforms you want to output to.
For example, to build a Linux image for `amd64` and `arm64`:
@@ -227,7 +228,8 @@ that you want to support.
The Kubernetes driver supports rootless mode. For more information on how
rootless mode works, and its requirements, see
[here](https://github.com/moby/buildkit/blob/master/docs/rootless.md){:target="blank" rel="noopener" class=""}.
[here](https://github.com/moby/buildkit/blob/master/docs/rootless.md){:target="blank"
rel="noopener" class=""}.
To turn it on in your cluster, you can use the `rootless=true` driver option:
@@ -255,11 +257,13 @@ This guide shows you how to:
Prerequisites:
- You have an existing Kubernetes cluster. If you don't already have one, you
can follow along by installing [minikube](https://minikube.sigs.k8s.io/docs/){:target="blank" rel="noopener" class=""}.
can follow along by installing
[minikube](https://minikube.sigs.k8s.io/docs/){:target="blank" rel="noopener"
class=""}.
- The cluster you want to connect to is accessible via the `kubectl` command,
with the `KUBECONFIG` environment variable
[set appropriately](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable){:target="blank" rel="noopener" class=""}
if necessary.
[set appropriately](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable){:target="blank"
rel="noopener" class=""} if necessary.
1. Create a `buildkit` namespace.
@@ -325,4 +329,4 @@ That's it! You've now built an image from a Kubernetes pod, using Buildx!
## Further reading
For more information on the Kubernetes driver, see the
[buildx reference](../../../engine/reference/commandline/buildx_create.md#driver).
[buildx reference](../../engine/reference/commandline/buildx_create.md#driver).

View File

@@ -3,6 +3,7 @@ title: "Remote driver"
keywords: build, buildx, driver, builder, remote
redirect_from:
- /build/buildx/drivers/remote/
- /build/building/drivers/remote/
---
The Buildx remote driver allows for more complex custom build workloads,

View File

@@ -2,6 +2,8 @@
title: "Image and registry exporters"
keywords: >
build, buildx, buildkit, exporter, image, registry
redirect_from:
- /build/building/exporters/image-registry/
---
The `image` exporter outputs the build result into a container image format. The

View File

@@ -3,11 +3,13 @@ title: "Exporters overview"
keywords: >
build, buildx, buildkit, exporter, image, registry, local, tar, oci, docker,
cacheonly
redirect_from:
- /build/building/exporters/
---
Exporters save your build results to a specified output type. You specify the
exporter to use with the
[`--output` CLI option](../../../engine/reference/commandline/buildx_build/#output).
[`--output` CLI option](../../engine/reference/commandline/buildx_build/#output).
Buildx supports the following exporters:
- `image`: exports the build result to a container image.
@@ -16,10 +18,10 @@ Buildx supports the following exporters:
- `local`: exports the build root filesystem into a local directory.
- `tar`: packs the build root filesystem into a local tarball.
- `oci`: exports the build result to the local filesystem in the
[OCI image layout](https://github.com/opencontainers/image-spec/blob/v1.0.1/image-layout.md){:target="blank" rel="noopener" class=""}
[OCI image layout](https://github.com/opencontainers/image-spec/blob/v1.0.1/image-layout.md){:target="blank" rel="noopener" class="_"}
format.
- `docker`: exports the build result to the local filesystem in the
[Docker image](https://github.com/docker/docker/blob/v20.10.2/image/spec/v1.2.md){:target="blank" rel="noopener" class=""}
[Docker image](https://github.com/docker/docker/blob/v20.10.2/image/spec/v1.2.md){:target="blank" rel="noopener" class="_"}
format.
- `cacheonly`: doesn't export a build output, but runs the build and creates a
cache.
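For example, selecting the `image` exporter and pushing the result in one step might look like this (the registry reference is a placeholder):

```console
$ docker buildx build \
  --output type=image,name=registry.example.com/app:latest,push=true .
```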
@@ -148,8 +150,8 @@ specified location. The `tar` exporter creates a tarball archive file.
$ docker buildx build --output type=tar,dest=<path/to/output> .
```
The `local` exporter is useful in [multi-stage builds](../multi-stage.md) since
it allows you to export only a minimal number of build artifacts. For example,
The `local` exporter is useful in [multi-stage builds](../building/multi-stage.md)
since it allows you to export only a minimal number of build artifacts. For example,
self-contained binaries.
### Cache-only export
@@ -182,7 +184,7 @@ WARNING: No output specified with docker-container driver.
## Multiple exporters
You can only specify a single exporter for any given build (see
[this pull request](https://github.com/moby/buildkit/pull/2760) for details){:target="blank" rel="noopener" class=""}.
[this pull request](https://github.com/moby/buildkit/pull/2760) for details){:target="blank" rel="noopener" class="_"}.
But you can perform multiple builds one after another to export the same content
twice. BuildKit caches the build, so unless any of the layers change, all
successive builds following the first are instant.
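A sketch of that pattern, exporting the same build once in Docker format and once in OCI format (the file names are illustrative):

```console
$ docker buildx build --output type=docker,dest=app-docker.tar .
$ docker buildx build --output type=oci,dest=app-oci.tar .
```

The second invocation reuses the cached layers from the first, so it completes almost instantly.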
@@ -240,8 +242,8 @@ the previous compression algorithm.
> **Note**
>
> The `gzip` and `estargz` compression methods use the [`compress/gzip` package](https://pkg.go.dev/compress/gzip){:target="blank" rel="noopener" class=""},
> while `zstd` uses the [`github.com/klauspost/compress/zstd` package](https://github.com/klauspost/compress/tree/master/zstd){:target="blank" rel="noopener" class=""}.
> The `gzip` and `estargz` compression methods use the [`compress/gzip` package](https://pkg.go.dev/compress/gzip){: target="blank" rel="noopener" class="_" },
> while `zstd` uses the [`github.com/klauspost/compress/zstd` package](https://github.com/klauspost/compress/tree/master/zstd){: target="blank" rel="noopener" class="_" }.
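For instance, a sketch that selects `zstd` and forces recompression of layers that were previously compressed differently (the registry reference is a placeholder):

```console
$ docker buildx build \
  --output type=image,name=registry.example.com/app:latest,push=true,compression=zstd,force-compression=true .
```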
### OCI media types

View File

@@ -2,6 +2,8 @@
title: "Local and tar exporters"
keywords: >
build, buildx, buildkit, exporter, local, tar
redirect_from:
- /build/building/exporters/local-tar/
---
The `local` and `tar` exporters output the root filesystem of the build result

View File

@@ -2,6 +2,8 @@
title: "OCI and Docker exporters"
keywords: >
build, buildx, buildkit, exporter, oci, docker
redirect_from:
- /build/building/exporters/local-tar/
---
The `oci` exporter outputs the build result into an

View File

@@ -8,14 +8,12 @@ redirect_from:
- /develop/develop-images/build_enhancements/
---
## Overview
Docker Build is one of Docker Engine's most used features. Whenever you are
creating an image you are using Docker Build. Build is a key part of your
software development life cycle, allowing you to package and bundle your code and
ship it anywhere.
Engine uses a client-server architecture and is composed of multiple components
The Docker Engine uses a client-server architecture and is composed of multiple components
and tools. The most common method of executing a build is by issuing a
[`docker build` command](../engine/reference/commandline/build.md). The CLI
sends the request to Docker Engine which, in turn, executes your build.
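In its simplest form, that request is just (the tag is a placeholder):

```console
$ docker build -t myapp:latest .
```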
@@ -26,19 +24,19 @@ Engine is shipped with Moby [BuildKit](buildkit/index.md), the new component for
executing your builds by default.
The new client [Docker Buildx](https://github.com/docker/buildx){:target="blank" rel="noopener" class=""}
is a CLI plugin that extends the docker command with full support for the
is a CLI plugin that extends the `docker` command with full support for the
features provided by the [BuildKit](buildkit/index.md) builder toolkit.
[`docker buildx build` command](../engine/reference/commandline/buildx_build.md)
provides the same user experience as `docker build` with many new features like
creating scoped [builder instances](building/drivers/index.md), building against
creating scoped [builder instances](drivers/index.md), building against
multiple nodes concurrently, output configuration, inline
[build caching](building/cache/index.md), and specifying target platform. In
addition, Buildx also supports new features that are not yet available for
[build caching](cache/index.md), and specifying target platform. In
addition, Buildx also supports new features that aren't yet available for
regular `docker build` like building manifest lists, distributed caching, and
exporting build results to OCI image tarballs.
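For example, a multi-platform build using these features might look like the following sketch (tag and platforms are illustrative; multi-platform builds require a driver other than the default `docker` driver, such as `docker-container`):

```console
$ docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag user/app:latest \
  --push .
```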
Docker Build is way more than a simple build command and is not only about
packaging your code, it's a whole ecosystem of tools and features that support
Docker Build is more than a simple build command, and it's not only about
packaging your code. It's a whole ecosystem of tools and features that support
not only common workflow tasks but also more complex and advanced scenarios.
@@ -88,11 +86,11 @@ advanced scenarios.
<div class="col-xs-12 col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="/build/building/drivers/">
<a href="/build/drivers/">
<img src="/assets/images/build-drivers.svg" alt="Silhouette of an engineer, with cogwheels in the background" width="70px" height="70px">
</a>
</div>
<h2><a href="/build/building/drivers/">Build drivers</a></h2>
<h2><a href="/build/drivers/">Build drivers</a></h2>
<p>
Configure where and how you run your builds.
</p>
@@ -101,11 +99,11 @@ advanced scenarios.
<div class="col-xs-12 col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="/build/building/cache/">
<a href="/build/cache/">
<img src="/assets/images/build-cache.svg" alt="Two arrows rotating in a circle" width="70px" height="70px">
</a>
</div>
<h2><a href="/build/building/cache/">Build caching</a></h2>
<h2><a href="/build/cache/">Build caching</a></h2>
<p>
Avoid unnecessary repetitions of costly operations, such as package installs.
</p>
@@ -129,11 +127,11 @@ advanced scenarios.
<div class="col-xs-12 col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="/build/building/exporters/">
<a href="/build/exporters/">
<img src="/assets/images/build-exporters.svg" alt="Arrow coming out of a box" width="70px" height="70px">
</a>
</div>
<h2><a href="/build/building/exporters/">Exporters</a></h2>
<h2><a href="/build/exporters/">Exporters</a></h2>
<p>
Export any artifact you like, not just Docker images.
</p>

View File

@@ -2,19 +2,20 @@
title: Install Docker Buildx
description: How to install Docker Buildx
keywords: build, buildx, buildkit
redirect_from:
- /build/buildx/install/
---
## Docker Desktop
Docker Buildx is included in [Docker Desktop](../../desktop/index.md) for
Windows, macOS, and Linux.
Docker Buildx is included by default in Docker Desktop.
## Linux packages
## Docker Engine via package manager
Docker Linux packages also include Docker Buildx when installed using the
[DEB or RPM packages](../../engine/install/index.md).
`.deb` or `.rpm` packages.
## Dockerfile
## Install using a Dockerfile
Here is how to install and use Buildx inside a Dockerfile through the
[`docker/buildx-bin`](https://hub.docker.com/r/docker/buildx-bin){:target="blank" rel="noopener" class=""}
@@ -27,21 +28,21 @@ COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/doc
RUN docker buildx version
```
## Manual download
## Download manually
> **Important**
>
> This section is for unattended installation of the buildx component. These
> This section is for unattended installation of the Buildx component. These
> instructions are mostly suitable for testing purposes. We do not recommend
> installing buildx using manual download in production environments as they
> installing Buildx using manual download in production environments as they
> will not be updated automatically with security updates.
>
> On Windows, macOS, and Linux workstations we recommend that you install
> [Docker Desktop](../../desktop/index.md) instead. For Linux servers, we recommend
> that you follow the [instructions specific for your distribution](#linux-packages).
> Docker Desktop instead. For Linux servers, we recommend that you follow the
> instructions specific for your distribution.
{: .important}
You can also download the latest binary from the [releases page on GitHub](https://github.com/docker/buildx/releases/latest){:target="blank" rel="noopener" class=""}.
You can also download the latest binary from the [releases page on GitHub](https://github.com/docker/buildx/releases/latest){:target="blank" rel="noopener" class="_"}.
Rename the relevant binary and copy it to the destination matching your OS:
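The destination table itself is elided from this hunk. As a sketch for a Linux `amd64` workstation (the release version in the URL is an assumption; pick the current one from the releases page):

```console
$ mkdir -p ~/.docker/cli-plugins
$ curl -fsSL -o ~/.docker/cli-plugins/docker-buildx \
    https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64
$ chmod +x ~/.docker/cli-plugins/docker-buildx
$ docker buildx version   # verify the plugin is picked up
```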
@@ -70,11 +71,11 @@ On Windows:
> $ chmod +x ~/.docker/cli-plugins/docker-buildx
> ```
## Set buildx as the default builder
## Set Buildx as the default builder
Running the command [`docker buildx install`](../../engine/reference/commandline/buildx_install.md)
sets up docker builder command as an alias to `docker buildx`. This results in
the ability to have [`docker build`](../../engine/reference/commandline/build.md)
use the current buildx builder.
Running the command [`docker buildx install`](../engine/reference/commandline/buildx_install.md)
sets up the `docker build` command as an alias to `docker buildx`. This results in
the ability to have [`docker build`](../engine/reference/commandline/build.md)
use the current Buildx builder.
To remove this alias, run [`docker buildx uninstall`](../../engine/reference/commandline/buildx_uninstall.md).
To remove this alias, run [`docker buildx uninstall`](../engine/reference/commandline/buildx_uninstall.md).
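As a quick sketch of the full round-trip:

```console
$ docker buildx install     # alias docker build to docker buildx build
$ docker build -t myapp .   # now runs through Buildx
$ docker buildx uninstall   # remove the alias again
```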

View File

@@ -26,7 +26,7 @@ For more details, see the complete release notes in the [Buildx GitHub repositor
### New
* Support for new [driver `remote`](building/drivers/remote.md) that you can use
* Support for new [driver `remote`](drivers/remote.md) that you can use
to connect to any already running BuildKit instance {% include github_issue.md repo="docker/buildx" number="1078" %}
{% include github_issue.md repo="docker/buildx" number="1093" %} {% include github_issue.md repo="docker/buildx" number="1094" %}
{% include github_issue.md repo="docker/buildx" number="1103" %} {% include github_issue.md repo="docker/buildx" number="1134" %}

View File

@@ -235,7 +235,7 @@ View Tags on DockerHub to see multi-platform result:
> `Multiple platforms feature is currently not supported for docker driver. Please switch to a different driver`.
>
> Install a newer version of Buildx following the instructions on
> [Docker Buildx Manual download](../../build/buildx/install/#manual-download).
> [how to manually download Buildx](../../build/install-buildx/#download-manually).
- In Docker Desktop 4.12.0, the containerd image store feature is incompatible
with the Kubernetes cluster support. Turn off the containerd image store