Merge pull request #18610 from dvdksn/hb-ga

hb ga
This commit is contained in:
David Karlsson 2023-12-15 21:44:44 +01:00 committed by GitHub
commit 5167c68f7f
11 changed files with 823 additions and 689 deletions

View File

@@ -10,7 +10,7 @@ You can set the following environment variables to enable, disable, or change
the behavior of features related to building:
| Variable | Type | Description |
|-----------------------------------------------------------------------------|-------------------|------------------------------------------------------|
| --------------------------------------------------------------------------- | ----------------- | ---------------------------------------------------- |
| [BUILDKIT_COLORS](#buildkit_colors) | String | Configure text color for the terminal output. |
| [BUILDKIT_HOST](#buildkit_host) | String | Specify host to use for remote builders. |
| [BUILDKIT_PROGRESS](#buildkit_progress) | String | Configure type of progress output. |
@@ -250,4 +250,4 @@ Usage:
```console
$ export BUILDX_NO_DEFAULT_LOAD=1
```
```

View File

@@ -39,8 +39,8 @@ You can build multi-platform images using three different strategies,
depending on your use case:
1. Using the [QEMU emulation](#qemu) support in the kernel
2. Building on [multiple native nodes](#multiple-native-nodes) using the same
builder instance
2. Building on a single builder backed by
[multiple nodes of different architectures](#multiple-native-nodes).
3. Using a stage in your Dockerfile to [cross-compile](#cross-compilation) to
different architectures
@@ -58,7 +58,8 @@ loads it through a binary registered in the `binfmt_misc` handler.
> Emulation with QEMU can be much slower than native builds, especially for
> compute-heavy tasks like compilation and compression or decompression.
>
> Use [cross-compilation](#cross-compilation) instead, if possible.
> Use [multiple native nodes](#multiple-native-nodes) or
> [cross-compilation](#cross-compilation) instead, if possible.
For QEMU binaries registered with `binfmt_misc` on the host OS to work
transparently inside containers, they must be statically compiled and
@@ -78,10 +79,13 @@ $ docker run --privileged --rm tonistiigi/binfmt --install all
### Multiple native nodes
Using multiple native nodes provides better support for more complicated cases
that are not handled by QEMU and generally have better performance. You can
add additional nodes to the builder instance using the `--append` flag.
that QEMU can't handle, and also provides better performance.
Assuming contexts `node-amd64` and `node-arm64` exist in `docker context ls`;
You can add additional nodes to a builder using the `--append` flag.
The following command creates a multi-node builder from Docker contexts named
`node-amd64` and `node-arm64`. This example assumes that you've already added
those contexts.
```console
$ docker buildx create --use --name mybuild node-amd64
@@ -90,9 +94,25 @@ $ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .
```
For information on using multiple native nodes in CI, with GitHub Actions,
refer to
[Configure your GitHub Actions builder](../ci/github-actions/configure-builder.md#append-additional-nodes-to-the-builder).
While this approach has advantages over emulation, managing multi-node builders
introduces the overhead of setting up and maintaining a builder cluster yourself.
Alternatively, you can use [Docker Build Cloud](/build/cloud/), a service that
provides managed multi-node builders on Docker's infrastructure. With Docker
Build Cloud, you get native multi-platform Arm and X86-64 builders without the
burden of maintaining them. Using cloud builders also provides additional
benefits, such as a shared build cache.
After signing up for Docker Build Cloud, add the builder to your local
environment and start building.
```console
$ docker buildx create --driver cloud <ORG>/<BUILDER_NAME>
cloud-<ORG>-<BUILDER_NAME>
$ docker buildx build --builder cloud-<ORG>-<BUILDER_NAME> \
--platform linux/amd64,linux/arm64,linux/arm/v7 \
--tag <IMAGE_NAME> \
--push .
```
### Cross-compilation

View File

@@ -0,0 +1,68 @@
---
title: Docker Build Cloud
description: Get started with Docker Build Cloud
keywords: build, cloud, cloud build, remote builder
aliases:
- /hydrobuild/
---
Docker Build Cloud is a service that lets you build your container images
faster, both locally and in CI. Builds run on cloud infrastructure optimally
dimensioned for your workloads, no configuration required. The service uses a
remote build cache, ensuring fast builds anywhere and for all team members.
## How Docker Build Cloud works
Using Docker Build Cloud is no different from running a regular build. You invoke a
build the same way you normally would, using `docker buildx build`. The
difference is in where and how that build gets executed.
By default, when you invoke a build command, the build runs on a local instance
of BuildKit, bundled with the Docker daemon. With Docker Build Cloud, you send
the build request to a BuildKit instance running remotely, in the cloud. The
build request is transmitted over a secure, end-to-end encrypted connection.
The remote builder executes the build steps, and sends the resulting build
output to the destination that you specify. For example, back to your local
Docker Engine image store, or to an image registry.
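For example, a cloud build that pushes the result straight to a registry looks
like the following sketch, where `<ORG>` and `<IMAGE>` are placeholders for your
organization namespace and image name:
```console
$ docker buildx build --builder cloud-<ORG>-default --tag <IMAGE> --push .
```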
Docker Build Cloud provides several benefits over local builds:
- Improved build speed
- Shared build cache
- Native multi-platform builds
And the best part: you don't need to worry about managing builders or
infrastructure. Just connect to your builders, and start building.
> **Note**
>
> Docker Build Cloud is currently only available in the US East region. Users
> in Europe and Asia may experience increased latency compared to users based
> in North America.
>
> Support for multi-region builders is on the roadmap.
## Get Docker Build Cloud
To get started with Docker Build Cloud, [create a Docker
account](../../docker-id/_index.md) and sign up for the free plan on the
[Docker Build Cloud Dashboard](https://build.docker.com/).
> **Note**
>
> If your organization isn't already on a paid Docker subscription, you will
> need to provide a payment method to sign up for Docker Build Cloud. If you
> select the free plan, there will be no charges on the provided payment
> method; it's only required for verification purposes.
Once you've signed up and created a builder, continue by [setting up the
builder in your local environment](./setup.md).
For more information about the available subscription plans, see [Docker Build Cloud
subscriptions and features](/subscriptions/build-details).
## Frequently asked questions
The [Docker Build Cloud FAQ](./faq.md) page lists common questions and answers about
Docker Build Cloud.

content/build/cloud/ci.md (new file, 366 lines)
View File

@@ -0,0 +1,366 @@
---
title: Use Docker Build Cloud in CI
description: Speed up your continuous integration pipelines with Docker Build Cloud in CI
keywords: build, cloud build, ci, gha, gitlab, buildkite, jenkins, circle ci
aliases:
- /hydrobuild/ci/
---
Using Docker Build Cloud in CI can speed up your build pipelines, which means less time
spent waiting and context switching. You control your CI workflows as usual,
and delegate the build execution to Docker Build Cloud.
Building with Docker Build Cloud in CI involves the following steps:
1. Sign in to a Docker account.
2. Set up Buildx and connect to the builder.
3. Run the build.
When using Docker Build Cloud in CI, it's recommended that you push the result to a
registry directly, rather than loading the image and then pushing it. Pushing
directly speeds up your builds and avoids unnecessary file transfers.
If you just want to build and discard the output, export the results to the
build cache or build without tagging the image. When you use Docker Build Cloud,
Buildx automatically loads the build result if you build a tagged image.
See [Loading build results](./usage/#loading-build-results) for details.
> **Note**
>
> Builds on Docker Build Cloud have a timeout limit of two hours. Builds that
> run for longer than two hours are automatically cancelled.
{{< tabs >}}
{{< tab name="GitHub Actions" >}}
```yaml
name: ci
on:
  push:
    branches:
      - "main"
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PAT }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: "lab:latest"
          driver: cloud
          endpoint: "<ORG>/default"
          install: true
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          tags: "<IMAGE>"
          # For pull requests, export results to the build cache.
          # Otherwise, push to a registry.
          outputs: ${{ github.event_name == 'pull_request' && 'type=cacheonly' || 'type=registry,push=true' }}
```
{{< /tab >}}
{{< tab name="GitLab" >}}
```yaml
default:
  image: docker:24-dind
  services:
    - docker:24-dind
  before_script:
    - docker info
    - echo "$DOCKER_PAT" | docker login --username "$DOCKER_USER" --password-stdin
    - |
      apk add curl jq
      ARCH=${CI_RUNNER_EXECUTABLE_ARCH#*/}
      BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
      mkdir -vp ~/.docker/cli-plugins/
      curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
      chmod a+x ~/.docker/cli-plugins/docker-buildx
    - docker buildx create --use --driver cloud ${DOCKER_ORG}/default
variables:
  IMAGE_NAME: <IMAGE>
  DOCKER_ORG: <ORG>
# Build multi-platform image and push to a registry
build_push:
  stage: build
  script:
    - |
      docker buildx build \
        --platform linux/amd64,linux/arm64 \
        --tag "${IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}" \
        --push .
# Build an image and discard the result
build_cache:
  stage: build
  script:
    - |
      docker buildx build \
        --platform linux/amd64,linux/arm64 \
        --tag "${IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}" \
        --output type=cacheonly \
        .
```
{{< /tab >}}
{{< tab name="Circle CI" >}}
```yaml
version: 2.1
jobs:
  # Build multi-platform image and push to a registry
  build_push:
    machine:
      image: ubuntu-2204:current
    steps:
      - checkout
      - run: |
          mkdir -vp ~/.docker/cli-plugins/
          ARCH=amd64
          BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
          curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
          chmod a+x ~/.docker/cli-plugins/docker-buildx
      - run: echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
      - run: docker buildx create --use --driver cloud "<ORG>/default"
      - run: |
          docker buildx build \
            --platform linux/amd64,linux/arm64 \
            --push \
            --tag "<IMAGE>" .
  # Build an image and discard the result
  build_cache:
    machine:
      image: ubuntu-2204:current
    steps:
      - checkout
      - run: |
          mkdir -vp ~/.docker/cli-plugins/
          ARCH=amd64
          BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
          curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
          chmod a+x ~/.docker/cli-plugins/docker-buildx
      - run: echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
      - run: docker buildx create --use --driver cloud "<ORG>/default"
      - run: |
          docker buildx build \
            --tag temp \
            --output type=cacheonly \
            .
workflows:
  pull_request:
    jobs:
      - build_cache
  release:
    jobs:
      - build_push
```
{{< /tab >}}
{{< tab name="Buildkite" >}}
The following example sets up a Buildkite pipeline using Docker Build Cloud. The
example assumes that the pipeline name is `build-push-docker` and that you
manage the Docker access token using environment hooks, but feel free to adapt
this to your needs.
Add the following `environment` hook to your agent's hook directory:
```bash
#!/bin/bash
set -euo pipefail
if [[ "$BUILDKITE_PIPELINE_NAME" == "build-push-docker" ]]; then
export DOCKER_PAT="<DOCKER_PERSONAL_ACCESS_TOKEN>"
fi
```
Create a `pipeline.yml` that uses the `docker-login` plugin:
```yaml
env:
  DOCKER_ORG: <ORG>
  IMAGE_NAME: <IMAGE>
steps:
  - command: ./build.sh
    key: build-push
    plugins:
      - docker-login#v2.1.0:
          username: <DOCKER_USER>
          password-env: DOCKER_PAT # the variable name in the environment hook
```
Create the `build.sh` script:
```bash
DOCKER_DIR=/usr/libexec/docker
# Get download link for latest buildx binary.
# Set $ARCH to the CPU architecture (e.g. amd64, arm64)
UNAME_ARCH=`uname -m`
case $UNAME_ARCH in
  aarch64 | arm64)
    ARCH="arm64";
    ;;
  x86_64 | amd64)
    ARCH="amd64";
    ;;
  *)
    ARCH="amd64";
    ;;
esac
BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
# Download docker buildx with Docker Build Cloud support
curl --silent -L --output $DOCKER_DIR/cli-plugins/docker-buildx $BUILDX_URL
chmod a+x $DOCKER_DIR/cli-plugins/docker-buildx
# Connect to your builder and set it as the default builder
docker buildx create --use --driver cloud "$DOCKER_ORG/default"
# Cache-only image build
docker buildx build \
--platform linux/amd64,linux/arm64 \
--tag "$IMAGE_NAME:$BUILDKITE_COMMIT" \
--output type=cacheonly \
.
# Build, tag, and push a multi-arch docker image
docker buildx build \
--platform linux/amd64,linux/arm64 \
--push \
--tag "$IMAGE_NAME:$BUILDKITE_COMMIT" \
.
```
{{< /tab >}}
{{< tab name="Jenkins" >}}
```groovy
pipeline {
  agent any
  environment {
    ARCH = 'amd64'
    DOCKER_PAT = credentials('docker-personal-access-token')
    DOCKER_USER = credentials('docker-username')
    DOCKER_ORG = '<ORG>'
    IMAGE_NAME = '<IMAGE>'
  }
  stages {
    stage('Build') {
      environment {
        BUILDX_URL = sh (returnStdout: true, script: 'curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\\"linux-$ARCH\\"))"').trim()
      }
      steps {
        sh 'mkdir -vp ~/.docker/cli-plugins/'
        sh 'curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL'
        sh 'chmod a+x ~/.docker/cli-plugins/docker-buildx'
        sh 'echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin'
        sh 'docker buildx create --use --driver cloud "$DOCKER_ORG/default"'
        // Cache-only build
        sh 'docker buildx build --platform linux/amd64,linux/arm64 --tag "$IMAGE_NAME" --output type=cacheonly .'
        // Build and push a multi-platform image
        sh 'docker buildx build --platform linux/amd64,linux/arm64 --push --tag "$IMAGE_NAME" .'
      }
    }
  }
}
```
{{< /tab >}}
{{< tab name="Shell" >}}
```bash
#!/bin/bash
# Get download link for latest buildx binary. Set $ARCH to the CPU architecture (e.g. amd64, arm64)
ARCH=amd64
BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
# Download docker buildx with Docker Build Cloud support
mkdir -vp ~/.docker/cli-plugins/
curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
chmod a+x ~/.docker/cli-plugins/docker-buildx
# Log in to Docker Hub. For security reasons, $DOCKER_PAT should be a Personal Access Token. See https://docs.docker.com/security/for-developers/access-tokens/
echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
# Connect to your builder and set it as the default builder
docker buildx create --use --driver cloud "<ORG>/default"
# Cache-only image build
docker buildx build \
--tag temp \
--output type=cacheonly \
.
# Build, tag, and push a multi-arch docker image
docker buildx build \
--platform linux/amd64,linux/arm64 \
--push \
--tag "<IMAGE>" \
.
```
{{< /tab >}}
{{< tab name="Docker Compose" >}}
Use this implementation if you want to use `docker compose build` with
Docker Build Cloud in CI.
```bash
#!/bin/bash
# Get download link for latest buildx binary. Set $ARCH to the CPU architecture (e.g. amd64, arm64)
ARCH=amd64
BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
COMPOSE_URL=$(curl -sL \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer <GITHUB_TOKEN>" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/docker/compose-desktop/releases \
| jq "[ .[] | select(.prerelease==false and .draft==false) ] | .[0].assets.[] | select(.name | endswith(\"linux-${ARCH}\")) | .browser_download_url")
# Download docker buildx and docker compose with Docker Build Cloud support
mkdir -vp ~/.docker/cli-plugins/
curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
curl --silent -L --output ~/.docker/cli-plugins/docker-compose $COMPOSE_URL
chmod a+x ~/.docker/cli-plugins/docker-buildx
chmod a+x ~/.docker/cli-plugins/docker-compose
# Log in to Docker Hub. For security reasons, $DOCKER_PAT should be a Personal Access Token. See https://docs.docker.com/security/for-developers/access-tokens/
echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
# Connect to your builder and set it as the default builder
docker buildx create --use --driver cloud "<ORG>/default"
# Build the images with Docker Compose
docker compose build
```
{{< /tab >}}
{{< /tabs >}}

View File

@@ -0,0 +1,63 @@
---
title: Docker Build Cloud FAQ
description: Frequently asked questions about Docker Build Cloud
keywords: build, cloud build, faq, troubleshooting
aliases:
- /hydrobuild/faq/
---
### How do I remove Docker Build Cloud from my system?
If you want to stop using Docker Build Cloud, remove the cloud builder using
the `docker buildx rm` command.
```console
$ docker buildx rm cloud-<ORG>-default
```
This doesn't deprovision the builder backend; it only removes the builder from
your local Docker client.
### Are builders shared between organizations?
No. Each cloud builder provisioned to an organization is completely
isolated to a single Amazon EC2 instance, with a dedicated EBS volume for build
cache, and end-to-end encryption. That means there are no shared processes or
data between cloud builders.
### Do I need to add my secrets to the builder to access private resources?
No. Your interface to Docker Build Cloud is Buildx, and you can use the existing
`--secret` and `--ssh` CLI flags for managing build secrets.
For more information, refer to:
- [`docker buildx build --secret`](/engine/reference/commandline/buildx_build/#secret)
- [`docker buildx build --ssh`](/engine/reference/commandline/buildx_build/#ssh)
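For example, a minimal sketch of passing a secret file to a cloud build; the
secret ID `mytoken`, the source path, and the builder name are placeholders:
```console
$ docker buildx build --builder cloud-<ORG>-default \
  --secret id=mytoken,src=$HOME/.mytoken \
  --tag <IMAGE> .
```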
### How do I unset Docker Build Cloud as the default builder?
If you've set a cloud builder as the default builder and want to revert to using the
default `docker` builder, run the following command:
```console
$ docker context use default
```
### How do I manage the build cache with Docker Build Cloud?
You don't need to manage the builder's cache manually. The system manages it
for you through [garbage collection](/build/cache/garbage-collection/).
Old cache is automatically removed if you hit your storage limit. You can check
your current cache state using the
[`docker buildx du` command](/engine/reference/commandline/buildx_du/).
To clear the builder's cache manually, you can use the
[`docker buildx prune`](/engine/reference/commandline/buildx_prune/) command.
This works like pruning the cache for any other builder.
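For example, to check the cache usage of a cloud builder and then prune it
manually (the builder name is a placeholder):
```console
$ docker buildx du --builder cloud-<ORG>-default
$ docker buildx prune --builder cloud-<ORG>-default
```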
> **Note**
>
> Pruning a cloud builder's cache also removes the cache for other team members
> using the same builder.

View File

@@ -0,0 +1,92 @@
---
title: Optimize for building in the cloud
description: Building remotely is different from building locally. Here's how to optimize for remote builders.
keywords: build, cloud build, optimize, remote, local, cloud
aliases:
- /hydrobuild/optimization/
---
Docker Build Cloud runs your builds remotely, and not on the machine where you
invoke the build. This means that file transfers between the client and builder
happen over the network.
Transferring files over the network has a higher latency and lower bandwidth
than local transfers. Docker Build Cloud has several features to mitigate this:
- It uses attached storage volumes for build cache, which makes reading and
writing cache very fast.
- Loading build results back to the client only pulls the layers that were
changed compared to previous builds.
Despite these optimizations, building remotely can still yield slow context
transfers and image loads, for large projects or if the network connection is
slow. Here are some ways that you can optimize your builds to make the transfer
more efficient:
- [Dockerignore files](#dockerignore-files)
- [Slim base images](#slim-base-images)
- [Multi-stage builds](#multi-stage-builds)
- [Fetch remote files in build](#fetch-remote-files-in-build)
- [Multi-threaded tools](#multi-threaded-tools)
### Dockerignore files
Using a [`.dockerignore` file](/build/building/context/#dockerignore-files),
you can be explicit about which local files you don't want to include in the
build context. Files caught by the glob patterns you specify in your
ignore-file aren't transferred to the remote builder.
Some examples of things you might want to add to your `.dockerignore` file are:
- `.git` — skip sending the version control history in the build context. Note
that this means you won't be able to run Git commands in your build steps,
such as `git rev-parse`.
- Directories containing build artifacts, such as binaries created locally
  during development.
- Vendor directories for package managers, such as `node_modules`.
In general, the contents of your `.dockerignore` file should be similar to what
you have in your `.gitignore`.
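As an illustration, here is a minimal `.dockerignore` for a hypothetical
Node.js project, written as a shell heredoc; the patterns are assumptions you
should adapt to your own project layout:
```bash
# Hypothetical example patterns; adjust to your project
cat > .dockerignore <<'EOF'
.git
node_modules
dist
*.log
EOF
```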
### Slim base images
Selecting smaller images for your `FROM` instructions in your Dockerfile can
help reduce the size of the final image. The [Alpine image](https://hub.docker.com/_/alpine)
is a good example of a minimal Docker image that provides all of the OS
utilities you would expect from a Linux container.
There's also the [special `scratch` image](https://hub.docker.com/_/scratch),
which contains nothing at all. It's useful for creating images of statically
linked binaries, for example.
### Multi-stage builds
[Multi-stage builds](/build/building/multi-stage/) can make your build run faster,
because stages can run in parallel. They can also make your final image smaller.
Write your Dockerfile in such a way that the final runtime stage uses the
smallest possible base image, with only the resources that your program requires
to run.
It's also possible to
[copy resources from other images or stages](/build/building/multi-stage/#name-your-build-stages),
using the Dockerfile `COPY --from` instruction. This technique can reduce the
number of layers, and the size of those layers, in the final stage.
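If you only need the output of one particular stage, you can also build that
stage directly with the `--target` flag. A sketch, assuming your Dockerfile
names its final stage `runtime`:
```console
$ docker buildx build --builder cloud-<ORG>-default --target runtime --tag <IMAGE> .
```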
### Fetch remote files in build
When possible, you should fetch files from a remote location in the build,
rather than bundling the files into the build context. Downloading files on the
Docker Build Cloud server directly is better, because it will likely be faster
than transferring the files with the build context.
You can fetch remote files during the build using the
[Dockerfile `ADD` instruction](/engine/reference/builder/#add),
or in your `RUN` instructions with tools like `wget` and `rsync`.
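For illustration, here is a hypothetical Dockerfile fragment, generated with a
shell heredoc, that fetches a release archive on the builder instead of sending
it in the build context; the URL and paths are placeholders:
```bash
# Hypothetical sketch: the release URL and paths are placeholders
cat > Dockerfile.fetch <<'EOF'
FROM alpine
# ADD downloads the remote file to the given destination
ADD https://example.com/releases/app-v1.0.0.tar.gz /tmp/app.tar.gz
RUN mkdir -p /opt/app && tar -xzf /tmp/app.tar.gz -C /opt/app
EOF
```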
### Multi-threaded tools
Some tools that you use in your build instructions may not utilize multiple
cores by default. One such example is `make`, which uses a single thread by
default, unless you specify the `make --jobs=<n>` option. For build steps
involving such tools, try checking if you can optimize the execution with
parallelization.
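For example, a build step that runs `make` could use all cores available on
the builder; a sketch assuming a Linux build environment where `nproc` is
available:
```bash
# Let make parallelize across all available CPU cores
make --jobs="$(nproc)"
```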

View File

@@ -0,0 +1,76 @@
---
title: Docker Build Cloud setup
description: How to get started with Docker Build Cloud
keywords: build, cloud build
aliases:
- /hydrobuild/setup/
---
Before you can start using Docker Build Cloud, you must add the builder to your local
environment.
## Prerequisites
To get started with Docker Build Cloud, you need to:
- Download and install Docker Desktop version 4.26.0 or later.
- Sign up for a Docker Build Cloud subscription in the [Docker Build Cloud Dashboard](https://build.docker.com/).
### Use Docker Build Cloud without Docker Desktop
To use Docker Build Cloud without Docker Desktop, you must download and install
a version of Buildx with support for Docker Build Cloud (the `cloud` driver).
You can find compatible Buildx binaries on the releases page of
[this repository](https://github.com/docker/buildx-desktop).
If you plan on building with Docker Build Cloud using the `docker compose
build` command, you also need a version of Docker Compose that supports Docker
Build Cloud. You can find compatible Docker Compose binaries on the releases
page of [this repository](https://github.com/docker/compose-desktop).
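If you install these CLI plugins manually, the general approach looks like the
following sketch, shown for Buildx. The release tag and asset name are
placeholders; check the releases pages linked above for the actual file names
for your platform:
```bash
# Sketch only: <VERSION> and <ASSET_NAME> are placeholders taken from the
# buildx-desktop releases page, not real values.
mkdir -p ~/.docker/cli-plugins
curl -sSL -o ~/.docker/cli-plugins/docker-buildx \
  "https://github.com/docker/buildx-desktop/releases/download/<VERSION>/<ASSET_NAME>"
chmod +x ~/.docker/cli-plugins/docker-buildx
docker buildx version
```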
## Steps
You can add a cloud builder using the CLI, with the `docker buildx create`
command, or using the Docker Desktop settings GUI.
{{< tabs >}}
{{< tab name="CLI" >}}
1. Sign in to your Docker account.
```console
$ docker login
```
2. Add the cloud builder endpoint.
```console
$ docker buildx create --driver cloud <ORG>/<BUILDER_NAME>
```
Replace `ORG` with the Docker Hub namespace of your Docker organization.
This creates a builder named `cloud-ORG-BUILDER_NAME`.
{{< /tab >}}
{{< tab name="Docker Desktop" >}}
1. Sign in to your Docker account using the **Sign in** button in Docker Desktop.
2. Open the Docker Desktop settings and navigate to the **Builders** tab.
3. Under **Available builders**, select **Create builder**.
![Create builder GUI screenshot](/build/images/create-builder-gui.webp)
{{< /tab >}}
{{< /tabs >}}
The builder has native support for the `linux/amd64` and `linux/arm64`
architectures. This gives you a high-performance build cluster for building
multi-platform images natively.
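You can verify which platforms your cloud builder supports by inspecting it;
the builder name below is a placeholder:
```console
$ docker buildx inspect cloud-<ORG>-<BUILDER_NAME>
```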
## What's next
- See [Building with Docker Build Cloud](usage.md) for examples on how to use Docker Build Cloud.
- See [Use Docker Build Cloud in CI](ci.md) for examples on how to use Docker Build Cloud with CI systems.

View File

@@ -0,0 +1,113 @@
---
title: Building with Docker Build Cloud
description: Invoke your cloud builds with the Buildx CLI client
keywords: build, cloud build, usage, cli, buildx, client
aliases:
- /hydrobuild/usage/
---
To build using Docker Build Cloud, invoke a build command and specify the name of the
builder using the `--builder` flag.
```console
$ docker buildx build --builder cloud-<ORG>-<BUILDER_NAME> --tag <IMAGE> .
```
## Use by default
If you want to use Docker Build Cloud without having to specify the `--builder` flag
each time, you can set it as the default builder.
{{< tabs group="ui" >}}
{{< tab name="CLI" >}}
Run the following command:
```console
$ docker buildx use cloud-<ORG>-<BUILDER_NAME> --global
```
{{< /tab >}}
{{< tab name="Docker Desktop" >}}
1. Open the Docker Desktop settings and navigate to the **Builders** tab.
2. Find the cloud builder under **Available builders**.
3. Open the drop-down menu and select **Use**.
![Selecting the cloud builder as default using the Docker Desktop GUI](/build/images/set-default-builder-gui.webp)
{{< /tab >}}
{{< /tabs >}}
Changing your default builder with `docker buildx use` only changes the default
builder for the `docker buildx build` command. The `docker build` command still
uses the `default` builder, unless you specify the `--builder` flag explicitly.
If you use build scripts, such as `make`, we recommend that you update your
build commands from `docker build` to `docker buildx build`, to avoid any
confusion with regards to builder selection. Alternatively, you can run `docker
buildx install` to make the default `docker build` command behave like `docker
buildx build`, without discrepancies.
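For example, to make plain `docker build` delegate to Buildx, and to undo that
later, a quick sketch:
```console
$ docker buildx install
$ docker buildx uninstall
```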
## Loading build results
Building with `--tag` loads the build result to the local image store
automatically when the build finishes. To build without a tag and load the
result, you must pass the `--load` flag.
Loading the build result for multi-platform images is not supported. Use the
`docker buildx build --push` flag when building multi-platform images to push
the output to a registry.
```console
$ docker buildx build --builder cloud-<ORG>-<BUILDER_NAME> \
--platform linux/amd64,linux/arm64 \
--tag <IMAGE> \
--push .
```
If you want to build with a tag, but you don't want to load the results to your
local image store, you can export the build results to the build cache only:
```console
$ docker buildx build --builder cloud-<ORG>-<BUILDER_NAME> \
--platform linux/amd64,linux/arm64 \
--tag <IMAGE> \
--output type=cacheonly .
```
## Multi-platform builds
To run multi-platform builds, you must specify all of the platforms that you
want to build for using the `--platform` flag.
```console
$ docker buildx build --builder cloud-<ORG>-<BUILDER_NAME> \
--platform linux/amd64,linux/arm64 \
--tag <IMAGE> \
--push .
```
If you don't specify the platform, the cloud builder automatically builds for the
architecture matching your local environment.
To learn more about building for multiple platforms, refer to [Multi-platform
builds](/build/building/multi-platform/).
## Cloud builds in Docker Desktop
The Docker Desktop [Builds view](/desktop/use-desktop/builds/) works with
Docker Build Cloud out of the box. This view can show information about not only your
own builds, but also builds initiated by your team members using the same
builder.
Teams using a shared builder get access to information such as:
- Ongoing and completed builds
- Build configuration, statistics, dependencies, and results
- Build source (Dockerfile)
- Build logs and errors
This lets you and your team work collaboratively on troubleshooting and
improving build speeds, without having to send build logs and benchmarks back
and forth between each other.

View File

@@ -1,677 +0,0 @@
---
title: Hydrobuild
description: Get started with Docker Hydrobuild
sitemap: false
---
> **Early Access**
>
> Docker Hydrobuild is an early-access service that provides cloud-based
> builders for your Docker organization.
>
> If you want to get involved in testing Hydrobuild, you can
> [sign up for the early access program](https://www.docker.com/build-early-access-program/?utm_source=docs).
{ .restricted }
Hydrobuild is a service that lets you build your container images faster, both
locally and in CI. Builds run on cloud infrastructure optimally dimensioned for
your workloads, no configuration required. The service uses a remote build
cache, ensuring fast builds anywhere and for all team members.
## How Hydrobuild works
Using Hydrobuild is no different from running a regular build. You invoke a
build the same way you normally would, using `docker buildx build`. The
difference is in where and how that build gets executed.
By default when you invoke a build command, the build runs on a local instance
of BuildKit, bundled with the Docker daemon. With Hydrobuild, you send the
build request to a BuildKit instance running remotely, in the cloud.
The remote builder executes the build steps, and sends the resulting build
output to the destination that you specify. For example, back to your local
Docker Engine image store, or to an image registry.
Hydrobuild provides several benefits over local builds:
- Improved build speed
- Shared build cache
- Native multi-platform builds
And the best part: you don't need to worry about managing builders or
infrastructure. Just connect to your builders, and start building.
## Setup
To get started with Hydrobuild, you need to:
- Download and install a version of Buildx that supports Hydrobuild.
- Have a Docker ID that's part of a Docker organization participating in the
[Hydrobuild early access program](https://www.docker.com/build-early-access-program/?utm_source=docs).
Docker Desktop 4.23.0 and later versions ship with a Hydrobuild-compatible
Buildx binary. Alternatively, you can download and install the binary manually
from [this repository](https://github.com/docker/buildx-desktop).
## Connect to Hydrobuild
To start using Hydrobuild, you must first add the builder's endpoint to your
local Docker configuration.
{{< tabs group="ui" >}}
{{< tab name="CLI" >}}
1. Sign in to your Docker account
```console
$ docker login
```
2. Add the Hydrobuild endpoint.
```console
$ docker buildx create --driver cloud <ORG>/default
```
Replace `ORG` with the Docker Hub namespace of your Docker organization.
This creates a builder named `cloud-ORG-default`.
{{< /tab >}}
{{< tab name="GUI" >}}
Enable the [Builds view](../desktop/use-desktop/builds.md) in Docker Desktop
and complete the following steps:
1. Sign in to your Docker account using the **Sign in** button in Docker Desktop.
2. Open the Docker Desktop settings and navigate to the **Builders** tab.
3. Under **Available builders**, select **Create builder**.
![Create builder GUI screenshot](./images/create-builder-gui.webp)
{{< /tab >}}
{{< /tabs >}}
The builder has native support for the `linux/amd64` and `linux/arm64`
architectures. This gives you a high-performance build cluster for building
multi-platform images natively.
## Use Hydrobuild from the CLI
To build using Hydrobuild, invoke a build command and specify the name of the
builder using the `--builder` flag.
```console
$ docker buildx build --builder cloud-<ORG>-default --tag <IMAGE> .
```
If you want to use Hydrobuild without having to specify the `--builder` flag
each time, you can set it as the default builder.
{{< tabs group="ui" >}}
{{< tab name="CLI" >}}
Run the following command:
```console
$ docker buildx use cloud-<ORG>-default --global
```
{{< /tab >}}
{{< tab name="GUI" >}}
1. Open the Docker Desktop settings and navigate to the **Builders** tab.
2. Find the Hydrobuild builder under **Available builders**.
3. Open the drop-down menu and select **Use**.
![Selecting Hydrobuild as default using the GUI](./images/set-default-builder-gui.webp)
{{< /tab >}}
{{< /tabs >}}
Changing your default builder with `docker buildx use` only changes the default
builder for the `docker buildx build` command. The `docker build` command still
uses the `default` builder, unless you specify the `--builder` flag explicitly.
If you use build scripts, such as `make`, we recommend that you update your
build commands from `docker build` to `docker buildx build`, to avoid any
confusion with regards to builder selection. Alternatively, you can run `docker
buildx install` to make the default `docker build` command behave like `docker
buildx build`, without discrepancies.
## Loading build results
Building with `--tag` loads the build result to the local image store
automatically when the build finishes. To build without a tag and load the
result, you must pass the `--load` flag.
Loading the build result for multi-platform images is not supported. Use the
`docker buildx build --push` flag when building multi-platform images to push
the output to a registry.
```console
$ docker buildx build --builder cloud-<ORG>-default \
--platform linux/amd64,linux/arm64 \
--tag <IMAGE> \
--push .
```
If you want to build with a tag, but you don't want to load the results to your
local image store, you can export the build results to the build cache only:
```console
$ docker buildx build --builder cloud-<ORG>-default \
--platform linux/amd64,linux/arm64 \
--tag <IMAGE> \
--output type=cacheonly .
```
## Multi-platform builds
To run multi-platform builds, you must specify all of the platforms that you
want to build for using the `--platform` flag.
```console
$ docker buildx build --builder cloud-<ORG>-default \
--platform linux/amd64,linux/arm64 \
--tag <IMAGE> \
--push .
```
If you don't specify the platform, Hydrobuild automatically builds for the
architecture matching your local environment.
To learn more about building for multiple platforms, refer to [Multi-platform
builds](./building/multi-platform.md).
## Use Hydrobuild in CI
Using Hydrobuild in CI can speed up your build pipelines, which means less time
spent waiting and context switching. You control your CI workflows as usual,
and delegate the build execution to Hydrobuild.
Building with Hydrobuild in CI involves the following steps:
1. Sign in to a Docker account.
2. Set up Buildx and create the builder.
3. Run the build.
When using Hydrobuild in CI, it's recommended that you push the result to a
registry directly, rather than loading the image and then pushing it. Pushing
directly speeds up your builds and avoids unnecessary file transfers.
If you just want to build and discard the output, export the results to the
build cache or build without tagging the image. Hydrobuild automatically loads
the build result if you build a tagged image. See [Loading build
results](#loading-build-results) for details.
{{< tabs >}}
{{< tab name="GitHub Actions" >}}
```yaml
name: ci
on:
push:
branches:
- "main"
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PAT }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: "lab:latest"
driver: cloud
endpoint: "<ORG>/default"
install: true
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
tags: "<IMAGE>"
# For pull requests, export results to the build cache.
# Otherwise, push to a registry.
outputs: ${{ github.event_name == 'pull_request' && 'type=cacheonly' || 'type=registry,push=true' }}
```
{{< /tab >}}
{{< tab name="GitLab" >}}
```yaml
default:
image: docker:24-dind
services:
- docker:24-dind
before_script:
- docker info
- echo "$DOCKER_PAT" | docker login --username "$DOCKER_USER" --password-stdin
- |
apk add curl jq
ARCH=${CI_RUNNER_EXECUTABLE_ARCH#*/}
BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
mkdir -vp ~/.docker/cli-plugins/
curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
chmod a+x ~/.docker/cli-plugins/docker-buildx
- docker buildx create --use --driver cloud ${DOCKER_ORG}/default
variables:
IMAGE_NAME: <IMAGE>
DOCKER_ORG: <ORG>
# Build multi-platform image and push to a registry
build_push:
stage: build
script:
- |
docker buildx build \
--platform linux/amd64,linux/arm64 \
--tag "${IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}" \
--push .
# Build an image and discard the result
build_cache:
stage: build
script:
- |
docker buildx build \
--platform linux/amd64,linux/arm64 \
--tag "${IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}" \
--output type=cacheonly \
--push .
```
{{< /tab >}}
{{< tab name="CircleCI" >}}
```yaml
version: 2.1
jobs:
# Build multi-platform image and push to a registry
build_push:
machine:
image: ubuntu-2204:current
steps:
- checkout
- run: |
mkdir -vp ~/.docker/cli-plugins/
ARCH=amd64
BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
chmod a+x ~/.docker/cli-plugins/docker-buildx
- run: echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
- run: docker buildx create --use --driver cloud "<ORG>/default"
- run: |
docker buildx build \
--platform linux/amd64,linux/arm64 \
--push \
--tag "<IMAGE>" .
# Build an image and discard the result
build_cache:
machine:
image: ubuntu-2204:current
steps:
- checkout
- run: |
mkdir -vp ~/.docker/cli-plugins/
ARCH=amd64
BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
chmod a+x ~/.docker/cli-plugins/docker-buildx
- run: echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
- run: docker buildx create --use --driver cloud "<ORG>/default"
- run: |
docker buildx build \
--tag temp \
--output type=cacheonly \
.
workflows:
pull_request:
jobs:
- build_cache
release:
jobs:
- build_push
```
{{< /tab >}}
{{< tab name="Buildkite" >}}
The following example sets up a Buildkite pipeline using Hydrobuild. The
example assumes that the pipeline name is `build-push-docker` and that you
manage the Docker access token using environment hooks, but feel free to adapt
this to your needs.
Add the following `environment` hook to your agent's hook directory:
```bash
#!/bin/bash
set -euo pipefail
if [[ "$BUILDKITE_PIPELINE_NAME" == "build-push-docker" ]]; then
export DOCKER_PAT="<DOCKER_PERSONAL_ACCESS_TOKEN>"
fi
```
Create a `pipeline.yml` that uses the `docker-login` plugin:
```yaml
env:
DOCKER_ORG: <ORG>
IMAGE_NAME: <IMAGE>
steps:
- command: ./build.sh
key: build-push
plugins:
- docker-login#v2.1.0:
username: <DOCKER_USER>
password-env: DOCKER_PAT # the variable name in the environment hook
```
Create the `build.sh` script:
```bash
DOCKER_DIR=/usr/libexec/docker
# Get download link for latest buildx binary.
# Set $ARCH to the CPU architecture (e.g. amd64, arm64)
UNAME_ARCH=`uname -m`
case $UNAME_ARCH in
aarch64)
ARCH="arm64";
;;
amd64)
ARCH="amd64";
;;
*)
ARCH="amd64";
;;
esac
BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
# Download docker buildx with Hydrobuild support
curl --silent -L --output $DOCKER_DIR/cli-plugins/docker-buildx $BUILDX_URL
chmod a+x ~/.docker/cli-plugins/docker-buildx
# Connect to your builder and set it as the default builder
docker buildx create --use --driver cloud "$DOCKER_ORG/default"
# Cache-only image build
docker buildx build \
--platform linux/amd64,linux/arm64 \
--tag "$IMAGE_NAME:$BUILDKITE_COMMIT" \
--output type=cacheonly \
.
# Build, tag, and push a multi-arch docker image
docker buildx build \
--platform linux/amd64,linux/arm64 \
--push \
--tag "$IMAGE_NAME:$BUILDKITE_COMMIT" \
.
```
{{< /tab >}}
{{< tab name="Jenkins" >}}
```groovy
pipeline {
agent any
environment {
ARCH = 'amd64'
DOCKER_PAT = credentials('docker-personal-access-token')
DOCKER_USER = credentials('docker-username')
DOCKER_ORG = '<ORG>'
IMAGE_NAME = '<IMAGE>'
}
stages {
stage('Build') {
environment {
BUILDX_URL = sh (returnStdout: true, script: 'curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\\"linux-$ARCH\\"))"').trim()
}
steps {
sh 'mkdir -vp ~/.docker/cli-plugins/'
sh 'curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL'
sh 'chmod a+x ~/.docker/cli-plugins/docker-buildx'
sh 'echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin'
sh 'docker buildx create --use --driver cloud "$DOCKER_ORG/default"'
// Cache-only build
sh 'docker buildx build --platform linux/amd64,linux/arm64 --tag "$IMAGE_NAME" --output type=cacheonly .'
// Build and push a multi-platform image
sh 'docker buildx build --platform linux/amd64,linux/arm64 --push --tag "$IMAGE_NAME" .'
}
}
}
}
```
{{< /tab >}}
{{< tab name="Shell" >}}
```bash
#!/bin/bash
# Get download link for latest buildx binary. Set $ARCH to the CPU architecture (e.g. amd64, arm64)
ARCH=amd64
BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
# Download docker buildx with Hydrobuild support
mkdir -vp ~/.docker/cli-plugins/
curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
chmod a+x ~/.docker/cli-plugins/docker-buildx
# Login to Docker Hub. For security reasons $DOCKER_PAT should be a Personal Access Token. See https://docs.docker.com/security/for-developers/access-tokens/
echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
# Connect to your builder and set it as the default builder
docker buildx create --use --driver cloud "<ORG>/default"
# Cache-only image build
docker buildx build \
--tag temp \
--output type=cacheonly \
.
# Build, tag, and push a multi-arch docker image
docker buildx build \
--platform linux/amd64,linux/arm64 \
--push \
--tag "<IMAGE>" \
.
```
{{< /tab >}}
{{< /tabs >}}
## Hydrobuild in Docker Desktop
The Docker Desktop [Builds view](../desktop/use-desktop/builds.md) works with
Hydrobuild out of the box. With Hydrobuild, the Builds view becomes a
collaboration tool, showing information about not only your own builds, but
also builds initiated by your team members using the same builder.
Teams using a shared builder get access to information such as:
- Ongoing and completed builds
- Build configuration, statistics, dependencies, and results
- Build source (Dockerfile)
- Build logs and errors
This lets you and your team work collaboratively on troubleshooting and
improving build speeds, without having to send build logs and benchmarks back
and forth between each other.
## Optimize for building in the cloud
Hydrobuild runs your builds remotely, and not on the machine where you invoke
the build. This means that file transfers between the client and builder happen
over the network.
Transferring files over the network has a higher latency and lower bandwidth
than local transfers. Hydrobuild has several features to mitigate this:
- It uses attached storage volumes for build cache, which makes reading and
writing cache very fast.
- Loading build results back to the client only pulls the layers that were
changed compared to previous builds.
Despite these optimizations, building remotely can still yield slow context
transfers and image loads, for large projects or if the network connection is
slow. Here are some ways that you can optimize your builds to make the transfer
more efficient:
- [Dockerignore files](#dockerignore-files)
- [Slim base images](#slim-base-images)
- [Multi-stage builds](#multi-stage-builds)
- [Fetch remote files in build](#fetch-remote-files-in-build)
- [Multi-threaded tools](#multi-threaded-tools)
### Dockerignore files
Using a [`.dockerignore` file](./building/context.md#dockerignore-files), you can be
explicit about which local files you don't want to include in the build
context. Files caught by the glob patterns you specify in your ignore-file are
not transferred to the remote builder.
Some examples of things you might want to add to your `.dockerignore` file are:
- `.git` — skip sending the version control history in the build context. Note
that this means you won't be able to run Git commands in your build steps,
such as `git rev-parse` etc.
- Directories containing build artifacts, such as binaries. Build artifacts
created locally during development.
- Vendor directories for package managers, such as `node_modules`.
In general, the contents of your `.dockerignore` file should be similar to what
you have in your `.gitignore`.
### Slim base images
Selecting smaller images for your `FROM` instructions in your Dockerfile can
help reduce the size of the final image. The [Alpine image](https://hub.docker.com/_/alpine)
is a good example of a minimal Docker image that provides all of the OS
utilities you would expect from a Linux container.
There's also the [special `scratch` image](https://hub.docker.com/_/scratch),
which contains nothing at all. Useful for creating images of statically linked
binaries, for example.
### Multi-stage builds
[Multi-stage builds](./guide/multi-stage.md) can make your build run faster,
because stages can run in parallel. It can also make your end-result smaller.
Write your Dockerfile in such a way that the final runtime stage uses the
smallest possible base image, with only the resources that your program requires
to run.
It's also possible to
[copy resources from other images or stages](./building/multi-stage.md#name-your-build-stages),
using the Dockerfile `COPY --from` instruction. This technique can reduce the
number of layers, and the size of those layers, in the final stage.
### Fetch remote files in build
When possible, you should fetch files from a remote location in the build,
rather than bundling the files into the build context. Downloading files on the
Hydrobuild server directly is better, because it will likely be faster than
transferring the files with the build context.
You can fetch remote files during the build using the
[Dockerfile `ADD` instruction](../engine/reference/builder.md#add),
or in your `RUN` instructions with tools like `wget` and `rsync`.
### Multi-threaded tools
Some tools that you use in your build instructions may not utilize multiple
cores by default. One such example is `make` which uses a single thread by
default, unless you specify the `make --jobs=<n>` option. For build steps
involving such tools, try checking if you can optimize the execution with
parallelization.
## Frequently asked questions
### How do I remove Hydrobuild from my system?
If you want to stop using Hydrobuild, and remove it from your system, remove
the builder using the `docker buildx rm` command.
```console
$ docker buildx rm cloud-<ORG>-default
```
This doesn't deprovision the builder backend, it only removes the builder from
your local Docker client.
### Are builders shared between organizations?
No. Each Hydrobuild builder provisioned to an organization is completely
isolated to a single Amazon EC2 instance, with a dedicated EBS volume for build
cache, and end-to-end encryption. That means there are no shared processes or
data between Hydrobuild jobs.
### Do I need to add my secrets to the builder to access private resources?
No. Your interface to Hydrobuild is Buildx, and you can use the existing
`--secret` and `--ssh` CLI flags for managing build secrets.
For more information, refer to:
- [docker buildx build --secret](../engine/reference/commandline/buildx_build.md#secret)
- [docker buildx build --ssh](../engine/reference/commandline/buildx_build.md#ssh)
### How do I unset Hydrobuild as the default builder?
If you've set Hydrobuild as the default builder and want to revert to using the
default `docker` builder, run the following command:
```console
$ docker context use default
```
### How do I manage the build cache with Hydrobuild?
You don't need to manage the builder's cache manually. The system manages it
for you through [garbage collection](./cache/garbage-collection.md).
Hydrobuild uses the following garbage collection limits:
- Size: 90% of 1TB
- Age: cache not used in the past 180 days
- Number of build history records: 10 000
Old cache is automatically removed if you hit any of these limits. You can
check your current cache state using the [`docker buildx du`
command](../engine/reference/commandline/buildx_du.md).
To clear the builder's cache manually, you can use the
[`docker buildx prune`](../engine/reference/commandline/buildx_prune.md) command.
This works like pruning the cache for any other builder.
> **Note**
>
> Pruning Hydrobuild cache also removes the cache for other team members using
> the same builder.

View File

@@ -1814,6 +1814,18 @@ Manuals:
title: Kubernetes driver
- path: /build/drivers/remote/
title: Remote driver
- sectiontitle: Build Cloud
section:
- path: /build/cloud/
title: Overview
- path: /build/cloud/setup/
title: Setup
- path: /build/cloud/usage/
title: Usage
- path: /build/cloud/ci/
title: Build Cloud in CI
- path: /build/cloud/optimization/
title: Optimize for cloud builds
- sectiontitle: Exporters
section:
- path: /build/exporters/
@@ -2205,6 +2217,8 @@ FAQ:
title: Overview
- path: /billing/faqs/
title: Billing
- path: /build/cloud/faq/
title: Build Cloud
- path: /compose/faq/
title: Compose
- sectiontitle: Desktop

View File

@@ -1,7 +1,6 @@
{{- if hugo.IsProduction -}}
User-agent: *
Disallow: /build/hydrobuild/
Disallow: /desktop/synchronized-file-sharing/
Sitemap: {{ site.BaseURL }}/sitemap.xml