Compare commits


194 Commits

Author SHA1 Message Date
Tõnis Tiigi 0bed0b5653
Merge pull request #3242 from rrjjvv/new-bakefile-env-var
Allow bake files to be specified via environment variable
2025-06-25 08:54:04 -07:00
CrazyMax b034cff8c2
Merge pull request #3268 from thaJeztah/bump_engine
vendor: github.com/docker/docker, github.com/docker/cli v28.3.0
2025-06-25 09:38:40 +02:00
CrazyMax fdb0ebc6cb
Merge pull request #3252 from thaJeztah/docker_28.3
Dockerfile: update to docker v28.3.0
2025-06-25 09:26:55 +02:00
Sebastiaan van Stijn 25a9ad6abd
vendor: github.com/docker/cli v28.3.0
full diff: https://github.com/docker/cli/compare/v28.2.2...v28.3.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-06-25 09:21:46 +02:00
Sebastiaan van Stijn a11757121a
vendor: github.com/docker/docker v28.3.0
full diff: https://github.com/docker/docker/compare/v28.2.2...v28.3.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-06-25 09:17:40 +02:00
Sebastiaan van Stijn 7a05ca4547
Dockerfile: update to docker v28.3.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-06-25 09:06:20 +02:00
Tõnis Tiigi 63bb3db985
Merge pull request #3264 from crazy-max/fix-args-history
history: fix required args for inspect attachment command
2025-06-24 11:13:05 -07:00
Tõnis Tiigi fba5d5e554
Merge pull request #3265 from crazy-max/update-govulncheck
dockerfile: update govulncheck to v1.1.4
2025-06-24 11:12:20 -07:00
CrazyMax 179aad79b5
history: fix required args for inspect attachment command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-24 16:06:54 +02:00
Roberto Villarreal d44ffb4bd4 Display source of bake definitions when read from environment
While it would make sense to add "from file" to complement "from env"
(in the common case of `--file` or using the default), it wouldn't
provide any real value.

A simpler solution would have been looking for the existence of the
variable at the point where printing happens.  It felt wrong
duplicating the logic.  Executing the same logic (if it was extracted)
wouldn't be as bad, but still not ideal.

A 'correct' solution would be to explicitly track the source of each
definition, which would be clearer and more future-proof.  It didn't
seem like this feature warranted that amount of engineering (with no
known features that might make use of it).

This implementation seemed like a fair compromise; none of the functions
are exported, and all have only one caller.

I also considered prefixing environment values with `env://` so they
could be thought of (and processed like) `cmd://` values.  I didn't
think it would be viewed as a good solution.

Co-authored-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-06-24 00:32:37 -06:00
Tõnis Tiigi 4c1e7b2119
Merge pull request #3258 from crazy-max/docs-fix-history-attachment
docs: fix history inspect attachment examples
2025-06-23 16:48:09 -07:00
CrazyMax 2d3a9ef229
dockerfile: update govulncheck to v1.1.4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-23 09:56:45 +02:00
CrazyMax ec45eb6ebc
docs: fix history inspect attachment examples
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-23 09:40:13 +02:00
CrazyMax e9b6a01aef
Merge pull request #3259 from crazy-max/build-metadata-provenance-02
build: fix buildx.build.provenance metadata
2025-06-23 09:23:37 +02:00
Tõnis Tiigi c48ccdee36
Merge pull request #3262 from crazy-max/buildkit-0.23.1
dockerfile: update buildkit to 0.23.1
2025-06-20 13:11:18 -07:00
CrazyMax 22f776f664
Merge pull request #3253 from samifruit514/master
driver kubernetes: allow to work in a Memory mount to speed up things
2025-06-20 16:13:19 +02:00
CrazyMax 8da4f0fe64
dockerfile: update buildkit to 0.23.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-20 11:49:38 +02:00
CrazyMax 2588b66fd9
build: fix buildx.build.provenance metadata
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-19 18:47:27 +02:00
CrazyMax 931e714919
vendor: github.com/moby/buildkit 9b91d20
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-19 18:47:27 +02:00
Samuel Archambault d5f914a263 driver kubernetes: allow to work in a Memory mount to speed up things
Signed-off-by: Samuel Archambault <samuel.archambault@getmaintainx.com>
2025-06-18 14:49:54 -04:00
Tõnis Tiigi d09eb752a5
Merge pull request #3256 from jsternberg/buildkit-bump
dockerfile: update buildkit to 0.23.0
2025-06-17 17:42:39 -07:00
Jonathan A. Sternberg 3c2decea38
dockerfile: update buildkit to 0.23.0
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-17 13:29:22 -05:00
Tõnis Tiigi 18041a5855
Merge pull request #3254 from crazy-max/buildkit-0.23.0
vendor: update buildkit v0.23.0
2025-06-17 08:32:22 -07:00
CrazyMax 96ebe9d9a9
vendor: update buildkit v0.23.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-17 15:31:32 +02:00
Tõnis Tiigi 08dd378b59
Merge pull request #3249 from tonistiigi/update-buildkit-v0.23.0-rc2
vendor: update buildkit v0.23.0-rc2
2025-06-16 14:31:08 -07:00
Tonis Tiigi cb29cd0efb
vendor: update buildkit v0.23.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-06-16 13:42:31 -07:00
Tõnis Tiigi 99f1c4b15c
Merge pull request #3245 from crazy-max/history-slsa-check
history: slsa v1 support
2025-06-16 10:53:26 -07:00
Tõnis Tiigi 77e4a88781
Merge pull request #3248 from jsternberg/printer-bake-wait-fix
progress: ensure bake waits for progress to finish printing on error conditions
2025-06-16 10:52:50 -07:00
Jonathan A. Sternberg 7660acf9c7
progress: ensure bake waits for progress to finish printing on error conditions
Some minor fixes to the printer and how bake invokes it. Bake previously
had a race condition that could result in the display not updating on an
error condition, but it was much rarer because the channel communication
was much closer. The refactor added a proxy for the status channel so
there was more of an opportunity to surface the race condition.

When bake exits with an error when reading the bakefiles, it doesn't
wait for the printer to finish so it is possible for the printer to
update the display after an error is printed. This adds an extra `Wait`
in a defer to make sure the printer is finished.

`Wait` has also been fixed to allow it to be called multiple times and
have the same behavior. Previously, it only waited for the done channel
once so only the first wait would block.

The `onclose` method is now called every time the display is paused or
stopped. That was the previous behavior and it's been restored here.

The display only gets refreshed if we aren't exiting. There's no point
in initializing another display if we're about to exit.

The metric writer attached to the printer was erroneously removed. It is
now assigned properly.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-16 12:24:04 -05:00
Tõnis Tiigi 03737f11bc
Merge pull request #3244 from crazy-max/bake-extra-hosts-multi-ip
bake: multi ips support for extra hosts
2025-06-16 09:21:39 -07:00
CrazyMax 4a22b92775
history: slsa v1 support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-16 16:23:20 +02:00
CrazyMax ba782f195b
Merge pull request #3236 from docker/dependabot/github_actions/softprops/action-gh-release-2.3.2
build(deps): bump softprops/action-gh-release from 2.2.2 to 2.3.2
2025-06-16 13:38:29 +02:00
CrazyMax 989978a42b
bake: multi ips support for extra hosts
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-16 11:55:17 +02:00
Roberto Villarreal cb54ddb9fe Allow bake files to be specified via environment variable
The environment variable `BUILDX_BAKE_FILE` (and optional variable
`BUILDX_BAKE_FILE_SEPARATOR`) can be used to specify one or more bake
files (similar to `compose`).  This is mutually exclusive with `--file`
(which takes precedence).

This is done very early to ensure the values are treated just like
`--file`, e.g., participate in telemetry.  This includes leaving
relative paths as-is, which deviates from `compose` (which makes them
absolute).

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-06-16 00:08:54 -06:00
Tõnis Tiigi eb43f4c237
Merge pull request #3183 from crazy-max/modernize-fix
hack: modernize-fix bake target
2025-06-13 15:39:07 -07:00
Tõnis Tiigi 43e2f27cac
Merge pull request #3240 from jsternberg/remove-debugcmd-package
commands: remove debug package in commands
2025-06-13 11:46:37 -07:00
Jonathan A. Sternberg 7f5ff6b797
commands: remove debug package in commands
The package just causes the entire flow to be more complicated as build
has to pretend it doesn't know about debug options and the debugger has
to pretend it doesn't know about the build.

This abstraction has been difficult when integrating a DAP command into
this same workflow, so I don't think this abstraction has much value.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-13 09:32:35 -05:00
Tõnis Tiigi 32e9bfcba8
Merge pull request #3237 from jsternberg/vendor-update
vendor: github.com/moby/buildkit v0.23.0-rc1
2025-06-11 14:47:39 -07:00
Jonathan A. Sternberg e1adeee898
vendor: github.com/moby/buildkit v0.23.0-rc1
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-11 16:29:31 -05:00
Tõnis Tiigi 1e969978aa
Merge pull request #3234 from crazy-max/bake-add-host
bake: extra-hosts support
2025-06-11 12:50:34 -07:00
dependabot[bot] 640541cefa
build(deps): bump softprops/action-gh-release from 2.2.2 to 2.3.2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.2.2 to 2.3.2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](da05d55257...72f2c25fcb)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-version: 2.3.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-11 18:13:37 +00:00
CrazyMax b514ed45fb
bake: extra-hosts support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-11 19:38:01 +02:00
Tõnis Tiigi 1b4bd20e6f
Merge pull request #3233 from tonistiigi/imagetools-registrytoken
imagetools: support registrytoken auth in docker config
2025-06-11 09:07:19 -07:00
Tonis Tiigi da426ecd3a
imagetools: support registrytoken auth in docker config
This is not supported by the Authorizer from containerd and
needs to be added manually. Build authentication happens through
BuildKit session that already supports this.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-06-10 23:20:08 -07:00
Tonis Tiigi 10618d4c73
imagetools: move auth function to separate file
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-06-10 22:04:56 -07:00
Tõnis Tiigi 52b5d0862f
Merge pull request #3224 from jsternberg/evaluate-handler
build: change build handler to evaluate instead of onresult
2025-06-10 11:07:31 -07:00
Tõnis Tiigi d1e22e5fc3
Merge pull request #3228 from tonistiigi/hack-link-gold
lint: fix linter error on arm64
2025-06-10 10:37:36 -07:00
Jonathan A. Sternberg 38cf84346c
build: change build handler to evaluate instead of onresult
This changes the build handler to customize the behavior of evaluate
rather than onresult and also simplifies the `ResultHandle`. The
`ResultHandle` is now only valid within the gateway callback and can be
used to start containers from the handler.

`Evaluate` now executes inside of the gateway callback rather than
having a separate implementation that executes or re-invokes the build.
This keeps the gateway callback session open until the debugger has
returned.

The `ErrReload` for monitor has now been moved into the `build` package
and been renamed to `ErrRestart`. This is because it restarts the build
so the name makes a bit more sense. The actual use of this functionality
is still tied to the monitor reload.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-10 11:48:41 -05:00
Jonathan A. Sternberg 34e59ca1bd
progress: fix progress writer pause and unpause to prevent panics
This changes the progress printer's pause and unpause implementation to
be reentrant to prevent race conditions and it also allows the status
updates to be buffered when the display is paused.

The previous implementation mixed the pause implementation with the
finish implementation and could cause a send on closed channel panic
because it could close the status channel before it had finished being
used. Now, the status channel is not closed.

When the display is enabled, the status channel will be forwarded to an
internal channel that is used to display the updates. When the display
is paused, the status channel will have the statuses buffered in memory
to be sent when the progress display is resumed.

The `Unpause` method has also been renamed to `Resume`.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-09 14:07:52 -05:00
Tonis Tiigi 2706e2f429
lint: fix linter error on arm64
Something has changed in golang or alpine that now requires the gold
linker by default. In the future this could be updated to clang/lld
instead, e.g. by just calling xx.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-06-09 10:59:23 -07:00
Tõnis Tiigi 02ab492cac
Merge pull request #3226 from ArthurFlag/ENGDOCS-2699-build-list-and-explain-accepted-schemes
docs: restructure examples for context
2025-06-06 11:54:37 -07:00
Tõnis Tiigi b8d8c7b1a6
Merge pull request #3227 from crazy-max/hcl-merge-tests
bake: hcl merged tests
2025-06-06 11:52:59 -07:00
ArthurFlag dc6ec35e1d
docs: restructure examples for context
Signed-off-by: ArthurFlag <arthur.flageul@docker.com>
2025-06-06 17:22:22 +02:00
CrazyMax 3f49ee5a90
bake: hcl merged tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-06 17:16:56 +02:00
Tõnis Tiigi c45185fde0
Merge pull request #3222 from jsternberg/controller-remove-final
controller: remove remaining parts of the controller
2025-06-05 10:18:40 -07:00
Jonathan A. Sternberg 1d7cda1232
controller: remove remaining parts of the controller
Removes all references to the controller and moves the remaining
sections of code to other packages.

Processes has been moved to monitor where it is used and the data
structs have been removed so buildflags is used directly. The controller
build function has been moved to the commands package.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-05 11:57:03 -05:00
Tõnis Tiigi fb916a960c
Merge pull request #3214 from tonistiigi/internal-codes
cmd: custom exit codes for internal, resource and canceled errors
2025-06-05 09:11:19 -07:00
Tõnis Tiigi 60b1eda2df
Merge pull request #3220 from jsternberg/monitor-driven-build
monitor: move remaining controller functionality into monitor
2025-06-04 13:50:24 -07:00
Jonathan A. Sternberg 8f2604b6b4
monitor: move remaining controller functionality into monitor
This creates a `Monitor` type that keeps the global state between
monitor invocations and allows the monitor to exist during the build so
it can be utilized for callbacks.

The result handler is now registered with the monitor during the build
and `Run` will use the result if it is present and the configuration
intends the monitor to be invoked with the given result.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-04 15:27:24 -05:00
Tõnis Tiigi bb5b5e37e8
Merge pull request #3219 from jsternberg/monitor-reload-refactor
monitor: refactor how reload works
2025-06-04 13:26:48 -07:00
Jonathan A. Sternberg 21ebf82c99
monitor: refactor how reload works
The build now happens in a loop and the monitor is run after every
build. The monitor can return `ErrReload` to signal to the main thread
that it should reload the build result.

This will be used in the future to move the monitor into a callback
rather than as a separate existence. It allows the monitor to not
control the build itself which now makes it possible to completely
remove the controller.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-04 15:06:31 -05:00
Tõnis Tiigi d61853bbb3
Merge pull request #3213 from jsternberg/build-refactors
build: refactor some of the build functions into smaller utility functions
2025-06-03 14:43:26 -07:00
Jonathan A. Sternberg 65e46cc6af
commands: simplify passing stdin to the build when the monitor is configured
The monitor needs stdin to run and isn't compatible with loading a
context or dockerfile from stdin. We already disallow this combination
and, with the removal of the remote controller, there's no way to use
stdin during the build when invoke is configured.

This just removes the extra code to allow forwarding stdin to the build
when the monitor is configured to simplify that section of code.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-03 15:41:23 -05:00
Jonathan A. Sternberg 6a0f5610e3
controller: remove the controller interface
The controller interface is removed and the local controller is used for
only the initial build, invoke, and rebuilds.

Process control has been moved to the monitor.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-03 15:41:23 -05:00
Jonathan A. Sternberg e78aa98c92
build: refactor some of the build functions into smaller utility functions
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-03 15:41:22 -05:00
CrazyMax e6ff731323
Merge pull request #3216 from jsternberg/keep-storage-deprecation-notice
commands: update deprecation notice for keep-storage
2025-06-02 16:58:58 +02:00
Jonathan A. Sternberg 9bd1ba2f5c
commands: update deprecation notice for keep-storage
The `--keep-storage` flag was changed to `--reserved-space`. Before it was
changed to that name, it was changed to `--max-storage`. This flag never
made it into a release as the name was changed before release, but the
update to the flag in buildx forgot to update the deprecation notice.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-02 09:35:39 -05:00
CrazyMax f90170965a
Merge pull request #3207 from rrjjvv/show-var-types
Show types during variable list operation
2025-06-02 09:09:18 +02:00
Tonis Tiigi b3e37e899f
cmd: custom exit codes for internal, resource and canceled errors
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-30 11:12:24 -07:00
Tõnis Tiigi a04b7d8689
Merge pull request #3212 from thaJeztah/bump_engine
vendor: github.com/docker/docker, docker/cli v28.2.2
2025-05-30 10:49:29 -07:00
Tõnis Tiigi 52bf4bf7ce
Merge pull request #3210 from thaJeztah/dockerfile_bump_docker
Dockerfile: update to docker v28.2.2
2025-05-30 10:49:08 -07:00
Sebastiaan van Stijn 13031cc2ca
vendor: github.com/docker/docker, docker/cli v28.2.2
no changes in vendored file, just version update

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-30 17:30:09 +02:00
Sebastiaan van Stijn 46fae59e2e
Dockerfile: update to docker v28.2.2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-30 17:24:23 +02:00
Roberto Villarreal b40b2caf1a Show types during variable list operation
If a type was explicitly provided, it will be displayed in the variable
listing.  Inferred type names are not displayed, as they likely would
not match the user's intent.

Previously only `string` and `bool` default values were displayed in the
listing.  All default values, regardless of type, are now displayed.

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-29 17:36:46 -06:00
Tõnis Tiigi 1436f93aa1
Merge pull request #3194 from thaJeztah/bump_engine
vendor: github.com/docker/docker, github.com/docker/cli v28.2.1
2025-05-29 16:05:26 -07:00
Sebastiaan van Stijn 99d82e6cea
vendor: github.com/docker/cli v28.2.1
full diff: https://github.com/docker/cli/compare/v28.1.1...v28.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-29 09:36:54 +02:00
Sebastiaan van Stijn bc620fcc71
vendor: github.com/docker/docker v28.2.1
full diff: https://github.com/docker/docker/compare/v28.1.1...v28.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-29 09:34:27 +02:00
CrazyMax e3c6618db2
Merge pull request #3201 from jsternberg/remove-generated-files
hack: remove code generation related to generated files
2025-05-23 11:14:45 +02:00
Tõnis Tiigi 542bda49f2
Merge pull request #3188 from crazy-max/buildkit-0.22
dockerfile: update buildkit to 0.22.0
2025-05-22 15:33:45 -07:00
Jonathan A. Sternberg 781a3f117a
hack: remove code generation related to generated files
With the removal of the protobuf for the controller, there are no longer
any generated files. Remove the makefile targets and the associated
dockerfiles and bake targets.

This wasn't being included in CI because it wasn't part of the
`validate` target.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-05-22 14:59:42 -05:00
CrazyMax 614cc880dd
dockerfile: update buildkit to 0.22.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-21 16:55:34 +02:00
CrazyMax dfad6e0b1f
Merge pull request #3189 from rrjjvv/var-typing-docs
Add variable typing to reference docs
2025-05-21 16:34:59 +02:00
CrazyMax 776dbd4086
Merge pull request #3198 from rrjjvv/var-typing-no-value-fix
Consider typed, value-less variables to have `null` value
2025-05-21 16:34:42 +02:00
Tõnis Tiigi 75f1d5e26b
Merge pull request #3199 from crazy-max/buildkit-0.22.0
vendor: github.com/moby/buildkit v0.22.0
2025-05-21 07:26:25 -07:00
CrazyMax 291c353575
bake: TestEmptyVariable
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-21 16:15:30 +02:00
CrazyMax a11bb4985c
vendor: github.com/moby/buildkit v0.22.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-21 14:57:25 +02:00
Roberto Villarreal cfeca919a9 Add variable typing to reference docs
This documents the variable typing introduced in #3167.

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-20 13:49:27 -06:00
Roberto Villarreal 3c0f5c5c21 Consider typed, value-less variables to have `null` value
A variable with a type but no default value or override resulted in an
empty string.  This matches the legacy behavior of untyped variables,
but does not make sense when using types (an empty string is itself a
type violation for everything except `string`).  All variables defined
with a type but with no value are now a typed `null`.

A variable explicitly typed `any` was previously treated as if the
typing was omitted; with no defined value or override, that resulted in
an empty string.  The `any` type is now distinguished from an omitted
type; these variables, with no default or override, are also `null`.

In other respects, the behavior of `any` is unchanged and largely
behaves as if the type was omitted.  It's not clear whether it should be
supported, let alone how it should behave, so these tests were removed.
It's being treated as undefined behavior.

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-20 13:09:13 -06:00
CrazyMax ea2b7020a4
Merge pull request #3193 from crazy-max/buildkit-0.22.0-rc2
vendor: github.com/moby/buildkit v0.22.0-rc2
2025-05-19 17:29:11 +02:00
CrazyMax 5ba7d7eb4f
vendor: github.com/moby/buildkit v0.22.0-rc2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-19 17:14:59 +02:00
CrazyMax 95ac2b4d09
Merge pull request #3192 from crazy-max/update-cli-docs-tool
vendor: github.com/docker/cli-docs-tool v0.10.0
2025-05-19 16:23:50 +02:00
CrazyMax 934cca3ab1
vendor: github.com/docker/cli-docs-tool v0.10.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-19 15:54:56 +02:00
CrazyMax 6e562e9ede
Merge pull request #3191 from glours/bump-compose-go-v2.6.3
bump compose-go to v2.6.3
2025-05-19 15:05:34 +02:00
Guillaume Lours 51b8646c44
bump compose-go to v2.6.3
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-05-19 14:49:51 +02:00
CrazyMax 57a1c97c9d
Merge pull request #3187 from crazy-max/buildkit-0.22.0-rc1
vendor: github.com/moby/buildkit v0.22.0-rc1
2025-05-14 20:29:12 +02:00
CrazyMax 7a54b6ee7e
vendor: github.com/moby/buildkit v0.22.0-rc1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-14 20:10:27 +02:00
CrazyMax 2e3108975d
Merge pull request #3186 from crazy-max/fix-readme
update readme
2025-05-14 13:40:07 +02:00
CrazyMax cd48c516e2
update readme
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-14 13:21:45 +02:00
Tõnis Tiigi 2149f03225
Merge pull request #3184 from tonistiigi/lint-merge-conflict-fix
lint: fix linter after merge conflict
2025-05-13 16:52:49 -07:00
Tonis Tiigi f41d5072fd
lint: fix linter after merge conflict
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-13 16:19:33 -07:00
Tõnis Tiigi 06a1a6344a
Merge pull request #3180 from crazy-max/dockerfile-update
dockerfile: update docker to 28.1.1 and buildkit to 0.21.1
2025-05-13 15:09:27 -07:00
Tõnis Tiigi 4feb05b0bf
Merge pull request #3179 from tonistiigi/ls-format-json-current
ls: make sure current builder is available in JSON output
2025-05-13 14:27:36 -07:00
Tõnis Tiigi 277548e91b
Merge pull request #3152 from crazy-max/history-export-finalize
history: make sure build record is finalized before exporting
2025-05-13 14:20:04 -07:00
Tõnis Tiigi 3f0aec1b3e
Merge pull request #3182 from crazy-max/go-1.24
update to go 1.24
2025-05-13 12:14:37 -07:00
CrazyMax 1383aa30c1
lint: modernize fix
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 20:44:57 +02:00
CrazyMax 09b824b9dc
update to go 1.24
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 19:27:03 +02:00
CrazyMax c1209acb27
hack: modernize-fix bake target
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 18:22:24 +02:00
CrazyMax 68ce10c4d9
tests: history cmds
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 17:17:47 +02:00
CrazyMax 78353f4e8e
history: make sure build record is finalized before exporting
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 17:17:47 +02:00
CrazyMax 03f9877429
Merge pull request #3181 from crazy-max/golangci-lint-v2
update golangci-lint to v2.1.5
2025-05-13 17:17:17 +02:00
CrazyMax b606e2f6bb
update golangci-lint to v2.1.5
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 16:54:43 +02:00
CrazyMax 874bb14de9
hack: golangci build from source support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 16:20:35 +02:00
CrazyMax a9ab809d15
Merge pull request #3138 from crazy-max/history-copy
history: copy update
2025-05-13 13:06:05 +02:00
CrazyMax 72fde4c53a
history: copy update
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 12:20:00 +02:00
CrazyMax df8b997588
dockerfile: update buildkit to 0.21.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 12:01:12 +02:00
CrazyMax f92c679e14
dockerfile: update docker to 28.1.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 12:00:45 +02:00
Tonis Tiigi a3180cbf3d
ls: make sure current builder is available in JSON output
lsBuilder has a field called Current that gets lost because the
embedded struct implements a custom MarshalJSON method.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-12 22:30:44 -07:00
Tõnis Tiigi c398e2a224
Merge pull request #3177 from crazy-max/docs-fix-hcl-syntax
docs: remove commas in bake hcl object blocks
2025-05-12 10:47:11 -07:00
Tõnis Tiigi 865ad2b8d5
Merge pull request #3167 from rrjjvv/variable-typing
Allow variables to be explicitly typed (and enforced)
2025-05-12 10:45:10 -07:00
CrazyMax 729d58152c
docs: remove commas in bake hcl object blocks
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-12 10:30:45 +02:00
CrazyMax 9998ef7045
Merge pull request #3171 from glours/bump-compose-go-v2.6.2
bump compose-go to v2.6.2
2025-05-12 09:15:42 +02:00
CrazyMax 7e960152a1
Merge pull request #3168 from tonistiigi/bake-call-empty
bake: fix nil deference on empty call definition
2025-05-12 09:13:05 +02:00
Roberto Villarreal 56d39e619d Skip case-sensitive test on Windows
Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Roberto Villarreal 65aea3028f Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Roberto Villarreal 956fc0c9eb Use unique environment variables to separate JSON from default parsing
The primary intent is to make JSON parsing explicitly opt-in rather than
using heuristics to determine intent.

With some exceptions, given bake variable `VAR`, an environment variable
`VAR_JSON` must be used to provide JSON content.  The value in
`VAR_JSON` will be ignored when:
* a bake built-in of that same name exists
* a user-provided variable of that same name exists
* typing (attribute `type`) is not present

The first is unlikely to happen as built-ins will likely start with
`BUILDX_BAKE_`, an unlikely prefix for end users.  The second may be a
real scenario, where users have `VAR_JSON` dedicated to accepting a
string with JSON content and decoding via an HCL function.  This will
continue to work as-is, but can be simplified by removing the variable
from their bake file (`VAR_JSON`) and applying typing (to `VAR`).

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Roberto Villarreal 1f56984626 Implement CSV-based overrides for list-like variables
Though CSV is favored for 'simple' lists, a JSON value will be used if
it parses without error.  This assumes that it is extremely unlikely
that something that parses as JSON would be intended to be parsed as
CSV, e.g. `["a"` and `"b"]`, as opposed to `a` and `b`.  If
parsing/conversion fails, it is treated as if it was a CSV.

Since the CSV approach required processing of each element, code was
refactored to reuse the same logic used for individual non-typed
variables.

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Roberto Villarreal 5b8a3b3728 Allow variables to be explicitly typed (and enforced)
This allows variables to have explicit types, similar to Terraform
variables.  It uses HCL's `typeexpr` extension for the specification.
For conversion of overrides to complex types (when explicit typing is
provided), HCL's native JSON-based unmarshalling is used.

Typing is independent of any default, but if a default is provided, it
will be validated.  Similarly, if an override is provided, it will be
converted to that type.

When typing is not provided, previous behavior is used, namely
passing through as a string when no default, converting to primitives if
the default was primitive, and failing otherwise (complex types).

For complex types, the happy path is lists of primitives, but in theory
any complex/composite type can be used provided they are expressed
correctly in JSON.  In the interest of simplicity and correctness, there
are no shortcuts for lists.  There *is* a shortcut for strings, as users
don't provide them for untyped variables and requiring it would be
unintuitive.

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Guillaume Lours acdf95fe75
bump compose-go to v2.6.2
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-05-07 15:29:18 +02:00
Tõnis Tiigi 9e17bc7a4c
Merge pull request #3127 from sarahsanders-docker/docs-buildx-history
docs: add descriptions and examples for buildx history commands
2025-05-05 15:00:01 -07:00
Tonis Tiigi e1e8f5c68d
docs: updated reference docs generation
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-05 14:32:03 -07:00
sarahsanders-docker 6ed39b2618
fix examples and headings
Signed-off-by: sarahsanders-docker <sarah.sanders@docker.com>
2025-05-05 14:32:03 -07:00
sarahsanders-docker 03019049e8
addressed feedback
Signed-off-by: sarahsanders-docker <sarah.sanders@docker.com>
2025-05-05 14:32:03 -07:00
sarahsanders-docker 23ce21c341
feedback + updated examples + added links for h3 headings
Signed-off-by: sarahsanders-docker <sarah.sanders@docker.com>
2025-05-05 14:32:03 -07:00
sarahsanders-docker 4dac5295a1
Add descriptions and examples for buildx history commands
Signed-off-by: sarahsanders-docker <sarah.sanders@docker.com>
2025-05-05 14:32:03 -07:00
Tonis Tiigi b00dd42037
bake: fix nil dereference on empty call definition
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-05 11:07:54 -07:00
Tõnis Tiigi 9a48aca461
Merge pull request #3136 from ctalledo/reworked-fix-for-moby-45458
Output correct image ID when using Docker with the containerd-snapshotter
2025-05-01 16:59:42 -07:00
Cesar Talledo 679407862f Output correct image ID when using Docker with the containerd-snapshotter.
Prior to this change, the following command emits the wrong image ID when buildx
uses the "docker-container" driver and Docker is configured with the
containerd-snapshotter.

$ docker buildx build --load --iidfile=img.txt

$ docker run --rm "$(cat img.txt)" echo hello
docker: Error response from daemon: No such image: sha256:4ac37e81e00f242010e42f3251094e47de6100e01d25e9bd0feac6b8906976df.
See 'docker run --help'.

The problem is that buildx is outputting the incorrect image ID in this scenario
(it's outputting the container image config digest, instead of the container
image digest used by the containerd-snapshotter).

This commit fixes this. See https://github.com/moby/moby/issues/45458.

Signed-off-by: Cesar Talledo <cesar.talledo@docker.com>
2025-05-01 16:33:22 -07:00
Tõnis Tiigi 674cfff1a4
Merge pull request #3165 from tonistiigi/fix-openbsd-ci
attempt openbsd fix
2025-05-01 11:29:09 -07:00
Tonis Tiigi 19a241f4ed
attempt openbsd fix
7.5 packages seem to have been removed from the main mirrors. Couldn't
find a popular 7.6/7.7 image on Vagrant Cloud.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-01 11:02:38 -07:00
Tõnis Tiigi 7da31076ae
Merge pull request #3164 from jsternberg/controller-errdefs-proto-removal
controller: remove controller/errdefs protobuf files
2025-05-01 10:40:41 -07:00
Jonathan A. Sternberg 384f0565f5
controller: remove controller/errdefs protobuf files
Remove the protobuf files associated with controller/errdefs.

This doesn't completely remove the type, as the monitor still uses it as
a signal to start.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-05-01 12:14:36 -05:00
Tõnis Tiigi 6df3a09284
Merge pull request #3126 from jsternberg/controller-removal
controller: remove controller grpc service
2025-04-30 18:20:51 -07:00
Tõnis Tiigi e7be640d9b
Merge pull request #3155 from crazy-max/fix-bin-image
ci: fix bin-image job
2025-04-30 17:53:20 -07:00
Jonathan A. Sternberg 2f1be25b8f
controller: remove controller grpc service
Remove the controller grpc service along with associated code related to
sessions or remote controllers.

Data types that are still used with complicated dependency chains have
been kept in the same package for a future refactor.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-04-30 13:46:58 -05:00
CrazyMax a40edbb47b
ci: fix bin-image job
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-24 14:40:43 +02:00
Tõnis Tiigi 2eaea647d8
Merge pull request #3146 from fiam/alberto/propagate-otel-trace
chore(dockerutil): propagate OTEL context to Docker daemon
2025-04-23 09:45:17 -07:00
Alberto Garcia Hierro f3a3d9c26b
chore(dockerutil): propagate OTEL context to Docker daemon
This allows correlating operations triggered by a build (e.g.
a client-side pull) with the build that generated them.
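The mechanics aren't shown in the commit, but W3C Trace Context propagation over HTTP boils down to attaching a `traceparent` header to outgoing daemon requests. The sketch below hand-rolls that header with made-up IDs and a placeholder host; the real code would use OTEL's propagators rather than formatting the header itself.

```go
package main

import (
	"fmt"
	"net/http"
)

// traceparent formats a W3C Trace Context header value:
// version "00", then trace-id, parent span-id, and sampling flags.
func traceparent(traceID, spanID string, sampled bool) string {
	flags := "00"
	if sampled {
		flags = "01"
	}
	return fmt.Sprintf("00-%s-%s-%s", traceID, spanID, flags)
}

func main() {
	// Hypothetical request to a Docker daemon endpoint; the host is a
	// placeholder and the request is never actually sent.
	req, _ := http.NewRequest("GET", "http://daemon.invalid/_ping", nil)
	req.Header.Set("Traceparent", traceparent(
		"4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7", true))
	fmt.Println(req.Header.Get("Traceparent"))
	// prints 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
}
```

A collector that receives spans from both the build client and the daemon can then join them on the shared trace ID.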

Signed-off-by: Alberto Garcia Hierro <damaso.hierro@docker.com>
2025-04-22 20:29:30 +01:00
Tõnis Tiigi 9ba3f77219
Merge pull request #3143 from crazy-max/ci-fix-vagrant
ci: fix vagrant build
2025-04-22 12:17:29 -07:00
Tõnis Tiigi 2799ed6dd8
Merge pull request #3142 from thaJeztah/bump_docker_28.1.1
vendor: github.com/docker/docker, docker/cli v28.1.1, containerd v2.0.5
2025-04-22 11:37:27 -07:00
CrazyMax 719a41a4c3
Merge pull request #3135 from docker/dependabot/github_actions/softprops/action-gh-release-2.2.2
build(deps): bump softprops/action-gh-release from 2.2.1 to 2.2.2
2025-04-22 14:03:03 +02:00
CrazyMax a9807be458
ci: fix vagrant build
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-22 13:47:03 +02:00
CrazyMax 7a7be2ffa1
Merge pull request #3141 from ndeloof/path.IsAbs
use filepath.IsAbs to support windows paths
2025-04-22 13:42:30 +02:00
Sebastiaan van Stijn ab533b0cb4
vendor: github.com/docker/cli v28.1.1
no changes in vendored code

diff:  https://github.com/docker/cli/compare/v28.1.0...v28.1.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-22 12:30:00 +02:00
Sebastiaan van Stijn 0855cab1bd
vendor: github.com/docker/docker v28.1.1
diff:  https://github.com/docker/docker/compare/v28.1.0...v28.1.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-22 12:29:21 +02:00
Sebastiaan van Stijn 735555ff7b
vendor: github.com/containerd/containerd v2.0.5
full diff: https://github.com/containerd/containerd/compare/v2.0.4...v2.0.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-22 12:28:37 +02:00
Sebastiaan van Stijn 67ccbd06f6
vendor: golang.org/x/oauth2 v0.29.0
notable changes

- fixes CVE-2025-22868
- oauth2.go: use a more straightforward return value
- oauth2: Deep copy context client in NewClient
- jws: improve fix for CVE-2025-22868

full diff: https://github.com/golang/oauth2/compare/v0.23.0...v0.29.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-22 12:26:36 +02:00
Nicolas De Loof c370f90b73
use filepath.IsAbs to support windows paths
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2025-04-22 11:37:53 +02:00
CrazyMax 9730a20f6b
Merge pull request #3133 from tonistiigi/build-defers-fix
build: make sure defers always run in the end of the build
2025-04-22 09:39:05 +02:00
dependabot[bot] 2e93ac32bc
build(deps): bump softprops/action-gh-release from 2.2.1 to 2.2.2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.2.1 to 2.2.2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](c95fe14893...da05d55257)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-version: 2.2.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-21 18:55:17 +00:00
Tonis Tiigi 19c22136b4
build: make sure defers always run in the end of the build
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-21 09:28:44 -07:00
Tõnis Tiigi bad5063577
Merge pull request #3107 from thaJeztah/bump_engine
vendor: github.com/docker/docker, github.com/docker/cli v28.1.0
2025-04-18 17:05:35 -07:00
Sebastiaan van Stijn 286c018f84
vendor: github.com/docker/cli v28.1.0
full diff: https://github.com/docker/cli/compare/v28.0.4...v28.1.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-18 15:54:36 -07:00
Sebastiaan van Stijn ac970c03e7
vendor: github.com/docker/docker v28.1.0
full diff: https://github.com/docker/docker/compare/v28.0.4...v28.1.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-18 15:54:08 -07:00
Sebastiaan van Stijn 5398c33937
vendor: github.com/mattn/go-runewidth v0.0.16
adds support for Unicode 15.1.0

full diff: https://github.com/mattn/go-runewidth/compare/v0.0.15...v0.0.16

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-18 15:54:06 -07:00
Tõnis Tiigi 1365652a74
Merge pull request #3113 from crazy-max/update-hcl
vendor: update hcl dependencies
2025-04-18 15:51:53 -07:00
Tõnis Tiigi a4f0a21468
Merge pull request #3125 from thaJeztah/dockerfile_update_engine
Dockerfile: update to docker v28.1.0
2025-04-18 15:51:03 -07:00
CrazyMax d55616b22c
Merge pull request #3130 from crazy-max/fix-pr-assign
ci: update pr-assign-author
2025-04-18 13:52:27 +02:00
CrazyMax 113606a24c
ci: update pr-assign-author
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-18 13:38:59 +02:00
CrazyMax cd38da0244
Merge pull request #3123 from thaJeztah/update_spdy
vendor: github.com/moby/spdystream v0.5.0 (indirect)
2025-04-17 16:25:49 +02:00
Sebastiaan van Stijn cc6547c51d
Dockerfile: update to docker v28.1.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-17 16:17:55 +02:00
Sebastiaan van Stijn 26f2e002c6
vendor: github.com/moby/spdystream v0.5.0 (indirect)
This is an indirect dependency, but I recalled it fixed some leaking
goroutines, so it may be worth considering updating.

full diff: https://github.com/moby/spdystream/compare/v0.4.0...v0.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-17 14:26:25 +02:00
CrazyMax 372feb38ff
Merge pull request #3120 from crazy-max/ci-pr-assign
ci: assign author on pull request
2025-04-16 15:39:06 +02:00
CrazyMax b08d576ec0
Merge pull request #3119 from thaJeztah/bump_archive
vendor: github.com/moby/go-archive v0.1.0
2025-04-16 15:14:07 +02:00
CrazyMax 0034cdbffc
ci: assign author on pull request
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-16 15:06:01 +02:00
CrazyMax a9666e7df1
Merge pull request #3118 from crazy-max/dockerfile-buildkit-0.21.0
dockerfile: update buildkit to 0.21.0
2025-04-16 14:37:09 +02:00
CrazyMax b7e77af256
dockerfile: update buildkit to 0.21.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-16 14:14:35 +02:00
CrazyMax d72ff8f88c
Merge pull request #2842 from thaJeztah/test_registry_v3
Dockerfile: update to registry v3.0.0
2025-04-16 14:14:00 +02:00
Sebastiaan van Stijn d75c650792
vendor: github.com/moby/go-archive v0.1.0
full diff: https://github.com/moby/go-archive/compare/21f3f3385ab7...v0.1.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-16 13:49:57 +02:00
Tõnis Tiigi 8c74109330
Merge pull request #3115 from crazy-max/buildkit-v0.21.0
vendor: github.com/moby/buildkit v0.21.0
2025-04-15 09:30:51 -07:00
CrazyMax 9f102b5c34
vendor: github.com/moby/buildkit v0.21.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-15 16:54:08 +02:00
CrazyMax b4b2dc9664
Merge pull request #3114 from tonistiigi/bake-variadic-fix
bake: fix variadic_params inconsistency for user functions
2025-04-15 15:48:49 +02:00
Tonis Tiigi 2e81e301ae
bake: fix variadic_params inconsistency for user functions
There was an inconsistency between the variables used for function
definitions in the HCL and JSON formats. Updated JSON to match HCL,
fixed the documentation, and removed the unused code from the userfunc
pkg (based on HCL upstream) to avoid confusion.

Theoretically we could add some temporary backwards compatibility
for the JSON format, but I think it is unlikely that someone uses the
JSON format for this and also defines variadic parameters.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-14 10:56:20 -07:00
CrazyMax fb4417e14d
vendor: update hcl dependencies
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-14 12:55:59 +02:00
CrazyMax eb74b483bd
Merge pull request #3110 from crazy-max/buildkit-0.21.0-rc2
vendor: github.com/moby/buildkit v0.21.0-rc2
2025-04-11 19:44:05 +02:00
CrazyMax db194abdc8
vendor: github.com/moby/buildkit v0.21.0-rc2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-11 19:28:20 +02:00
CrazyMax 86eb3be1c4
Merge pull request #3103 from thaJeztah/use_atomicwriter
migrate to use github.com/moby/sys/atomicwriter
2025-04-11 12:05:00 +02:00
CrazyMax a05a166f81
Merge pull request #3106 from crazy-max/inline-result
build: print frontend inline message
2025-04-11 12:04:47 +02:00
CrazyMax cfc9d3a8c9
Merge pull request #3105 from glours/bump-compose-go-v2.6.0
bump compose-go to version v2.6.0
2025-04-11 10:57:53 +02:00
CrazyMax 5bac0b1197
build: print frontend inline message
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-11 09:45:25 +02:00
Guillaume Lours 0b4e624aaa
bump compose-go to version v2.6.0
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-04-10 18:04:00 +02:00
Sebastiaan van Stijn b7b5a3a1cc
migrate to use github.com/moby/sys/atomicwriter
The github.com/docker/docker/pkg/atomicwriter package was moved
to a separate module.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-10 12:13:01 +02:00
CrazyMax f8de3c3bdc
Merge pull request #3095 from thaJeztah/migrate_archive
migrate to github.com/moby/go-archive module
2025-04-10 10:31:55 +02:00
Sebastiaan van Stijn fa0c3e3786
migrate to github.com/moby/go-archive module
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-10 09:16:43 +02:00
Sebastiaan van Stijn df6d36af35
Dockerfile: update to registry v3.0.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-08 14:20:58 +02:00
1944 changed files with 73869 additions and 108094 deletions

.github/labeler.yml vendored

@ -48,11 +48,6 @@ area/cli:
- cmd/**
- commands/**
# Add 'area/controller' label to changes in the controller
area/controller:
- changed-files:
- any-glob-to-any-file: 'controller/**'
# Add 'area/docs' label to markdown files in the docs folder
area/docs:
- changed-files:


@ -36,8 +36,8 @@ env:
TEST_CACHE_SCOPE: "test"
TESTFLAGS: "-v --parallel=6 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
GO_VERSION: "1.23"
GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
GO_VERSION: "1.24"
GOTESTSUM_VERSION: "v1.12.0" # same as one in Dockerfile
jobs:
test-integration:
@ -54,9 +54,9 @@ jobs:
- master
- latest
- buildx-stable-1
- v0.20.2
- v0.19.0
- v0.18.2
- v0.23.1
- v0.22.0
- v0.21.1
worker:
- docker-container
- remote
@ -260,6 +260,9 @@ jobs:
- freebsd
- netbsd
- openbsd
env:
# https://github.com/hashicorp/vagrant/issues/13652
VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT: 1
steps:
-
name: Prepare
@ -420,6 +423,9 @@ jobs:
haskell: true
large-packages: true
swap-storage: true
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
@ -453,9 +459,10 @@ jobs:
name: Build and push image
uses: docker/bake-action@v6
with:
source: .
files: |
./docker-bake.hcl
cwd://${{ steps.meta.outputs.bake-file }}
${{ steps.meta.outputs.bake-file }}
targets: image-cross
push: ${{ github.event_name != 'pull_request' }}
sbom: true
@ -528,7 +535,7 @@ jobs:
-
name: GitHub Release
if: startsWith(github.ref, 'refs/tags/v')
uses: softprops/action-gh-release@c95fe1489396fe8a9eb87c0abf8aa5b2ef267fda # v2.2.1
uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8 # v2.3.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:


@ -17,7 +17,7 @@ on:
pull_request:
env:
GO_VERSION: "1.23"
GO_VERSION: "1.24"
jobs:
codeql:

.github/workflows/pr-assign-author.yml vendored Normal file

@ -0,0 +1,17 @@
name: pr-assign-author
permissions:
contents: read
on:
pull_request_target:
types:
- opened
- reopened
jobs:
run:
uses: crazy-max/.github/.github/workflows/pr-assign-author.yml@c27924b5b93ccfe6dcc0d7b22e779ef3c05f9a92
permissions:
contents: read
pull-requests: write


@ -1,17 +1,16 @@
version: "2"
run:
timeout: 30m
modules-download-mode: vendor
linters:
default: none
enable:
- bodyclose
- depguard
- forbidigo
- gocritic
- gofmt
- goimports
- gosec
- gosimple
- govet
- ineffassign
- makezero
@ -21,99 +20,101 @@ linters:
- revive
- staticcheck
- testifylint
- typecheck
- unused
- whitespace
disable-all: true
linters-settings:
gocritic:
disabled-checks:
- "ifElseChain"
- "assignOp"
- "appendAssign"
- "singleCaseSwitch"
- "exitAfterDefer" # FIXME
importas:
alias:
# Enforce alias to prevent it accidentally being used instead of
# buildkit errdefs package (or vice-versa).
- pkg: "github.com/containerd/errdefs"
alias: "cerrdefs"
# Use a consistent alias to prevent confusion with "github.com/moby/buildkit/client"
- pkg: "github.com/docker/docker/client"
alias: "dockerclient"
- pkg: "github.com/opencontainers/image-spec/specs-go/v1"
alias: "ocispecs"
- pkg: "github.com/opencontainers/go-digest"
alias: "digest"
govet:
enable:
- nilness
- unusedwrite
# enable-all: true
# disable:
# - fieldalignment
# - shadow
depguard:
settings:
depguard:
rules:
main:
deny:
- pkg: "github.com/containerd/containerd/errdefs"
desc: The containerd errdefs package was migrated to a separate module. Use github.com/containerd/errdefs instead.
- pkg: "github.com/containerd/containerd/log"
desc: The containerd log package was migrated to a separate module. Use github.com/containerd/log instead.
- pkg: "github.com/containerd/containerd/platforms"
desc: The containerd platforms package was migrated to a separate module. Use github.com/containerd/platforms instead.
- pkg: "io/ioutil"
desc: The io/ioutil package has been deprecated.
forbidigo:
forbid:
- pattern: ^context\.WithCancel(# use context\.WithCancelCause instead)?$
- pattern: ^context\.WithDeadline(# use context\.WithDeadline instead)?$
- pattern: ^context\.WithTimeout(# use context\.WithTimeoutCause instead)?$
- pattern: ^ctx\.Err(# use context\.Cause instead)?$
- pattern: ^fmt\.Errorf(# use errors\.Errorf instead)?$
- pattern: ^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$
gocritic:
disabled-checks:
- "ifElseChain"
- "assignOp"
- "appendAssign"
- "singleCaseSwitch"
- "exitAfterDefer" # FIXME
gosec:
excludes:
- G204
- G402
- G115
config:
G306: "0644"
govet:
enable:
- nilness
- unusedwrite
importas:
alias:
- pkg: "github.com/containerd/errdefs"
alias: "cerrdefs"
- pkg: "github.com/docker/docker/client"
alias: "dockerclient"
- pkg: "github.com/opencontainers/image-spec/specs-go/v1"
alias: "ocispecs"
- pkg: "github.com/opencontainers/go-digest"
alias: "digest"
testifylint:
disable:
- empty
- bool-compare
- len
- negative-positive
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
main:
deny:
- pkg: "github.com/containerd/containerd/errdefs"
desc: The containerd errdefs package was migrated to a separate module. Use github.com/containerd/errdefs instead.
- pkg: "github.com/containerd/containerd/log"
desc: The containerd log package was migrated to a separate module. Use github.com/containerd/log instead.
- pkg: "github.com/containerd/containerd/platforms"
desc: The containerd platforms package was migrated to a separate module. Use github.com/containerd/platforms instead.
- pkg: "io/ioutil"
desc: The io/ioutil package has been deprecated.
forbidigo:
forbid:
- '^context\.WithCancel(# use context\.WithCancelCause instead)?$'
- '^context\.WithDeadline(# use context\.WithDeadline instead)?$'
- '^context\.WithTimeout(# use context\.WithTimeoutCause instead)?$'
- '^ctx\.Err(# use context\.Cause instead)?$'
- '^fmt\.Errorf(# use errors\.Errorf instead)?$'
- '^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$'
gosec:
excludes:
- G204 # Audit use of command execution
- G402 # TLS MinVersion too low
- G115 # integer overflow conversion (TODO: verify these)
config:
G306: "0644"
testifylint:
disable:
# disable rules that reduce the test condition
- "empty"
- "bool-compare"
- "len"
- "negative-positive"
- linters:
- revive
text: stutters
- linters:
- revive
text: empty-block
- linters:
- revive
text: superfluous-else
- linters:
- revive
text: unused-parameter
- linters:
- revive
text: redefines-builtin-id
- linters:
- revive
text: if-return
paths:
- .*\.pb\.go$
formatters:
enable:
- gofmt
- goimports
exclusions:
generated: lax
paths:
- .*\.pb\.go$
issues:
exclude-files:
- ".*\\.pb\\.go$"
exclude-rules:
- linters:
- revive
text: "stutters"
- linters:
- revive
text: "empty-block"
- linters:
- revive
text: "superfluous-else"
- linters:
- revive
text: "unused-parameter"
- linters:
- revive
text: "redefines-builtin-id"
- linters:
- revive
text: "if-return"
# show all
max-issues-per-linter: 0
max-same-issues: 0


@ -1,17 +1,17 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.23
ARG GO_VERSION=1.24
ARG ALPINE_VERSION=3.21
ARG XX_VERSION=1.6.1
# for testing
ARG DOCKER_VERSION=28.0.0
ARG DOCKER_VERSION=28.3.0
ARG DOCKER_VERSION_ALT_27=27.5.1
ARG DOCKER_VERSION_ALT_26=26.1.3
ARG DOCKER_CLI_VERSION=${DOCKER_VERSION}
ARG GOTESTSUM_VERSION=v1.12.0
ARG REGISTRY_VERSION=2.8.3
ARG BUILDKIT_VERSION=v0.20.2
ARG REGISTRY_VERSION=3.0.0
ARG BUILDKIT_VERSION=v0.23.1
ARG UNDOCK_VERSION=0.9.0
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx


@ -8,7 +8,7 @@ endif
export BUILDX_CMD ?= docker buildx
BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors validate-generated-files
BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors
.PHONY: all
all: binaries
@ -35,7 +35,7 @@ release:
./hack/release
.PHONY: validate-all
validate-all: lint test validate-vendor validate-docs validate-generated-files
validate-all: lint test validate-vendor validate-docs
.PHONY: test
test:
@ -68,7 +68,3 @@ authors:
.PHONY: mod-outdated
mod-outdated:
$(BUILDX_CMD) bake mod-outdated
.PHONY: generated-files
generated-files:
$(BUILDX_CMD) bake update-generated-files


@ -79,7 +79,6 @@ Area or component of the project affected. Please note that the table below may
| `area/checks` | Any | `checks` |
| `area/ci` | Any | Project CI |
| `area/cli` | Any | `cli` |
| `area/controller` | Any | `controller` |
| `area/debug` | Any | `debug` |
| `area/dependencies` | Any | Project dependencies |
| `area/dockerfile` | Any | `dockerfile` |


@ -6,7 +6,7 @@
[![Go Report Card](https://goreportcard.com/badge/github.com/docker/buildx?style=flat-square)](https://goreportcard.com/report/github.com/docker/buildx)
[![codecov](https://img.shields.io/codecov/c/github/docker/buildx?logo=codecov&style=flat-square)](https://codecov.io/gh/docker/buildx)
`buildx` is a Docker CLI plugin for extended build capabilities with
Buildx is a Docker CLI plugin for extended build capabilities with
[BuildKit](https://github.com/moby/buildkit).
Key features:
@ -16,7 +16,7 @@ Key features:
- Multiple builder instance support
- Multi-node builds for cross-platform images
- Compose build support
- High-level build constructs (`bake`)
- High-level build options (`bake`)
- In-container driver support (both Docker and Kubernetes)
# Table of Contents
@ -26,10 +26,9 @@ Key features:
- [Linux packages](#linux-packages)
- [Manual download](#manual-download)
- [Dockerfile](#dockerfile)
- [Set buildx as the default builder](#set-buildx-as-the-default-builder)
- [Building](#building)
- [Getting started](#getting-started)
- [Building with buildx](#building-with-buildx)
- [Building with Buildx](#building-with-buildx)
- [Working with builder instances](#working-with-builder-instances)
- [Building multi-platform images](#building-multi-platform-images)
- [Reference](docs/reference/buildx.md)
@ -49,12 +48,9 @@ Key features:
- [`buildx version`](docs/reference/buildx_version.md)
- [Contributing](#contributing)
For more information on how to use Buildx, see
[Docker Build docs](https://docs.docker.com/build/).
# Installing
Using `buildx` with Docker requires Docker engine 19.03 or newer.
Using Buildx with Docker requires Docker engine 19.03 or newer.
> [!WARNING]
> Using an incompatible version of Docker may result in unexpected behavior,
@ -75,9 +71,9 @@ Docker Engine package repositories contain Docker Buildx packages when installed
## Manual download
> [!IMPORTANT]
> This section is for unattended installation of the buildx component. These
> This section is for unattended installation of the Buildx component. These
> instructions are mostly suitable for testing purposes. We do not recommend
> installing buildx using manual download in production environments as they
> installing Buildx using manual download in production environments as they
> will not be updated automatically with security updates.
>
> On Windows and macOS, we recommend that you install [Docker Desktop](https://docs.docker.com/desktop/)
@ -87,11 +83,11 @@ You can also download the latest binary from the [GitHub releases page](https://
Rename the relevant binary and copy it to the destination matching your OS:
| OS | Binary name | Destination folder |
| -------- | -------------------- | -----------------------------------------|
| Linux | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| macOS | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| Windows | `docker-buildx.exe` | `%USERPROFILE%\.docker\cli-plugins` |
| OS | Binary name | Destination folder |
|---------|---------------------|-------------------------------------|
| Linux | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| macOS | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| Windows | `docker-buildx.exe` | `%USERPROFILE%\.docker\cli-plugins` |
Or copy it into one of these folders for installing it system-wide.
@ -123,14 +119,6 @@ COPY --from=docker/buildx-bin /buildx /usr/libexec/docker/cli-plugins/docker-bui
RUN docker buildx version
```
# Set buildx as the default builder
Running the command [`docker buildx install`](docs/reference/buildx_install.md)
sets up docker builder command as an alias to `docker buildx build`. This
results in the ability to have `docker build` use the current buildx builder.
To remove this alias, run [`docker buildx uninstall`](docs/reference/buildx_uninstall.md).
# Building
```console
@ -151,17 +139,17 @@ $ make install
# Getting started
## Building with buildx
## Building with Buildx
Buildx is a Docker CLI plugin that extends the `docker build` command with the
full support of the features provided by [Moby BuildKit](https://github.com/moby/buildkit)
full support of the features provided by [Moby BuildKit](https://docs.docker.com/build/buildkit/)
builder toolkit. It provides the same user experience as `docker build` with
many new features like creating scoped builder instances and building against
multiple nodes concurrently.
After installation, buildx can be accessed through the `docker buildx` command
with Docker 19.03. `docker buildx build` is the command for starting a new
build. With Docker versions older than 19.03 buildx binary can be called
After installation, Buildx can be accessed through the `docker buildx` command
with Docker 19.03. `docker buildx build` is the command for starting a new
build. With Docker versions older than 19.03 Buildx binary can be called
directly to access the `docker buildx` subcommands.
```console
@ -180,20 +168,25 @@ are not yet available for regular `docker build` like building manifest lists,
distributed caching, and exporting build results to OCI image tarballs.
Buildx is flexible and can be run in different configurations that are exposed
through various "drivers". Each driver defines how and where a build should
run, and have different feature sets.
through various [drivers](https://docs.docker.com/build/builders/drivers/).
Each driver defines how and where a build should run, and has a different
feature set.
We currently support the following drivers:
- The `docker` driver ([guide](https://docs.docker.com/build/drivers/docker/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](https://docs.docker.com/build/drivers/docker-container/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](https://docs.docker.com/build/drivers/kubernetes/), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](https://docs.docker.com/build/drivers/remote/))
- The `docker` driver ([manual](https://docs.docker.com/build/builders/drivers/docker/))
- The `docker-container` driver ([manual](https://docs.docker.com/build/builders/drivers/docker-container/))
- The `kubernetes` driver ([manual](https://docs.docker.com/build/drivers/kubernetes/))
- The `remote` driver ([manual](https://docs.docker.com/build/builders/drivers/remote/))
For more information on drivers, see the [drivers guide](https://docs.docker.com/build/drivers/).
For more information, see the [builders](https://docs.docker.com/build/builders/)
and [drivers](https://docs.docker.com/build/builders/drivers/) guide.
> [!NOTE]
> For more information, see [Docker Build docs](https://docs.docker.com/build/concepts/overview/).
## Working with builder instances
By default, buildx will initially use the `docker` driver if it is supported,
By default, Buildx will initially use the `docker` driver if it is supported,
providing a very similar user experience to the native `docker build`. Note that
you must use a local shared daemon to build your applications.
@ -212,7 +205,7 @@ while creating the new builder. After creating a new instance, you can manage it
lifecycle using the [`docker buildx inspect`](docs/reference/buildx_inspect.md),
[`docker buildx stop`](docs/reference/buildx_stop.md), and
[`docker buildx rm`](docs/reference/buildx_rm.md) commands. To list all
available builders, use [`buildx ls`](docs/reference/buildx_ls.md). After
available builders, use [`docker buildx ls`](docs/reference/buildx_ls.md). After
creating a new builder you can also append new nodes to it.
To switch between different builders, use [`docker buildx use <name>`](docs/reference/buildx_use.md).
@ -223,9 +216,12 @@ Docker also features a [`docker context`](https://docs.docker.com/engine/referen
command that can be used for giving names for remote Docker API endpoints.
Buildx integrates with `docker context` so that all of your contexts
automatically get a default builder instance. While creating a new builder
instance or when adding a node to it you can also set the context name as the
instance or when adding a node to it, you can also set the context name as the
target.
> [!NOTE]
> For more information, see [Builders docs](https://docs.docker.com/build/builders/).
## Building multi-platform images
BuildKit is designed to work well for building for multiple platforms and not
@ -239,8 +235,8 @@ platform for the build output, (for example, `linux/amd64`, `linux/arm64`, or
When the current builder instance is backed by the `docker-container` or
`kubernetes` driver, you can specify multiple platforms together. In this case,
it builds a manifest list which contains images for all specified architectures.
When you use this image in [`docker run`](https://docs.docker.com/engine/reference/commandline/run/)
or [`docker service`](https://docs.docker.com/engine/reference/commandline/service/),
When you use this image in [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/)
or [`docker service`](https://docs.docker.com/reference/cli/docker/service/),
Docker picks the correct image based on the node's platform.
You can build multi-platform images using three different strategies that are
@ -304,6 +300,9 @@ COPY --from=build /log /log
You can also use [`tonistiigi/xx`](https://github.com/tonistiigi/xx) Dockerfile
cross-compilation helpers for more advanced use-cases.
> [!NOTE]
> For more information, see [Multi-platform builds docs](https://docs.docker.com/build/building/multi-platform/).
## High-level build options
See [High-level builds with Bake](https://docs.docker.com/build/bake/) for more details.


@ -4,6 +4,7 @@ import (
"context"
"encoding"
"encoding/json"
"fmt"
"io"
"maps"
"os"
@ -19,7 +20,6 @@ import (
composecli "github.com/compose-spec/compose-go/v2/cli"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
@ -483,8 +483,7 @@ func (c Config) expandTargets(pattern string) ([]string, error) {
func (c Config) loadLinks(name string, t *Target, m map[string]*Target, o map[string]map[string]Override, visited []string, ent *EntitlementConf) error {
visited = append(visited, name)
for _, v := range t.Contexts {
if strings.HasPrefix(v, "target:") {
target := strings.TrimPrefix(v, "target:")
if target, ok := strings.CutPrefix(v, "target:"); ok {
if target == name {
return errors.Errorf("target %s cannot link to itself", target)
}
@ -728,6 +727,7 @@ type Target struct {
Ulimits []string `json:"ulimits,omitempty" hcl:"ulimits,optional" cty:"ulimits"`
Call *string `json:"call,omitempty" hcl:"call,optional" cty:"call"`
Entitlements []string `json:"entitlements,omitempty" hcl:"entitlements,optional" cty:"entitlements"`
ExtraHosts map[string]*string `json:"extra-hosts,omitempty" hcl:"extra-hosts,optional" cty:"extra-hosts"`
// IMPORTANT: if you add more fields here, do not forget to update newOverrides/AddOverrides and docs/bake-reference.md.
// linked is a private field to mark a target used as a linked one
@ -766,6 +766,14 @@ func (t *Target) MarshalJSON() ([]byte, error) {
}
}
tgt.ExtraHosts = maps.Clone(t.ExtraHosts)
for k, v := range t.ExtraHosts {
if v != nil {
escaped := esc(*v)
tgt.ExtraHosts[k] = &escaped
}
}
return json.Marshal(tgt)
}
@ -896,6 +904,15 @@ func (t *Target) Merge(t2 *Target) {
if t2.Entitlements != nil { // merge
t.Entitlements = append(t.Entitlements, t2.Entitlements...)
}
for k, v := range t2.ExtraHosts {
if v == nil {
continue
}
if t.ExtraHosts == nil {
t.ExtraHosts = map[string]*string{}
}
t.ExtraHosts[k] = v
}
t.Inherits = append(t.Inherits, t2.Inherits...)
}
@ -1084,6 +1101,14 @@ func (t *Target) AddOverrides(overrides map[string]Override, ent *EntitlementCon
return errors.Errorf("invalid value %s for boolean key load", value)
}
t.Outputs = setLoadOverride(t.Outputs, load)
case "extra-hosts":
if len(keys) != 2 {
return errors.Errorf("invalid format for extra-hosts, expecting extra-hosts.<hostname>=<ip>")
}
if t.ExtraHosts == nil {
t.ExtraHosts = map[string]*string{}
}
t.ExtraHosts[keys[1]] = &value
default:
return errors.Errorf("unknown key: %s", keys[0])
}
@ -1275,8 +1300,8 @@ func collectLocalPaths(t build.Inputs) []string {
if v, ok := isLocalPath(t.DockerfilePath); ok {
out = append(out, v)
}
} else if strings.HasPrefix(t.ContextPath, "cwd://") {
out = append(out, strings.TrimPrefix(t.ContextPath, "cwd://"))
} else if v, ok := strings.CutPrefix(t.ContextPath, "cwd://"); ok {
out = append(out, v)
}
for _, v := range t.NamedContexts {
if v.State != nil {
@ -1328,11 +1353,11 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
bi.DockerfileInline = *t.DockerfileInline
}
updateContext(&bi, inp)
if strings.HasPrefix(bi.DockerfilePath, "cwd://") {
if v, ok := strings.CutPrefix(bi.DockerfilePath, "cwd://"); ok {
// If Dockerfile is local for a remote invocation, we first check if
// it's not outside the working directory and then resolve it to an
// absolute path.
bi.DockerfilePath = path.Clean(strings.TrimPrefix(bi.DockerfilePath, "cwd://"))
bi.DockerfilePath = path.Clean(v)
var err error
bi.DockerfilePath, err = filepath.Abs(bi.DockerfilePath)
if err != nil {
@ -1357,15 +1382,15 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
return nil, errors.Errorf("reading a dockerfile for a remote build invocation is currently not supported")
}
}
if strings.HasPrefix(bi.ContextPath, "cwd://") {
bi.ContextPath = path.Clean(strings.TrimPrefix(bi.ContextPath, "cwd://"))
if v, ok := strings.CutPrefix(bi.ContextPath, "cwd://"); ok {
bi.ContextPath = path.Clean(v)
}
if !build.IsRemoteURL(bi.ContextPath) && bi.ContextState == nil && !path.IsAbs(bi.DockerfilePath) {
bi.DockerfilePath = path.Join(bi.ContextPath, bi.DockerfilePath)
if !build.IsRemoteURL(bi.ContextPath) && bi.ContextState == nil && !filepath.IsAbs(bi.DockerfilePath) {
bi.DockerfilePath = filepath.Join(bi.ContextPath, bi.DockerfilePath)
}
for k, v := range bi.NamedContexts {
if strings.HasPrefix(v.Path, "cwd://") {
bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(strings.TrimPrefix(v.Path, "cwd://"))}
if v, ok := strings.CutPrefix(v.Path, "cwd://"); ok {
bi.NamedContexts[k] = build.NamedContext{Path: path.Clean(v)}
}
}
@ -1406,6 +1431,14 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
}
}
var extraHosts []string
for k, v := range t.ExtraHosts {
if v == nil {
continue
}
extraHosts = append(extraHosts, fmt.Sprintf("%s=%s", k, *v))
}
bo := &build.Options{
Inputs: bi,
Tags: t.Tags,
@ -1417,6 +1450,7 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
NetworkMode: networkMode,
Linked: t.linked,
ShmSize: *shmSize,
ExtraHosts: extraHosts,
}
platforms, err := platformutil.Parse(t.Platforms)
@ -1440,20 +1474,19 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
})
}
}
secrets = secrets.Normalize()
bo.SecretSpecs = secrets.ToPB()
secretAttachment, err := controllerapi.CreateSecrets(bo.SecretSpecs)
bo.SecretSpecs = secrets.Normalize()
secretAttachment, err := build.CreateSecrets(bo.SecretSpecs)
if err != nil {
return nil, err
}
bo.Session = append(bo.Session, secretAttachment)
bo.SSHSpecs = t.SSH.ToPB()
bo.SSHSpecs = t.SSH
if len(bo.SSHSpecs) == 0 && buildflags.IsGitSSH(bi.ContextPath) || (inp != nil && buildflags.IsGitSSH(inp.URL)) {
bo.SSHSpecs = []*controllerapi.SSH{{ID: "default"}}
bo.SSHSpecs = []*buildflags.SSH{{ID: "default"}}
}
sshAttachment, err := controllerapi.CreateSSH(bo.SSHSpecs)
sshAttachment, err := build.CreateSSH(bo.SSHSpecs)
if err != nil {
return nil, err
}
@ -1470,13 +1503,13 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
}
if t.CacheFrom != nil {
bo.CacheFrom = controllerapi.CreateCaches(t.CacheFrom.ToPB())
bo.CacheFrom = build.CreateCaches(t.CacheFrom)
}
if t.CacheTo != nil {
bo.CacheTo = controllerapi.CreateCaches(t.CacheTo.ToPB())
bo.CacheTo = build.CreateCaches(t.CacheTo)
}
bo.Exports, bo.ExportsLocalPathsTemporary, err = controllerapi.CreateExports(t.Outputs.ToPB())
bo.Exports, bo.ExportsLocalPathsTemporary, err = build.CreateExports(t.Outputs)
if err != nil {
return nil, err
}
@ -1491,7 +1524,7 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
}
}
bo.Attests = controllerapi.CreateAttestations(t.Attest.ToPB())
bo.Attests = t.Attest.ToMap()
bo.SourcePolicy, err = build.ReadSourcePolicy()
if err != nil {

View File

@ -27,6 +27,9 @@ target "webDEP" {
no-cache = true
shm-size = "128m"
ulimits = ["nofile=1024:1024"]
extra-hosts = {
my_hostname = "8.8.8.8"
}
}
target "webapp" {
@ -64,6 +67,7 @@ target "webapp" {
require.Equal(t, true, *m["webapp"].NoCache)
require.Equal(t, "128m", *m["webapp"].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, m["webapp"].Ulimits)
require.Equal(t, map[string]*string{"my_hostname": ptrstr("8.8.8.8")}, m["webapp"].ExtraHosts)
require.Nil(t, m["webapp"].Pull)
require.Equal(t, 1, len(g))
@ -692,7 +696,7 @@ func TestHCLContextCwdPrefix(t *testing.T) {
require.Contains(t, m, "app")
assert.Equal(t, "test", *m["app"].Dockerfile)
assert.Equal(t, "foo", *m["app"].Context)
assert.Equal(t, "foo/test", bo["app"].Inputs.DockerfilePath)
assert.Equal(t, filepath.Clean("foo/test"), bo["app"].Inputs.DockerfilePath)
assert.Equal(t, "foo", bo["app"].Inputs.ContextPath)
}
@ -1381,7 +1385,6 @@ target "d" {
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
m, g, err := ReadTargets(ctx, []File{f}, []string{"d"}, tt.overrides, nil, &EntitlementConf{})
require.NoError(t, err)
@ -1454,7 +1457,6 @@ group "default" {
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
m, g, err := ReadTargets(ctx, []File{f}, []string{"default"}, tt.overrides, nil, &EntitlementConf{})
require.NoError(t, err)
@ -1509,7 +1511,6 @@ func TestTargetName(t *testing.T) {
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.target, func(t *testing.T) {
_, _, err := ReadTargets(ctx, []File{{
Name: "docker-bake.hcl",
@ -1600,7 +1601,6 @@ target "f" {
},
}
for _, tt := range cases {
tt := tt
t.Run(strings.Join(tt.names, "+"), func(t *testing.T) {
m, g, err := ReadTargets(ctx, []File{f}, tt.names, nil, nil, &EntitlementConf{})
require.NoError(t, err)

View File

@ -62,7 +62,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
g := &Group{Name: "default"}
for _, s := range cfg.Services {
s := s
if s.Build == nil {
continue
}
@ -123,6 +122,14 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
}
}
extraHosts := map[string]*string{}
if s.Build.ExtraHosts != nil {
for k, v := range s.Build.ExtraHosts {
vv := strings.Join(v, ",")
extraHosts[k] = &vv
}
}
var ssh []*buildflags.SSH
for _, bkey := range s.Build.SSH {
sshkey := composeToBuildkitSSH(bkey)
@ -144,7 +151,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
// compose does not support nil values for labels
labels := map[string]*string{}
for k, v := range s.Build.Labels {
v := v
labels[k] = &v
}
@ -182,6 +188,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
Secrets: secrets,
ShmSize: shmSize,
Ulimits: ulimits,
ExtraHosts: extraHosts,
}
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
return nil, err

View File

@ -32,6 +32,10 @@ services:
- type=local,src=path/to/cache
cache_to:
- type=local,dest=path/to/cache
extra_hosts:
- "somehost:162.242.195.82"
- "somehost:162.242.195.83"
- "myhostv6:::1"
ssh:
- key=/path/to/key
- default
@ -76,6 +80,7 @@ secrets:
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, []string{"type=local,src=path/to/cache"}, stringify(c.Targets[1].CacheFrom))
require.Equal(t, []string{"type=local,dest=path/to/cache"}, stringify(c.Targets[1].CacheTo))
require.Equal(t, map[string]*string{"myhostv6": ptrstr("::1"), "somehost": ptrstr("162.242.195.82,162.242.195.83")}, c.Targets[1].ExtraHosts)
require.Equal(t, "none", *c.Targets[1].NetworkMode)
require.Equal(t, []string{"default", "key=/path/to/key"}, stringify(c.Targets[1].SSH))
require.Equal(t, []string{
@ -518,7 +523,6 @@ func TestServiceName(t *testing.T) {
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.svc, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: []byte(`
services:
@ -589,7 +593,6 @@ services:
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: tt.dt}}, nil)
if tt.wantErr {
@ -665,7 +668,6 @@ target "default" {
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
isCompose, err := validateComposeFile(tt.dt, tt.fn)
assert.Equal(t, tt.isCompose, isCompose)

View File

@ -8,7 +8,7 @@ import (
"testing"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/util/entitlements"
@ -264,7 +264,7 @@ func TestValidateEntitlements(t *testing.T) {
{
name: "SSHMissing",
opt: build.Options{
SSHSpecs: []*pb.SSH{
SSHSpecs: []*buildflags.SSH{
{
ID: "test",
},
@ -296,7 +296,7 @@ func TestValidateEntitlements(t *testing.T) {
{
name: "SecretFromSubFile",
opt: build.Options{
SecretSpecs: []*pb.Secret{
SecretSpecs: []*buildflags.Secret{
{
FilePath: filepath.Join(dir1, "subfile"),
},
@ -309,7 +309,7 @@ func TestValidateEntitlements(t *testing.T) {
{
name: "SecretFromEscapeLink",
opt: build.Options{
SecretSpecs: []*pb.Secret{
SecretSpecs: []*buildflags.Secret{
{
FilePath: escapeLink,
},
@ -325,7 +325,7 @@ func TestValidateEntitlements(t *testing.T) {
{
name: "SecretFromEscapeLinkAllowRoot",
opt: build.Options{
SecretSpecs: []*pb.Secret{
SecretSpecs: []*buildflags.Secret{
{
FilePath: escapeLink,
},
@ -352,7 +352,7 @@ func TestValidateEntitlements(t *testing.T) {
{
name: "SecretFromEscapeLinkAllowAny",
opt: build.Options{
SecretSpecs: []*pb.Secret{
SecretSpecs: []*buildflags.Secret{
{
FilePath: escapeLink,
},

View File

@ -1,8 +1,10 @@
package bake
import (
"fmt"
"reflect"
"regexp"
"runtime"
"testing"
hcl "github.com/hashicorp/hcl/v2"
@ -421,6 +423,63 @@ func TestHCLNullVariables(t *testing.T) {
require.Equal(t, ptrstr("bar"), c.Targets[0].Args["foo"])
}
func TestHCLTypedNullVariables(t *testing.T) {
types := []string{
"any",
"string", "number", "bool",
"list(string)", "set(string)", "map(string)",
"tuple([string])", "object({val: string})",
}
for _, varType := range types {
tName := fmt.Sprintf("variable typed %q with null default remains null", varType)
t.Run(tName, func(t *testing.T) {
dt := fmt.Sprintf(`
variable "FOO" {
type = %s
default = null
}
target "default" {
args = {
foo = equal(FOO, null)
}
}`, varType)
c, err := ParseFile([]byte(dt), "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "true", *c.Targets[0].Args["foo"])
})
}
}
func TestHCLTypedValuelessVariables(t *testing.T) {
types := []string{
"any",
"string", "number", "bool",
"list(string)", "set(string)", "map(string)",
"tuple([string])", "object({val: string})",
}
for _, varType := range types {
tName := fmt.Sprintf("variable typed %q with no default is null", varType)
t.Run(tName, func(t *testing.T) {
dt := fmt.Sprintf(`
variable "FOO" {
type = %s
}
target "default" {
args = {
foo = equal(FOO, null)
}
}`, varType)
c, err := ParseFile([]byte(dt), "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "true", *c.Targets[0].Args["foo"])
})
}
}
func TestJSONNullVariables(t *testing.T) {
dt := []byte(`{
"variable": {
@ -1563,6 +1622,20 @@ target "two" {
require.Equal(t, map[string]*string{"b": ptrstr("pre-jkl")}, c.Targets[1].Args)
}
func TestEmptyVariable(t *testing.T) {
dt := []byte(`
variable "FOO" {}
target "default" {
args = {
foo = equal(FOO, "")
}
}`)
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "true", *c.Targets[0].Args["foo"])
}
func TestEmptyVariableJSON(t *testing.T) {
dt := []byte(`{
"variable": {
@ -1645,6 +1718,669 @@ func TestHCLIndexOfFunc(t *testing.T) {
require.Empty(t, c.Targets[1].Tags[1])
}
func TestVarTypingSpec(t *testing.T) {
templ := `
variable "FOO" {
type = %s
}
target "default" {
}`
// not exhaustive, but the common ones
for _, s := range []string{
"bool", "number", "string", "any",
"list(string)", "set(string)", "tuple([string, number])",
} {
dt := fmt.Sprintf(templ, s)
_, err := ParseFile([]byte(dt), "docker-bake.hcl")
require.NoError(t, err)
}
for _, s := range []string{
"boolean", // no synonyms/aliases
"BOOL", // case matters
`lower("bool")`, // must be literals
} {
dt := fmt.Sprintf(templ, s)
_, err := ParseFile([]byte(dt), "docker-bake.hcl")
require.ErrorContains(t, err, "not a valid type")
}
}
func TestDefaultVarTypeEnforcement(t *testing.T) {
// To help prove a given default doesn't just pass the type check, but *is* that type,
// we use argValue to provide an expression that would work only on that type.
tests := []struct {
name string
varType string
varDefault any
argValue string
wantValue string
wantError bool
}{
{
name: "number (happy)",
varType: "number",
varDefault: 99,
argValue: "FOO + 1",
wantValue: "100",
},
{
name: "numeric string compatible with number",
varType: "number",
varDefault: `"99"`,
argValue: "FOO + 1",
wantValue: "100",
},
{
name: "boolean (happy)",
varType: "bool",
varDefault: true,
argValue: "and(FOO, true)",
wantValue: "true",
},
{
name: "numeric boolean compatible with boolean",
varType: "bool",
varDefault: `"true"`,
argValue: "and(FOO, true)",
wantValue: "true",
},
// should be representative of flagrant primitive type mismatches; not worth listing all possibilities?
{
name: "non-numeric string default incompatible with number",
varType: "number",
varDefault: `"oops"`,
wantError: true,
},
{
name: "list of numbers (happy)",
varType: "list(number)",
varDefault: "[2,3]",
argValue: `join("", [for v in FOO: v + 1])`,
wantValue: "34",
},
{
name: "list of numbers with numeric strings okay",
varType: "list(number)",
varDefault: `["2","3"]`,
argValue: `join("", [for v in FOO: v + 1])`,
wantValue: "34",
},
// represent flagrant mismatches for list types
{
name: "non-numeric strings in numeric list rejected",
varType: "list(number)",
varDefault: `["oops"]`,
wantError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
argValue := tt.argValue
if argValue == "" {
argValue = "FOO"
}
dt := fmt.Sprintf(`
variable "FOO" {
type = %s
default = %v
}
target "default" {
args = {
foo = %s
}
}`, tt.varType, tt.varDefault, argValue)
c, err := ParseFile([]byte(dt), "docker-bake.hcl")
if tt.wantError {
require.ErrorContains(t, err, "invalid type")
} else {
require.NoError(t, err)
if tt.wantValue != "" {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, ptrstr(tt.wantValue), c.Targets[0].Args["foo"])
}
}
})
}
}
func TestDefaultVarTypeWithAttrValuesEnforcement(t *testing.T) {
tests := []struct {
name string
attrValue any
varType string
wantError bool
}{
{
name: "attribute literal which matches var type",
attrValue: `"hello"`,
varType: "string",
},
{
name: "attribute literal which coerces to var type",
attrValue: `"99"`,
varType: "number",
},
{
name: "attribute from function which coerces to var type",
attrValue: `substr("99 bottles", 0, 2)`,
varType: "number",
},
{
name: "attribute from function returning non-coercible value",
attrValue: `split(",", "1,2,3foo")`,
varType: "list(number)",
wantError: true,
},
{
name: "mismatch",
attrValue: 99,
varType: "bool",
wantError: true,
},
{
name: "attribute correctly typed via function",
attrValue: `split(",", "1,2,3")`,
varType: "list(number)",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
dt := fmt.Sprintf(`
BAR = %v
variable "FOO" {
type = %s
default = BAR
}
target "default" {
}`, tt.attrValue, tt.varType)
_, err := ParseFile([]byte(dt), "docker-bake.hcl")
if tt.wantError {
require.ErrorContains(t, err, "invalid type")
require.ErrorContains(t, err, "FOO default value")
} else {
require.NoError(t, err)
}
})
}
}
func TestTypedVarOverrides(t *testing.T) {
const unsuitableValueType = "Unsuitable value type"
const unsupportedType = "unsupported type"
const failedToParseElement = "failed to parse element"
tests := []struct {
name string
varType string
override string
argValue string
wantValue string
wantErrorMsg string
}{
{
name: "boolean",
varType: "bool",
override: "true",
wantValue: "true",
},
{
name: "number",
varType: "number",
override: "99",
wantValue: "99",
},
{
name: "unquoted string accepted",
varType: "string",
override: "hello",
wantValue: "hello",
},
// an environment variable with a quoted string would most likely be intended
// to be a string whose first and last characters are quotes
{
name: "quoted string keeps quotes in value",
varType: "string",
override: `"hello"`,
wantValue: `"hello"`,
},
{
name: "proper CSV list of strings",
varType: "list(string)",
override: "hi,there",
argValue: `join("-", FOO)`,
wantValue: "hi-there",
},
{
name: "CSV of unquoted strings okay",
varType: "list(string)",
override: `hi,there`,
argValue: `join("-", FOO)`,
wantValue: "hi-there",
},
{
name: "CSV list of numbers",
varType: "list(number)",
override: "3,1,4",
argValue: `join("-", [for v in FOO: v + 1])`,
wantValue: "4-2-5",
},
{
name: "CSV set of numbers",
varType: "set(number)",
override: "3,1,4",
// anecdotally sets are sorted but may not be guaranteed
argValue: `join("-", [for v in sort(FOO): v + 1])`,
wantValue: "2-4-5",
},
{
name: "CSV map of numbers",
varType: "map(number)",
override: "foo:1,bar:2",
argValue: `join("-", sort(values(FOO)))`,
wantValue: "1-2",
},
{
name: "CSV tuple",
varType: "tuple([number,string])",
override: `99,bottles`,
argValue: `format("%d %s", FOO[0], FOO[1])`,
wantValue: "99 bottles",
},
{
name: "CSV tuple elements with wrong type",
varType: "tuple([number,string])",
override: `99,100`,
wantErrorMsg: unsuitableValueType,
},
{
name: "invalid CSV value",
varType: "list(string)",
override: `"hello,world`,
wantErrorMsg: "from CSV",
},
{
name: "object not supported",
varType: "object({message: string})",
override: "does not matter",
wantErrorMsg: unsupportedType,
},
{
name: "list of non-primitives not supported",
varType: "list(list(number))",
override: "1,2",
wantErrorMsg: unsupportedType,
},
{
name: "set of non-primitives not supported",
varType: "set(set(number))",
override: "1,2",
wantErrorMsg: unsupportedType,
},
{
name: "tuple of non-primitives not supported",
varType: "tuple([list(number)])",
// Intentionally a different override than other similar tests; tuple is unique in that
// multiple types are involved and length matters. In the real world, it's probably more
// likely a user would accidentally omit or add an item than trying to use non-primitives,
// so the length check comes first.
override: "1",
wantErrorMsg: unsupportedType,
},
{
name: "map of non-primitives not supported",
varType: "map(list(number))",
override: "foo:1,2",
wantErrorMsg: unsupportedType,
},
{
name: "invalid map k/v parsing",
varType: "map(string)",
// TODO fragile; will fail in a different manner without first k/v pair
override: `a:b,foo:"bar`,
wantErrorMsg: "as CSV",
},
{
name: "list with invalidly parsed elements",
varType: "list(number)",
override: "1,1z",
wantErrorMsg: failedToParseElement,
},
{
name: "set with invalidly parsed elements",
varType: "set(number)",
override: "1,1z",
wantErrorMsg: failedToParseElement,
},
{
name: "tuple with invalidly parsed elements",
varType: "tuple([number])",
override: "1z",
wantErrorMsg: failedToParseElement,
},
{
name: "map with invalidly parsed elements",
varType: "map(number)",
override: "foo:1z",
wantErrorMsg: failedToParseElement,
},
{
name: "map with bad value format",
varType: "map(number)",
override: "foo:1:1",
wantErrorMsg: "expected one k/v pair",
},
{
name: "primitive with bad value format",
varType: "number",
override: "1z",
wantErrorMsg: "failed to parse",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
argValue := tt.argValue
if argValue == "" {
argValue = "FOO"
}
dt := fmt.Sprintf(`
variable "FOO" {
type = %s
}
target "default" {
args = {
foo = %s
}
}`, tt.varType, argValue)
t.Setenv("FOO", tt.override)
c, err := ParseFile([]byte(dt), "docker-bake.hcl")
if tt.wantErrorMsg != "" {
require.ErrorContains(t, err, tt.wantErrorMsg)
} else {
require.NoError(t, err)
if tt.wantValue != "" {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, tt.wantValue, *c.Targets[0].Args["foo"])
}
}
})
}
}
func TestTypedVarOverrides_JSON(t *testing.T) {
const unsuitableValueType = "Unsuitable value type"
tests := []struct {
name string
varType string
override string
argValue string
wantValue string
wantErrorMsg string
}{
{
name: "boolean",
varType: "bool",
override: "true",
wantValue: "true",
},
{
name: "number",
varType: "number",
override: "99",
wantValue: "99",
},
// no shortcuts in JSON mode
{
name: "unquoted string is error",
varType: "string",
override: "hello",
wantErrorMsg: "from JSON",
},
{
name: "string",
varType: "string",
override: `"hello"`,
wantValue: "hello",
},
{
name: "list of strings",
varType: "list(string)",
override: `["hi","there"]`,
argValue: `join("-", FOO)`,
wantValue: "hi-there",
},
{
name: "list of numbers",
varType: "list(number)",
override: "[3, 1, 4]",
argValue: `join("-", [for v in FOO: v + 1])`,
wantValue: "4-2-5",
},
{
name: "map of numbers",
varType: "map(number)",
override: `{"foo": 1, "bar": 2}`,
argValue: `join("-", sort(values(FOO)))`,
wantValue: "1-2",
},
{
name: "invalid JSON map of numbers",
varType: "map(number)",
override: `{"foo": "oops", "bar": 2}`,
// in lieu of something like ErrorMatches, this is the best single phrase
wantErrorMsg: "from JSON",
},
{
name: "tuple",
varType: "tuple([number,string])",
override: `[99, "bottles"]`,
argValue: `format("%d %s", FOO[0], FOO[1])`,
wantValue: "99 bottles",
},
{
name: "tuple elements with wrong type",
varType: "tuple([number,string])",
override: `[99, 100]`,
wantErrorMsg: unsuitableValueType,
},
{
name: "JSON object",
varType: `object({messages: list(string)})`,
override: `{"messages": ["hi", "there"]}`,
argValue: `join("-", FOO["messages"])`,
wantValue: "hi-there",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
argValue := tt.argValue
if argValue == "" {
argValue = "FOO"
}
dt := fmt.Sprintf(`
variable "FOO" {
type = %s
}
target "default" {
args = {
foo = %s
}
}`, tt.varType, argValue)
t.Setenv("FOO_JSON", tt.override)
c, err := ParseFile([]byte(dt), "docker-bake.hcl")
if tt.wantErrorMsg != "" {
require.ErrorContains(t, err, tt.wantErrorMsg)
} else {
require.NoError(t, err)
if tt.wantValue != "" {
require.Equal(t, 1, len(c.Targets))
require.Equal(t, tt.wantValue, *c.Targets[0].Args["foo"])
}
}
})
}
}
func TestJSONOverridePriority(t *testing.T) {
t.Run("JSON override ignored when same user var exists", func(t *testing.T) {
dt := []byte(`
variable "FOO" {
type = list(number)
}
variable "FOO_JSON" {
type = list(number)
}
target "default" {
args = {
foo = FOO
}
}`)
// env FOO_JSON is the CSV override of var FOO_JSON, not a JSON override of FOO
t.Setenv("FOO", "[1,2]")
t.Setenv("FOO_JSON", "[3,4]")
_, err := ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "failed to convert")
require.ErrorContains(t, err, "from CSV")
})
t.Run("JSON override ignored when same builtin var exists", func(t *testing.T) {
dt := []byte(`
variable "FOO" {
type = list(number)
}
target "default" {
args = {
foo = length(FOO)
}
}`)
t.Setenv("FOO", "1,2")
t.Setenv("FOO_JSON", "[3,4,5]")
c, _, err := ParseFiles(
[]File{{Name: "docker-bake.hcl", Data: dt}},
map[string]string{"FOO_JSON": "whatever"},
)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "2", *c.Targets[0].Args["foo"])
})
// this is implied/exercised in other tests, but repeated for completeness
t.Run("JSON override ignored if var is untyped", func(t *testing.T) {
dt := []byte(`
variable "FOO" {
default = [1, 2]
}
target "default" {
args = {
foo = length(FOO)
}
}`)
t.Setenv("FOO_JSON", "[3,4]")
_, err := ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "unsupported type")
})
t.Run("override-ish variable has regular CSV override", func(t *testing.T) {
dt := []byte(`
variable "FOO_JSON" {
type = list(number)
}
target "default" {
args = {
foo = length(FOO_JSON)
}
}`)
// despite the name, it's still CSV
t.Setenv("FOO_JSON", "10,11,12")
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "3", *c.Targets[0].Args["foo"])
t.Setenv("FOO_JSON", "[10,11,12]")
_, err = ParseFile(dt, "docker-bake.hcl")
require.ErrorContains(t, err, "from CSV")
})
t.Run("override-ish variable has own JSON override", func(t *testing.T) {
dt := []byte(`
variable "FOO_JSON" {
type = list(number)
}
target "default" {
args = {
foo = length(FOO_JSON)
}
}`)
t.Setenv("FOO_JSON_JSON", "[4,5,6]")
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "3", *c.Targets[0].Args["foo"])
})
t.Run("JSON override trumps CSV when no var name conflict", func(t *testing.T) {
dt := []byte(`
variable "FOO" {
type = list(number)
}
target "default" {
args = {
foo = length(FOO)
}
}`)
t.Setenv("FOO", "1,2")
t.Setenv("FOO_JSON", "[3,4,5]")
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "3", *c.Targets[0].Args["foo"])
})
t.Run("JSON override works with lowercase vars", func(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("Windows case-insensitivity")
}
dt := []byte(`
variable "foo" {
type = number
default = 101
}
target "default" {
args = {
bar = foo
}
}`)
// may seem reasonable, but not supported (on case-sensitive systems)
t.Setenv("foo_json", "9000")
c, err := ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "101", *c.Targets[0].Args["bar"])
t.Setenv("foo_JSON", "42")
c, err = ParseFile(dt, "docker-bake.hcl")
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, "42", *c.Targets[0].Args["bar"])
})
}
func ptrstr(s any) *string {
var n *string
if reflect.ValueOf(s).Kind() == reflect.String {

View File

@ -14,11 +14,16 @@ import (
"github.com/docker/buildx/bake/hclparser/gohcl"
"github.com/docker/buildx/util/userfunc"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/ext/typeexpr"
"github.com/pkg/errors"
"github.com/tonistiigi/go-csvvalue"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
ctyjson "github.com/zclconf/go-cty/cty/json"
)
const jsonEnvOverrideSuffix = "_JSON"
type Opt struct {
LookupVar func(string) (string, bool)
Vars map[string]string
@ -27,11 +32,15 @@ type Opt struct {
type variable struct {
Name string `json:"-" hcl:"name,label"`
Type hcl.Expression `json:"type,omitempty" hcl:"type,optional"`
Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
Description string `json:"description,omitempty" hcl:"description,optional"`
Validations []*variableValidation `json:"validation,omitempty" hcl:"validation,block"`
Body hcl.Body `json:"-" hcl:",body"`
Remain hcl.Body `json:"-" hcl:",remain"`
// the type described by Type if it was specified
constraint *cty.Type
}
type variableValidation struct {
@ -42,7 +51,7 @@ type variableValidation struct {
type functionDef struct {
Name string `json:"-" hcl:"name,label"`
Params *hcl.Attribute `json:"params,omitempty" hcl:"params"`
Variadic *hcl.Attribute `json:"variadic_param,omitempty" hcl:"variadic_params"`
Variadic *hcl.Attribute `json:"variadic_params,omitempty" hcl:"variadic_params"`
Result *hcl.Attribute `json:"result,omitempty" hcl:"result"`
}
@ -267,57 +276,92 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
}
}()
// built-in vars aren't intended to be overridden and are statically typed as strings;
// no sense sending them through type checks or waiting to return them
if val, ok := p.opt.Vars[name]; ok {
vv := cty.StringVal(val)
v = &vv
return
}
var diags hcl.Diagnostics
varType, typeSpecified := cty.DynamicPseudoType, false
def, ok := p.attrs[name]
if _, builtin := p.opt.Vars[name]; !ok && !builtin {
if !ok {
vr, ok := p.vars[name]
if !ok {
return errors.Wrapf(errUndefined{}, "variable %q does not exist", name)
}
def = vr.Default
ectx = p.ectx
varType, diags = typeConstraint(vr.Type)
if diags.HasErrors() {
return diags
}
typeSpecified = !varType.Equals(cty.DynamicPseudoType) || hcl.ExprAsKeyword(vr.Type) == "any"
if typeSpecified {
vr.constraint = &varType
}
}
if def == nil {
val, ok := p.opt.Vars[name]
if !ok {
val, _ = p.opt.LookupVar(name)
// Lack of specified value, when untyped is considered to have an empty string value.
// A typed variable with no value will result in (typed) nil.
if _, ok, _ := p.valueHasOverride(name, false); !ok && !typeSpecified {
vv := cty.StringVal("")
v = &vv
return
}
vv := cty.StringVal(val)
v = &vv
return
}
if diags := p.loadDeps(ectx, def.Expr, nil, true); diags.HasErrors() {
return diags
}
vv, diags := def.Expr.Value(ectx)
if diags.HasErrors() {
return diags
var vv cty.Value
if def != nil {
if diags := p.loadDeps(ectx, def.Expr, nil, true); diags.HasErrors() {
return diags
}
vv, diags = def.Expr.Value(ectx)
if diags.HasErrors() {
return diags
}
vv, err = convert.Convert(vv, varType)
if err != nil {
return errors.Wrapf(err, "invalid type %s for variable %s default value", varType.FriendlyName(), name)
}
}
envv, hasEnv, jsonEnv := p.valueHasOverride(name, typeSpecified)
_, isVar := p.vars[name]
if hasEnv && isVar {
	switch {
	case typeSpecified && jsonEnv:
		vv, err = ctyjson.Unmarshal([]byte(envv), varType)
		if err != nil {
			return errors.Wrapf(err, "failed to convert variable %s from JSON", name)
		}
	case supportedCSVType(varType): // typing explicitly specified for selected complex types
		vv, err = valueFromCSV(name, envv, varType)
		if err != nil {
			return errors.Wrapf(err, "failed to convert variable %s from CSV", name)
		}
	case typeSpecified && varType.IsPrimitiveType():
		vv, err = convertPrimitive(name, envv, varType)
		if err != nil {
			return err
		}
	case typeSpecified:
		// e.g., an 'object' not provided as JSON (which can't be expressed in the default CSV format)
		return errors.Errorf("unsupported type %s for variable %s", varType.FriendlyName(), name)
	case def == nil: // no default from which to infer typing
		vv = cty.StringVal(envv)
	case vv.Type().Equals(cty.DynamicPseudoType):
		vv = cty.StringVal(envv)
	case vv.Type().IsPrimitiveType():
		vv, err = convertPrimitive(name, envv, vv.Type())
		if err != nil {
			return err
		}
	default:
		// TODO: support lists with csv values
		return errors.Errorf("unsupported type %s for variable %s", vv.Type().FriendlyName(), name)
	}
}
@@ -325,6 +369,29 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
return nil
}
// valueHasOverride returns a possible override value if one was specified, and whether it should
// be treated as a JSON value.
//
// A plain/CSV override is the default; this consolidates the logic around how a JSON-specific override
// is specified and when it is honored in the presence of naming conflicts or ambiguity.
func (p *parser) valueHasOverride(name string, favorJSON bool) (string, bool, bool) {
jsonEnv := false
envv, hasEnv := p.opt.LookupVar(name)
// If no plain override exists (!hasEnv) or JSON overrides are explicitly favored (favorJSON),
// check for a JSON-specific override with the "_JSON" suffix.
if !hasEnv || favorJSON {
jsonVarName := name + jsonEnvOverrideSuffix
_, builtin := p.opt.Vars[jsonVarName]
if _, ok := p.vars[jsonVarName]; !ok && !builtin {
if j, ok := p.opt.LookupVar(jsonVarName); ok {
envv = j
hasEnv, jsonEnv = true, true
}
}
}
return envv, hasEnv, jsonEnv
}
// resolveBlock force evaluates a block, storing the result in the parser. If a
// target schema is provided, only the attributes and blocks present in the
// schema will be evaluated.
@@ -613,6 +680,7 @@ func (p *parser) validateVariables(vars map[string]*variable, ectx *hcl.EvalCont
type Variable struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
Type string `json:"type,omitempty"`
Value *string `json:"value,omitempty"`
}
@@ -718,7 +786,6 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
}
for _, a := range content.Attributes {
return nil, hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
@@ -743,13 +810,31 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
Name: p.vars[k].Name,
Description: p.vars[k].Description,
}
tc := p.vars[k].constraint
if tc != nil {
v.Type = tc.FriendlyNameForConstraint()
}
if vv := p.ectx.Variables[k]; !vv.IsNull() {
var s string
switch {
case tc != nil:
if bs, err := ctyjson.Marshal(vv, *tc); err == nil {
s = string(bs)
// untyped strings were always unquoted, so be consistent with typed strings as well
if tc.Equals(cty.String) {
s = strings.Trim(s, "\"")
}
}
case vv.Type().IsPrimitiveType():
// all primitives can convert to string, so error should never occur
if val, err := convert.Convert(vv, cty.String); err == nil {
s = val.AsString()
}
default:
// must be an (inferred) tuple or object
if bs, err := ctyjson.Marshal(vv, vv.Type()); err == nil {
s = string(bs)
}
}
v.Value = &s
}
@@ -771,7 +856,6 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
context = subject
} else {
for _, block := range blocks.Blocks {
if block.Type == "function" && len(block.Labels) == 1 && block.Labels[0] == k {
subject = block.LabelRanges[0].Ptr()
context = block.DefRange.Ptr()
@@ -840,7 +924,6 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
diags = hcl.Diagnostics{}
for _, b := range content.Blocks {
v := reflect.ValueOf(val)
err := p.resolveBlock(b, nil)
@@ -873,7 +956,7 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
}
}
if exists {
if m := oldValue.MethodByName("Merge"); m.IsValid() {
m.Call([]reflect.Value{vv})
} else {
v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
@@ -907,6 +990,141 @@ func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
}, nil
}
// typeConstraint wraps typeexpr.TypeConstraint to differentiate between errors in the
// specification and errors due to being cty.NullVal (not provided).
func typeConstraint(expr hcl.Expression) (cty.Type, hcl.Diagnostics) {
t, diag := typeexpr.TypeConstraint(expr)
if !diag.HasErrors() {
return t, diag
}
// if had errors, it could be because the expression is 'nil', i.e., unspecified
if v, err := expr.Value(nil); err == nil {
if v.IsNull() {
return cty.DynamicPseudoType, nil
}
}
// even if the evaluation resulted in error, the original (error) diagnostics are likely more useful
return t, diag
}
// convertPrimitive converts a single string primitive value to a given cty.Type.
func convertPrimitive(name, value string, target cty.Type) (cty.Value, error) {
switch {
case target.Equals(cty.String):
return cty.StringVal(value), nil
case target.Equals(cty.Bool):
b, err := strconv.ParseBool(value)
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse %s as bool", name)
}
return cty.BoolVal(b), nil
case target.Equals(cty.Number):
n, err := strconv.ParseFloat(value, 64)
if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
err = errors.Errorf("invalid number value")
}
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse %s as number", name)
}
return cty.NumberVal(big.NewFloat(n)), nil
default:
return cty.NilVal, errors.Errorf("%s of type %s is not a primitive", name, target.FriendlyName())
}
}
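The number branch of `convertPrimitive` adds a guard that is easy to miss: Go's `strconv.ParseFloat` happily accepts the strings "NaN" and "Inf", so the code rejects those explicitly. A stdlib-only sketch of just that rule (the `parseNumber` name is illustrative, not from the source):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// parseNumber mirrors the number handling above: parse as float64, then
// reject NaN and infinities, which strconv.ParseFloat otherwise accepts.
func parseNumber(value string) (float64, error) {
	n, err := strconv.ParseFloat(value, 64)
	if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
		err = fmt.Errorf("invalid number value")
	}
	return n, err
}

func main() {
	n, err := parseNumber("3.5")
	fmt.Println(n, err) // 3.5 <nil>
	_, err = parseNumber("NaN")
	fmt.Println(err != nil) // true
}
```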
// supportedCSVType reports whether the given cty.Type might be convertible from a CSV string via valueFromCSV.
func supportedCSVType(t cty.Type) bool {
return t.IsListType() || t.IsSetType() || t.IsTupleType() || t.IsMapType()
}
// valueFromCSV takes a CSV value and converts it to the target cty.Type.
//
// This currently supports conversion to cty.List and cty.Set.
// It also contains preliminary support for cty.Map (the other collection type).
// While not considered a collection type, it also tentatively supports cty.Tuple.
func valueFromCSV(name, value string, target cty.Type) (cty.Value, error) {
fields, err := csvvalue.Fields(value, nil)
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse %s as CSV", value)
}
// used for lists and sets, which require identical processing and differ only in return type
singleTypeConvert := func(t cty.Type) ([]cty.Value, error) {
var elems []cty.Value
for _, f := range fields {
v, err := convertPrimitive(name, f, t)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse element of type %s", target.FriendlyName())
}
elems = append(elems, v)
}
return elems, nil
}
switch {
case target.IsListType():
if !target.ElementType().IsPrimitiveType() {
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
elems, err := singleTypeConvert(target.ElementType())
if err != nil {
return cty.NilVal, err
}
return cty.ListVal(elems), nil
case target.IsSetType():
if !target.ElementType().IsPrimitiveType() {
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
elems, err := singleTypeConvert(target.ElementType())
if err != nil {
return cty.NilVal, err
}
return cty.SetVal(elems), nil
case target.IsTupleType():
tupleTypes := target.TupleElementTypes()
if len(tupleTypes) != len(fields) {
return cty.NilVal, errors.Errorf("%s expects %d elements but only %d provided", target.FriendlyName(), len(tupleTypes), len(fields))
}
var elems []cty.Value
for i, f := range fields {
tt := tupleTypes[i]
if !tt.IsPrimitiveType() {
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
v, err := convertPrimitive(name, f, tt)
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse element of type %s", target.FriendlyName())
}
elems = append(elems, v)
}
return cty.TupleVal(elems), nil
case target.IsMapType():
if !target.ElementType().IsPrimitiveType() {
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
p := csvvalue.Parser{Comma: ':'}
var kvSlice []string
m := make(map[string]cty.Value)
for _, f := range fields {
kvSlice, err = p.Fields(f, kvSlice)
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse %s as k/v for variable %s", f, name)
}
if len(kvSlice) != 2 {
return cty.NilVal, errors.Errorf("expected one k/v pair but got %d pieces from %s", len(kvSlice), f)
}
v, err := convertPrimitive(name, kvSlice[1], target.ElementType())
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse element from type %s", target.FriendlyName())
}
m[kvSlice[0]] = v
}
return cty.MapVal(m), nil
default:
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
}
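Independent of the cty types, the map branch above amounts to two levels of splitting: CSV fields separated by commas, then one colon-separated key/value pair per field. A simplified stdlib-only sketch of that shape (unlike the csvvalue-based original, this ignores CSV quoting rules, and `parseCSVMap` is an invented name):

```go
package main

import (
	"fmt"
	"strings"
)

// parseCSVMap is a simplified sketch of the map branch above: split the
// value on commas, then split each field on a single ':' into key and value.
// The real code uses csvvalue with Comma set to ':' and so handles quoting.
func parseCSVMap(value string) (map[string]string, error) {
	m := map[string]string{}
	for _, f := range strings.Split(value, ",") {
		kv := strings.SplitN(f, ":", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("expected one k/v pair but got %q", f)
		}
		m[kv[0]] = kv[1]
	}
	return m, nil
}

func main() {
	m, err := parseCSVMap("region:us-east-1,replicas:3")
	fmt.Println(m["region"], m["replicas"], err) // us-east-1 3 <nil>
}
```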
// wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.
// If the error is already an hcl.Diagnostics object, it is returned as is.
func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context *hcl.Range) hcl.Diagnostics {


@@ -1,8 +1,6 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Forked from https://github.com/hashicorp/hcl/blob/4679383728fe331fc8a6b46036a27b8f818d9bc0/merged.go
package hclparser
import (


@@ -0,0 +1,687 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package hclparser
import (
"fmt"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/hashicorp/hcl/v2"
)
func TestMergedBodiesContent(t *testing.T) {
tests := []struct {
Bodies []hcl.Body
Schema *hcl.BodySchema
Want *hcl.BodyContent
DiagCount int
}{
{
[]hcl.Body{},
&hcl.BodySchema{},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
0,
},
{
[]hcl.Body{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
0,
},
{
[]hcl.Body{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
Required: true,
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
1,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
HasAttributes: []string{"name"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"name"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "second"},
},
},
},
1,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"age"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
{
Name: "age",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "first"},
},
"age": {
Name: "age",
NameRange: hcl.Range{Filename: "second"},
},
},
},
0,
},
{
[]hcl.Body{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
HasBlocks: map[string]int{
"pizza": 1,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
HasBlocks: map[string]int{
"pizza": 2,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
},
{
Type: "pizza",
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasBlocks: map[string]int{
"pizza": 1,
},
},
&testMergedBodiesVictim{
Name: "second",
HasBlocks: map[string]int{
"pizza": 1,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
DefRange: hcl.Range{Filename: "first"},
},
{
Type: "pizza",
DefRange: hcl.Range{Filename: "second"},
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
},
&testMergedBodiesVictim{
Name: "second",
HasBlocks: map[string]int{
"pizza": 2,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
DefRange: hcl.Range{Filename: "second"},
},
{
Type: "pizza",
DefRange: hcl.Range{Filename: "second"},
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasBlocks: map[string]int{
"pizza": 2,
},
},
&testMergedBodiesVictim{
Name: "second",
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
DefRange: hcl.Range{Filename: "first"},
},
{
Type: "pizza",
DefRange: hcl.Range{Filename: "first"},
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
},
&testMergedBodiesVictim{
Name: "second",
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
0,
},
}
for i, test := range tests {
t.Run(fmt.Sprintf("%02d", i), func(t *testing.T) {
merged := MergeBodies(test.Bodies)
got, diags := merged.Content(test.Schema)
if len(diags) != test.DiagCount {
t.Errorf("Wrong number of diagnostics %d; want %d", len(diags), test.DiagCount)
for _, diag := range diags {
t.Logf(" - %s", diag)
}
}
if !reflect.DeepEqual(got, test.Want) {
t.Errorf("wrong result\ngot: %s\nwant: %s", spew.Sdump(got), spew.Sdump(test.Want))
}
})
}
}
func TestMergeBodiesPartialContent(t *testing.T) {
tests := []struct {
Bodies []hcl.Body
Schema *hcl.BodySchema
WantContent *hcl.BodyContent
WantRemain hcl.Body
DiagCount int
}{
{
[]hcl.Body{},
&hcl.BodySchema{},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
mergedBodies{},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name", "age"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "first"},
},
},
},
mergedBodies{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"age"},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name", "age"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"name", "pizza"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "second"},
},
},
},
mergedBodies{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"age"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"pizza"},
},
},
1,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name", "age"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"pizza", "soda"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
{
Name: "soda",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "first"},
},
"soda": {
Name: "soda",
NameRange: hcl.Range{Filename: "second"},
},
},
},
mergedBodies{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"age"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"pizza"},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasBlocks: map[string]int{
"pizza": 1,
},
},
&testMergedBodiesVictim{
Name: "second",
HasBlocks: map[string]int{
"pizza": 1,
"soda": 2,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
DefRange: hcl.Range{Filename: "first"},
},
{
Type: "pizza",
DefRange: hcl.Range{Filename: "second"},
},
},
},
mergedBodies{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{},
HasBlocks: map[string]int{},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{},
HasBlocks: map[string]int{
"soda": 2,
},
},
},
0,
},
}
for i, test := range tests {
t.Run(fmt.Sprintf("%02d", i), func(t *testing.T) {
merged := MergeBodies(test.Bodies)
got, gotRemain, diags := merged.PartialContent(test.Schema)
if len(diags) != test.DiagCount {
t.Errorf("Wrong number of diagnostics %d; want %d", len(diags), test.DiagCount)
for _, diag := range diags {
t.Logf(" - %s", diag)
}
}
if !reflect.DeepEqual(got, test.WantContent) {
t.Errorf("wrong content result\ngot: %s\nwant: %s", spew.Sdump(got), spew.Sdump(test.WantContent))
}
if !reflect.DeepEqual(gotRemain, test.WantRemain) {
t.Errorf("wrong remaining result\ngot: %s\nwant: %s", spew.Sdump(gotRemain), spew.Sdump(test.WantRemain))
}
})
}
}
type testMergedBodiesVictim struct {
Name string
HasAttributes []string
HasBlocks map[string]int
DiagCount int
}
func (v *testMergedBodiesVictim) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
c, _, d := v.PartialContent(schema)
return c, d
}
func (v *testMergedBodiesVictim) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
remain := &testMergedBodiesVictim{
Name: v.Name,
HasAttributes: []string{},
}
hasAttrs := map[string]struct{}{}
for _, n := range v.HasAttributes {
hasAttrs[n] = struct{}{}
var found bool
for _, attrS := range schema.Attributes {
if n == attrS.Name {
found = true
break
}
}
if !found {
remain.HasAttributes = append(remain.HasAttributes, n)
}
}
content := &hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
}
rng := hcl.Range{
Filename: v.Name,
}
for _, attrS := range schema.Attributes {
_, has := hasAttrs[attrS.Name]
if has {
content.Attributes[attrS.Name] = &hcl.Attribute{
Name: attrS.Name,
NameRange: rng,
}
}
}
if v.HasBlocks != nil {
for _, blockS := range schema.Blocks {
num := v.HasBlocks[blockS.Type]
for range num {
content.Blocks = append(content.Blocks, &hcl.Block{
Type: blockS.Type,
DefRange: rng,
})
}
}
remain.HasBlocks = map[string]int{}
for n := range v.HasBlocks {
var found bool
for _, blockS := range schema.Blocks {
if blockS.Type == n {
found = true
break
}
}
if !found {
remain.HasBlocks[n] = v.HasBlocks[n]
}
}
}
diags := make(hcl.Diagnostics, v.DiagCount)
for i := range diags {
diags[i] = &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Fake diagnostic %d", i),
Detail: "For testing only.",
Context: &rng,
}
}
return content, remain, diags
}
func (v *testMergedBodiesVictim) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
attrs := make(map[string]*hcl.Attribute)
rng := hcl.Range{
Filename: v.Name,
}
for _, name := range v.HasAttributes {
attrs[name] = &hcl.Attribute{
Name: name,
NameRange: rng,
}
}
diags := make(hcl.Diagnostics, v.DiagCount)
for i := range diags {
diags[i] = &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Fake diagnostic %d", i),
Detail: "For testing only.",
Context: &rng,
}
}
return attrs, diags
}
func (v *testMergedBodiesVictim) MissingItemRange() hcl.Range {
return hcl.Range{
Filename: v.Name,
}
}


@@ -144,7 +144,7 @@ func indexOfFunc() function.Function {
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
if !args[0].Type().IsListType() && !args[0].Type().IsTupleType() {
return cty.NilVal, errors.New("argument must be a list or tuple")
}


@@ -7,9 +7,10 @@ import (
"os"
"strings"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/progress"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
@@ -33,27 +34,27 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
st, ok := dockerui.DetectGitContext(url, false)
if ok {
if ssh, err := build.CreateSSH([]*buildflags.SSH{{
ID: "default",
Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),
}}); err == nil {
sessions = append(sessions, ssh)
}
var gitAuthSecrets []*buildflags.Secret
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_TOKEN"); ok {
gitAuthSecrets = append(gitAuthSecrets, &buildflags.Secret{
ID: llb.GitAuthTokenKey,
Env: "BUILDX_BAKE_GIT_AUTH_TOKEN",
})
}
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_HEADER"); ok {
gitAuthSecrets = append(gitAuthSecrets, &buildflags.Secret{
ID: llb.GitAuthHeaderKey,
Env: "BUILDX_BAKE_GIT_AUTH_HEADER",
})
}
if len(gitAuthSecrets) > 0 {
if secrets, err := build.CreateSecrets(gitAuthSecrets); err == nil {
sessions = append(sessions, secrets)
}
}


@@ -19,8 +19,8 @@ import (
"github.com/containerd/containerd/v2/core/images"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/dockerutil"
@@ -44,7 +44,7 @@ import (
"github.com/moby/buildkit/util/progress/progresswriter"
"github.com/moby/buildkit/util/tracing"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/tonistiigi/fsutil"
@@ -59,6 +59,8 @@ const (
printLintFallbackImage = "docker/dockerfile:1.8.1@sha256:e87caa74dcb7d46cd820352bfea12591f3dba3ddc4285e19c7dcd13359f7cefd"
)
var ErrRestart = errors.New("build: restart")
type Options struct {
Inputs Inputs
@@ -76,10 +78,10 @@ type Options struct {
NetworkMode string
NoCache bool
NoCacheFilter []string
Platforms []ocispecs.Platform
Pull bool
SecretSpecs buildflags.Secrets
SSHSpecs []*buildflags.SSH
ShmSize opts.MemBytes
Tags []string
Target string
@@ -138,85 +140,69 @@ func filterAvailableNodes(nodes []builder.Node) ([]builder.Node, error) {
return nil, err
}
// findNonMobyDriver returns the first non-moby based driver.
func findNonMobyDriver(nodes []builder.Node) *driver.DriverHandle {
	for _, n := range nodes {
		if !n.Driver.IsMobyDriver() {
			return n.Driver
		}
	}
	return nil
}
// warnOnNoOutput will check if the given nodes and options would result in an output
// and prints a warning if it would not.
func warnOnNoOutput(ctx context.Context, nodes []builder.Node, opts map[string]Options) {
// Return immediately if default load is explicitly disabled or a call
// function is used.
if noDefaultLoad() || !noCallFunc(opts) {
return
}
// Find the first non-moby driver and return if it either doesn't exist
// or if the driver has default load enabled.
noMobyDriver := findNonMobyDriver(nodes)
if noMobyDriver == nil || noMobyDriver.Features(ctx)[driver.DefaultLoad] {
return
}
// Produce a warning describing the targets affected.
var noOutputTargets []string
for name, opt := range opts {
if !opt.Linked && len(opt.Exports) == 0 {
noOutputTargets = append(noOutputTargets, name)
}
}
	if len(noOutputTargets) == 0 {
		return
	}

	var warnNoOutputBuf bytes.Buffer
	warnNoOutputBuf.WriteString("No output specified ")
	if len(noOutputTargets) == 1 && noOutputTargets[0] == "default" {
		warnNoOutputBuf.WriteString(fmt.Sprintf("with %s driver", noMobyDriver.Factory().Name()))
	} else {
		warnNoOutputBuf.WriteString(fmt.Sprintf("for %s target(s) with %s driver", strings.Join(noOutputTargets, ", "), noMobyDriver.Factory().Name()))
	}
	logrus.Warnf("%s. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load", warnNoOutputBuf.String())
}
func newBuildRequests(ctx context.Context, docker *dockerutil.Client, cfg *confutil.Config, drivers map[string][]*resolvedNode, w progress.Writer, opts map[string]Options) (_ map[string][]*reqForNode, _ func(), retErr error) {
	reqForNodes := make(map[string][]*reqForNode)
	var releasers []func()
	releaseAll := func() {
		for _, fn := range releasers {
			fn()
		}
	}
	defer func() {
		if retErr != nil {
			releaseAll()
		}
	}()
for k, opt := range opts {
multiDriver := len(drivers[k]) > 1
hasMobyDriver := false
@@ -235,19 +221,19 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
opt.Platforms = np.platforms
gatewayOpts, err := np.BuildOpts(ctx)
if err != nil {
return nil, nil, err
}
localOpt := opt
so, release, err := toSolveOpt(ctx, np.Node(), multiDriver, &localOpt, gatewayOpts, cfg, w, docker)
opts[k] = localOpt
if err != nil {
return nil, nil, err
}
releasers = append(releasers, release)
if err := saveLocalState(so, k, opt, np.Node(), cfg); err != nil {
return nil, nil, err
}
addGitAttrs(so)
reqn = append(reqn, &reqForNode{
resolvedNode: np,
so: so,
@@ -270,15 +256,17 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
for _, e := range np.so.Exports {
if e.Type == "moby" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
return nil, nil, errors.Errorf("multi-node push can't currently be performed with the docker driver, please switch to a different driver")
}
}
}
}
}
}
return reqForNodes, releaseAll, nil
}
// validate that all links between targets use same drivers
func validateTargetLinks(reqForNodes map[string][]*reqForNode, drivers map[string][]*resolvedNode, opts map[string]Options) error {
for name := range opts {
dps := reqForNodes[name]
for i, dp := range dps {
@@ -288,8 +276,9 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
k2 := strings.TrimPrefix(v, "target:")
dps2, ok := drivers[k2]
if !ok {
return errors.Errorf("failed to find target %s for context %s", k2, strings.TrimPrefix(k, "context:")) // should be validated before already
}
var found bool
for _, dp2 := range dps2 {
if dp2.driverIndex == dp.driverIndex {
@@ -298,12 +287,67 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
}
}
if !found {
return errors.Errorf("failed to use %s as context %s for %s because targets build with different drivers", k2, strings.TrimPrefix(k, "context:"), name)
}
}
}
}
}
return nil
}
func toRepoOnly(in string) (string, error) {
m := map[string]struct{}{}
p := strings.Split(in, ",")
for _, pp := range p {
n, err := reference.ParseNormalizedNamed(pp)
if err != nil {
return "", err
}
m[n.Name()] = struct{}{}
}
out := make([]string, 0, len(m))
for k := range m {
out = append(out, k)
}
return strings.Join(out, ","), nil
}
type Handler struct {
Evaluate func(ctx context.Context, c gateway.Client, res *gateway.Result) error
}
func Build(ctx context.Context, nodes []builder.Node, opts map[string]Options, docker *dockerutil.Client, cfg *confutil.Config, w progress.Writer) (resp map[string]*client.SolveResponse, err error) {
return BuildWithResultHandler(ctx, nodes, opts, docker, cfg, w, nil)
}
func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[string]Options, docker *dockerutil.Client, cfg *confutil.Config, w progress.Writer, bh *Handler) (resp map[string]*client.SolveResponse, err error) {
if len(nodes) == 0 {
return nil, errors.Errorf("driver required for build")
}
nodes, err = filterAvailableNodes(nodes)
if err != nil {
return nil, errors.Wrapf(err, "no valid drivers found")
}
warnOnNoOutput(ctx, nodes, opts)
drivers, err := resolveDrivers(ctx, nodes, opts, w)
if err != nil {
return nil, err
}
eg, ctx := errgroup.WithContext(ctx)
reqForNodes, release, err := newBuildRequests(ctx, docker, cfg, drivers, w, opts)
if err != nil {
return nil, err
}
defer release()
// validate that all links between targets use same drivers
if err := validateTargetLinks(reqForNodes, drivers, opts); err != nil {
return nil, err
}
sharedSessions, err := detectSharedMounts(ctx, reqForNodes)
if err != nil {
@@ -320,7 +364,6 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
for k, opt := range opts {
err := func(k string) (err error) {
opt := opt
dps := drivers[k]
multiDriver := len(drivers[k]) > 1
@@ -392,7 +435,6 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
wg.Add(1)
sharedSessionsWG[node.Name] = wg
for _, s := range sessions {
s := s
eg.Go(func() error {
return s.Run(baseCtx, c.Dialer())
})
@@ -439,9 +481,14 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
cc := c
var callRes map[string][]byte
buildFunc := func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
var (
callRes map[string][]byte
frontendErr error
)
buildFunc := func(ctx context.Context, c gateway.Client) (_ *gateway.Result, retErr error) {
// Capture the error from this build function.
defer catchFrontendError(&retErr, &frontendErr)
if opt.CallFunc != nil {
if _, ok := req.FrontendOpt["frontend.caps"]; !ok {
req.FrontendOpt["frontend.caps"] = "moby.buildkit.frontend.subrequests+forward"
@@ -451,19 +498,11 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
req.FrontendOpt["requestid"] = "frontend." + opt.CallFunc.Name
}
res, err := c.Solve(ctx, req)
res, err := solve(ctx, c, req)
if err != nil {
req, ok := fallbackPrintError(err, req)
if ok {
res2, err2 := c.Solve(ctx, req)
if err2 != nil {
return nil, err
}
res = res2
} else {
return nil, err
}
return nil, err
}
if opt.CallFunc != nil {
callRes = res.Metadata
}
@@ -471,41 +510,27 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
rKey := resultKey(dp.driverIndex, k)
results.Set(rKey, res)
if children, ok := childTargets[rKey]; ok && len(children) > 0 {
// wait for the child targets to register their LLB before evaluating
_, err := results.Get(ctx, children...)
if err != nil {
if children := childTargets[rKey]; len(children) > 0 {
if err := waitForChildren(ctx, bh, c, res, results, children); err != nil {
return nil, err
}
// we need to wait until the child targets have completed before we can release
eg, ctx := errgroup.WithContext(ctx)
eg.Go(func() error {
return res.EachRef(func(ref gateway.Reference) error {
return ref.Evaluate(ctx)
})
})
eg.Go(func() error {
_, err := results.Get(ctx, children...)
return err
})
if err := eg.Wait(); err != nil {
} else if bh != nil && bh.Evaluate != nil {
if err := bh.Evaluate(ctx, c, res); err != nil {
return nil, err
}
}
return res, nil
}
buildRef := fmt.Sprintf("%s/%s/%s", node.Builder, node.Name, so.Ref)
var rr *client.SolveResponse
if resultHandleFunc != nil {
var resultHandle *ResultHandle
resultHandle, rr, err = NewResultHandle(ctx, cc, *so, "buildx", buildFunc, ch)
resultHandleFunc(dp.driverIndex, resultHandle)
} else {
span, ctx := tracing.StartSpan(ctx, "build")
rr, err = c.Build(ctx, *so, "buildx", buildFunc, ch)
tracing.FinishWithError(span, err)
span, ctx := tracing.StartSpan(ctx, "build")
rr, err := c.Build(ctx, *so, "buildx", buildFunc, ch)
if errors.Is(frontendErr, ErrRestart) {
err = ErrRestart
}
tracing.FinishWithError(span, err)
if !so.Internal && desktop.BuildBackendEnabled() && node.Driver.HistoryAPISupported(ctx) {
if err != nil {
return &desktop.ErrorWithBuildRef{
@@ -534,7 +559,6 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
}
}
}
node := dp.Node().Driver
if node.IsMobyDriver() {
for _, e := range so.Exports {
@@ -570,6 +594,14 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
}
}
}
// if prefer-image-digest is set in the solver options, remove the image
// config digest from the exporter's response
for _, e := range so.Exports {
if e.Attrs["prefer-image-digest"] == "true" {
delete(rr.ExporterResponse, exptypes.ExporterImageConfigDigestKey)
break
}
}
return nil
})
}
@@ -602,7 +634,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
if pushNames != "" {
err := progress.Write(pw, fmt.Sprintf("merging manifest list %s", pushNames), func() error {
descs := make([]specs.Descriptor, 0, len(res))
descs := make([]ocispecs.Descriptor, 0, len(res))
for _, r := range res {
s, ok := r.ExporterResponse[exptypes.ExporterImageDescriptorKey]
@@ -611,7 +643,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
if err != nil {
return err
}
var desc specs.Descriptor
var desc ocispecs.Descriptor
if err := json.Unmarshal(dt, &desc); err != nil {
return errors.Wrapf(err, "failed to unmarshal descriptor %s", s)
}
@@ -624,7 +656,7 @@ func BuildWithResultHandler(ctx context.Context, nodes []builder.Node, opts map[
// mediatype value in the Accept header does not seem to matter.
s, ok = r.ExporterResponse[exptypes.ExporterImageDigestKey]
if ok {
descs = append(descs, specs.Descriptor{
descs = append(descs, ocispecs.Descriptor{
Digest: digest.Digest(s),
MediaType: images.MediaTypeDockerSchema2ManifestList,
Size: -1,
@@ -1149,3 +1181,52 @@ func ReadSourcePolicy() (*spb.Policy, error) {
return &pol, nil
}
func solve(ctx context.Context, c gateway.Client, req gateway.SolveRequest) (*gateway.Result, error) {
res, err := c.Solve(ctx, req)
if err != nil {
req, ok := fallbackPrintError(err, req)
if ok {
res2, err2 := c.Solve(ctx, req)
if err2 != nil {
return nil, err
}
res = res2
} else {
return nil, err
}
}
return res, nil
}
func waitForChildren(ctx context.Context, bh *Handler, c gateway.Client, res *gateway.Result, results *waitmap.Map, children []string) error {
// wait for the child targets to register their LLB before evaluating
_, err := results.Get(ctx, children...)
if err != nil {
return err
}
// we need to wait until the child targets have completed before we can release
eg, ctx := errgroup.WithContext(ctx)
eg.Go(func() error {
if bh != nil && bh.Evaluate != nil {
return bh.Evaluate(ctx, c, res)
}
return res.EachRef(func(ref gateway.Reference) error {
return ref.Evaluate(ctx)
})
})
eg.Go(func() error {
_, err := results.Get(ctx, children...)
return err
})
return eg.Wait()
}
func catchFrontendError(retErr, frontendErr *error) {
*frontendErr = *retErr
if errors.Is(*retErr, ErrRestart) {
// Overwrite the sentinel error with a more user friendly message.
// This gets stored only in the return error.
*retErr = errors.New("build restarted by client")
}
}


@@ -9,11 +9,11 @@ import (
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *v1.Platform) (net.Conn, error) {
func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *ocispecs.Platform) (net.Conn, error) {
nodes, err := filterAvailableNodes(nodes)
if err != nil {
return nil, err
@@ -23,9 +23,9 @@ func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platfor
return nil, errors.New("no nodes available")
}
var pls []v1.Platform
var pls []ocispecs.Platform
if platform != nil {
pls = []v1.Platform{*platform}
pls = []ocispecs.Platform{*platform}
}
opts := map[string]Options{"default": {Platforms: pls}}


@@ -14,7 +14,7 @@ import (
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/util/flightcontrol"
"github.com/moby/buildkit/util/tracing"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"go.opentelemetry.io/otel/trace"
"golang.org/x/sync/errgroup"
@@ -23,7 +23,7 @@ import (
type resolvedNode struct {
resolver *nodeResolver
driverIndex int
platforms []specs.Platform
platforms []ocispecs.Platform
}
func (dp resolvedNode) Node() builder.Node {
@@ -46,7 +46,7 @@ func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error)
return opts[0], nil
}
type matchMaker func(specs.Platform) platforms.MatchComparer
type matchMaker func(ocispecs.Platform) platforms.MatchComparer
type cachedGroup[T any] struct {
g flightcontrol.Group[T]
@@ -112,7 +112,7 @@ func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw p
return nil, err
}
eg, egCtx := errgroup.WithContext(ctx)
workers := make([][]specs.Platform, len(clients))
workers := make([][]ocispecs.Platform, len(clients))
for i, c := range clients {
i, c := i, c
if c == nil {
@@ -124,7 +124,7 @@ func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw p
return errors.Wrap(err, "listing workers")
}
ps := make(map[string]specs.Platform, len(ww))
ps := make(map[string]ocispecs.Platform, len(ww))
for _, w := range ww {
for _, p := range w.Platforms {
pk := platforms.Format(platforms.Normalize(p))
@@ -145,7 +145,7 @@ func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw p
// (this time we don't care about imperfect matches)
nodes = map[string][]*resolvedNode{}
for k, opt := range opt {
node, _, err := r.resolve(ctx, opt.Platforms, pw, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
node, _, err := r.resolve(ctx, opt.Platforms, pw, platforms.Only, func(idx int, n builder.Node) []ocispecs.Platform {
return workers[idx]
})
if err != nil {
@@ -173,7 +173,7 @@ func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw p
return nodes, nil
}
func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw progress.Writer, matcher matchMaker, additional func(idx int, n builder.Node) []specs.Platform) ([]*resolvedNode, bool, error) {
func (r *nodeResolver) resolve(ctx context.Context, ps []ocispecs.Platform, pw progress.Writer, matcher matchMaker, additional func(idx int, n builder.Node) []ocispecs.Platform) ([]*resolvedNode, bool, error) {
if len(r.nodes) == 0 {
return nil, true, nil
}
@@ -203,7 +203,7 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
driverIndex: idx,
}
if len(ps) > 0 {
node.platforms = []specs.Platform{ps[i]}
node.platforms = []ocispecs.Platform{ps[i]}
}
nodes = append(nodes, node)
}
@@ -216,9 +216,9 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
return nodes, perfect, nil
}
func (r *nodeResolver) get(p specs.Platform, matcher matchMaker, additionalPlatforms func(int, builder.Node) []specs.Platform) int {
func (r *nodeResolver) get(p ocispecs.Platform, matcher matchMaker, additionalPlatforms func(int, builder.Node) []ocispecs.Platform) int {
best := -1
bestPlatform := specs.Platform{}
bestPlatform := ocispecs.Platform{}
for i, node := range r.nodes {
platforms := node.Platforms
if additionalPlatforms != nil {


@@ -7,41 +7,41 @@ import (
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/require"
)
func TestFindDriverSanity(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.OnlyStrict, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.DefaultSpec()}, nil, platforms.OnlyStrict, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, []specs.Platform{platforms.DefaultSpec()}, res[0].platforms)
require.Equal(t, []ocispecs.Platform{platforms.DefaultSpec()}, res[0].platforms)
}
func TestFindDriverEmpty(t *testing.T) {
r := makeTestResolver(nil)
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.DefaultSpec()}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Nil(t, res)
}
func TestFindDriverWeirdName(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/foobar")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/foobar")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/foobar")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -50,11 +50,11 @@ func TestFindDriverWeirdName(t *testing.T) {
}
func TestFindDriverUnknown(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
@@ -63,13 +63,13 @@ func TestFindDriverUnknown(t *testing.T) {
}
func TestSelectNodeSinglePlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -77,7 +77,7 @@ func TestSelectNodeSinglePlatform(t *testing.T) {
require.Equal(t, "aaa", res[0].Node().Builder)
// find second platform
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -85,7 +85,7 @@ func TestSelectNodeSinglePlatform(t *testing.T) {
require.Equal(t, "bbb", res[0].Node().Builder)
// find an unknown platform, should match the first driver
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/s390x")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/s390x")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
@@ -94,26 +94,26 @@ func TestSelectNodeSinglePlatform(t *testing.T) {
}
func TestSelectNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -122,27 +122,27 @@ func TestSelectNodeMultiPlatform(t *testing.T) {
}
func TestSelectNodeNonStrict(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
})
// arm64 should match itself
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v7
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -150,19 +150,19 @@ func TestSelectNodeNonStrict(t *testing.T) {
}
func TestSelectNodeNonStrictARM(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
"ccc": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "ccc", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -170,20 +170,20 @@ func TestSelectNodeNonStrictARM(t *testing.T) {
}
func TestSelectNodeNonStrictLower(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
// v8 can't be built on v7 (so we should select the default)...
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
// ...but v6 can be built on v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v6")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v6")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -191,13 +191,13 @@ func TestSelectNodeNonStrictLower(t *testing.T) {
}
func TestSelectNodePreferStart(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
"ccc": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -205,12 +205,12 @@ func TestSelectNodePreferStart(t *testing.T) {
}
func TestSelectNodePreferExact(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/arm/v8")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -218,12 +218,12 @@ func TestSelectNodePreferExact(t *testing.T) {
}
func TestSelectNodeNoPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/foobar")},
"bbb": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@@ -232,20 +232,20 @@ func TestSelectNodeNoPlatform(t *testing.T) {
}
func TestSelectNodeAdditionalPlatforms(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, func(idx int, n builder.Node) []ocispecs.Platform {
if n.Builder == "aaa" {
return []specs.Platform{platforms.MustParse("linux/arm/v7")}
return []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}
}
return nil
})
@@ -256,12 +256,12 @@ func TestSelectNodeAdditionalPlatforms(t *testing.T) {
}
func TestSplitNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/arm64"),
}, nil, platforms.Only, nil)
@@ -270,7 +270,7 @@ func TestSplitNodeMultiPlatform(t *testing.T) {
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
@@ -282,14 +282,14 @@ func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
}
func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/riscv64")},
})
// the "best" choice would be the node with both platforms, but we're using
// a naive algorithm that doesn't try to unify the platforms
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
@@ -300,7 +300,7 @@ func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
require.Equal(t, "bbb", res[1].Node().Builder)
}
func makeTestResolver(nodes map[string][]specs.Platform) *nodeResolver {
func makeTestResolver(nodes map[string][]ocispecs.Platform) *nodeResolver {
var ns []builder.Node
for name, platforms := range nodes {
ns = append(ns, builder.Node{


@@ -12,7 +12,7 @@ import (
"github.com/docker/buildx/util/gitutil"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -96,7 +96,7 @@ func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (
sha += "-dirty"
}
if setGitLabels {
res["label:"+specs.AnnotationRevision] = sha
res["label:"+ocispecs.AnnotationRevision] = sha
}
if setGitInfo {
res["vcs:revision"] = sha
@@ -105,7 +105,7 @@ func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (
if rurl, err := gitc.RemoteURL(); err == nil && rurl != "" {
if setGitLabels {
res["label:"+specs.AnnotationSource] = rurl
res["label:"+ocispecs.AnnotationSource] = rurl
}
if setGitInfo {
res["vcs:source"] = rurl


@@ -11,7 +11,7 @@ import (
"github.com/docker/buildx/util/gitutil"
"github.com/docker/buildx/util/gitutil/gittestutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -91,8 +91,8 @@ func TestGetGitAttributes(t *testing.T) {
envGitInfo: "false",
expected: []string{
"label:" + DockerfileLabel,
"label:" + specs.AnnotationRevision,
"label:" + specs.AnnotationSource,
"label:" + ocispecs.AnnotationRevision,
"label:" + ocispecs.AnnotationSource,
},
},
{
@@ -101,15 +101,14 @@ func TestGetGitAttributes(t *testing.T) {
envGitInfo: "",
expected: []string{
"label:" + DockerfileLabel,
"label:" + specs.AnnotationRevision,
"label:" + specs.AnnotationSource,
"label:" + ocispecs.AnnotationRevision,
"label:" + ocispecs.AnnotationSource,
"vcs:revision",
"vcs:source",
},
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
setupTest(t)
if tt.envGitLabels != "" {
@@ -125,9 +124,10 @@ func TestGetGitAttributes(t *testing.T) {
for _, e := range tt.expected {
assert.Contains(t, so.FrontendAttrs, e)
assert.NotEmpty(t, so.FrontendAttrs[e])
if e == "label:"+DockerfileLabel {
switch e {
case "label:" + DockerfileLabel:
assert.Equal(t, "Dockerfile", so.FrontendAttrs[e])
} else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
case "label:" + ocispecs.AnnotationSource, "vcs:source":
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs[e])
}
}
@@ -155,10 +155,10 @@ func TestGetGitAttributesDirty(t *testing.T) {
assert.Contains(t, so.FrontendAttrs, "label:"+DockerfileLabel)
assert.Equal(t, "Dockerfile", so.FrontendAttrs["label:"+DockerfileLabel])
assert.Contains(t, so.FrontendAttrs, "label:"+specs.AnnotationSource)
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["label:"+specs.AnnotationSource])
assert.Contains(t, so.FrontendAttrs, "label:"+specs.AnnotationRevision)
assert.True(t, strings.HasSuffix(so.FrontendAttrs["label:"+specs.AnnotationRevision], "-dirty"))
assert.Contains(t, so.FrontendAttrs, "label:"+ocispecs.AnnotationSource)
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["label:"+ocispecs.AnnotationSource])
assert.Contains(t, so.FrontendAttrs, "label:"+ocispecs.AnnotationRevision)
assert.True(t, strings.HasSuffix(so.FrontendAttrs["label:"+ocispecs.AnnotationRevision], "-dirty"))
assert.Contains(t, so.FrontendAttrs, "vcs:source")
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["vcs:source"])


@@ -8,12 +8,41 @@ import (
"sync/atomic"
"syscall"
controllerapi "github.com/docker/buildx/controller/pb"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type InvokeConfig struct {
Entrypoint []string
Cmd []string
NoCmd bool
Env []string
User string
NoUser bool
Cwd string
NoCwd bool
Tty bool
Rollback bool
Initial bool
SuspendOn SuspendOn
}
func (cfg *InvokeConfig) NeedsDebug(err error) bool {
return cfg.SuspendOn.DebugEnabled(err)
}
type SuspendOn int
const (
SuspendError SuspendOn = iota
SuspendAlways
)
func (s SuspendOn) DebugEnabled(err error) bool {
return err != nil || s == SuspendAlways
}
type Container struct {
cancelOnce sync.Once
containerCancel func(error)
@@ -24,29 +53,21 @@ type Container struct {
resultCtx *ResultHandle
}
func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig) (*Container, error) {
func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *InvokeConfig) (*Container, error) {
mainCtx := ctx
ctrCh := make(chan *Container)
errCh := make(chan error)
ctrCh := make(chan *Container, 1)
errCh := make(chan error, 1)
go func() {
err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
ctx, cancel := context.WithCancelCause(ctx)
go func() {
<-mainCtx.Done()
cancel(errors.WithStack(context.Canceled))
}()
containerCfg, err := resultCtx.getContainerConfig(cfg)
if err != nil {
return nil, err
}
err := func() error {
containerCtx, containerCancel := context.WithCancelCause(ctx)
defer containerCancel(errors.WithStack(context.Canceled))
bkContainer, err := c.NewContainer(containerCtx, containerCfg)
bkContainer, err := resultCtx.NewContainer(containerCtx, cfg)
if err != nil {
return nil, err
return err
}
releaseCh := make(chan struct{})
container := &Container{
containerCancel: containerCancel,
@@ -63,8 +84,8 @@ func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllera
ctrCh <- container
<-container.releaseCh
return nil, bkContainer.Release(ctx)
})
return bkContainer.Release(ctx)
}()
if err != nil {
errCh <- err
}
@@ -97,7 +118,7 @@ func (c *Container) markUnavailable() {
c.isUnavailable.Store(true)
}
func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
func (c *Container) Exec(ctx context.Context, cfg *InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if isInit := c.initStarted.CompareAndSwap(false, true); isInit {
defer func() {
// container can't be used after init exits
@@ -112,7 +133,7 @@ func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, s
return err
}
func exec(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
func exec(ctx context.Context, resultCtx *ResultHandle, cfg *InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
processCfg, err := resultCtx.getProcessConfig(cfg, stdin, stdout, stderr)
if err != nil {
return err


@ -4,6 +4,7 @@ import (
"bytes"
"context"
"io"
"maps"
"os"
"path/filepath"
"slices"
@ -11,12 +12,15 @@ import (
"strings"
"syscall"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/containerd/console"
"github.com/containerd/containerd/v2/core/content"
"github.com/containerd/containerd/v2/plugins/content/local"
"github.com/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
@ -26,6 +30,9 @@ import (
"github.com/moby/buildkit/client/ociindex"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/secrets/secretsprovider"
"github.com/moby/buildkit/session/sshforward/sshprovider"
"github.com/moby/buildkit/session/upload/uploadprovider"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
@ -237,6 +244,11 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *O
opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
return w, nil
}
// if docker is using the containerd snapshotter, prefer to export the image digest
// (rather than the image config digest). See https://github.com/moby/moby/issues/45458.
if features[dockerutil.OCIImporter] {
opt.Exports[i].Attrs["prefer-image-digest"] = "true"
}
}
} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
@ -389,7 +401,7 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw pro
if err != nil && err != io.EOF {
return nil, errors.Wrap(err, "failed to peek context header from STDIN")
}
if !(err == io.EOF && len(magic) == 0) {
if err != io.EOF || len(magic) != 0 {
if isArchive(magic) {
// stdin is context
up := uploadprovider.New()
@ -498,8 +510,7 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw pro
}
// handle OCI layout
if strings.HasPrefix(v.Path, "oci-layout://") {
localPath := strings.TrimPrefix(v.Path, "oci-layout://")
if localPath, ok := strings.CutPrefix(v.Path, "oci-layout://"); ok {
localPath, dig, hasDigest := strings.Cut(localPath, "@")
localPath, tag, hasTag := strings.Cut(localPath, ":")
if !hasTag {
@ -655,3 +666,221 @@ type fs struct {
}
var _ fsutil.FS = &fs{}
func CreateSSH(ssh []*buildflags.SSH) (session.Attachable, error) {
configs := make([]sshprovider.AgentConfig, 0, len(ssh))
for _, ssh := range ssh {
cfg := sshprovider.AgentConfig{
ID: ssh.ID,
Paths: slices.Clone(ssh.Paths),
}
configs = append(configs, cfg)
}
return sshprovider.NewSSHAgentProvider(configs)
}
func CreateSecrets(secrets []*buildflags.Secret) (session.Attachable, error) {
fs := make([]secretsprovider.Source, 0, len(secrets))
for _, secret := range secrets {
fs = append(fs, secretsprovider.Source{
ID: secret.ID,
FilePath: secret.FilePath,
Env: secret.Env,
})
}
store, err := secretsprovider.NewStore(fs)
if err != nil {
return nil, err
}
return secretsprovider.NewSecretProvider(store), nil
}
func CreateExports(entries []*buildflags.ExportEntry) ([]client.ExportEntry, []string, error) {
var outs []client.ExportEntry
var localPaths []string
if len(entries) == 0 {
return nil, nil, nil
}
var stdoutUsed bool
for _, entry := range entries {
if entry.Type == "" {
return nil, nil, errors.Errorf("type is required for output")
}
out := client.ExportEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
maps.Copy(out.Attrs, entry.Attrs)
supportFile := false
supportDir := false
switch out.Type {
case client.ExporterLocal:
supportDir = true
case client.ExporterTar:
supportFile = true
case client.ExporterOCI, client.ExporterDocker:
tar, err := strconv.ParseBool(out.Attrs["tar"])
if err != nil {
tar = true
}
supportFile = tar
supportDir = !tar
case "registry":
out.Type = client.ExporterImage
out.Attrs["push"] = "true"
}
if supportDir {
if entry.Destination == "" {
return nil, nil, errors.Errorf("dest is required for %s exporter", out.Type)
}
if entry.Destination == "-" {
return nil, nil, errors.Errorf("dest cannot be stdout for %s exporter", out.Type)
}
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
return nil, nil, errors.Wrapf(err, "invalid destination directory: %s", entry.Destination)
}
if err == nil && !fi.IsDir() {
return nil, nil, errors.Errorf("destination directory %s is a file", entry.Destination)
}
out.OutputDir = entry.Destination
localPaths = append(localPaths, entry.Destination)
}
if supportFile {
if entry.Destination == "" && out.Type != client.ExporterDocker {
entry.Destination = "-"
}
if entry.Destination == "-" {
if stdoutUsed {
return nil, nil, errors.Errorf("multiple outputs configured to write to stdout")
}
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
return nil, nil, errors.Errorf("dest file is required for %s exporter. refusing to write to console", out.Type)
}
out.Output = wrapWriteCloser(os.Stdout)
stdoutUsed = true
} else if entry.Destination != "" {
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
return nil, nil, errors.Wrapf(err, "invalid destination file: %s", entry.Destination)
}
if err == nil && fi.IsDir() {
return nil, nil, errors.Errorf("destination file %s is a directory", entry.Destination)
}
f, err := os.Create(entry.Destination)
if err != nil {
return nil, nil, errors.Errorf("failed to open %s", err)
}
out.Output = wrapWriteCloser(f)
localPaths = append(localPaths, entry.Destination)
}
}
outs = append(outs, out)
}
return outs, localPaths, nil
}
func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
return func(map[string]string) (io.WriteCloser, error) {
return wc, nil
}
}
func CreateCaches(entries []*buildflags.CacheOptionsEntry) []client.CacheOptionsEntry {
var outs []client.CacheOptionsEntry
if len(entries) == 0 {
return nil
}
for _, entry := range entries {
out := client.CacheOptionsEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
maps.Copy(out.Attrs, entry.Attrs)
addGithubToken(&out)
addAwsCredentials(&out)
if !isActive(&out) {
continue
}
outs = append(outs, out)
}
return outs
}
func addGithubToken(ci *client.CacheOptionsEntry) {
if ci.Type != "gha" {
return
}
version, ok := ci.Attrs["version"]
if !ok {
// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L19
if v, ok := os.LookupEnv("ACTIONS_CACHE_SERVICE_V2"); ok {
if b, err := strconv.ParseBool(v); err == nil && b {
version = "2"
}
}
}
if _, ok := ci.Attrs["token"]; !ok {
if v, ok := os.LookupEnv("ACTIONS_RUNTIME_TOKEN"); ok {
ci.Attrs["token"] = v
}
}
if _, ok := ci.Attrs["url_v2"]; !ok && version == "2" {
// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L34-L35
if v, ok := os.LookupEnv("ACTIONS_RESULTS_URL"); ok {
ci.Attrs["url_v2"] = v
}
}
if _, ok := ci.Attrs["url"]; !ok {
// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L28-L33
if v, ok := os.LookupEnv("ACTIONS_CACHE_URL"); ok {
ci.Attrs["url"] = v
} else if v, ok := os.LookupEnv("ACTIONS_RESULTS_URL"); ok {
ci.Attrs["url"] = v
}
}
}
func addAwsCredentials(ci *client.CacheOptionsEntry) {
if ci.Type != "s3" {
return
}
_, okAccessKeyID := ci.Attrs["access_key_id"]
_, okSecretAccessKey := ci.Attrs["secret_access_key"]
// If the user provides access_key_id, secret_access_key, do not override the session token.
if okAccessKeyID && okSecretAccessKey {
return
}
ctx := context.TODO()
awsConfig, err := awsconfig.LoadDefaultConfig(ctx)
if err != nil {
return
}
credentials, err := awsConfig.Credentials.Retrieve(ctx)
if err != nil {
return
}
if !okAccessKeyID && credentials.AccessKeyID != "" {
ci.Attrs["access_key_id"] = credentials.AccessKeyID
}
if !okSecretAccessKey && credentials.SecretAccessKey != "" {
ci.Attrs["secret_access_key"] = credentials.SecretAccessKey
}
if _, ok := ci.Attrs["session_token"]; !ok && credentials.SessionToken != "" {
ci.Attrs["session_token"] = credentials.SessionToken
}
}
func isActive(ce *client.CacheOptionsEntry) bool {
// Always active if not gha.
if ce.Type != "gha" {
return true
}
return ce.Attrs["token"] != "" && (ce.Attrs["url"] != "" || ce.Attrs["url_v2"] != "")
}
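The environment-variable precedence implemented by `addGithubToken` above can be sketched standalone (hypothetical helper name, stdlib only; the real code reads the process environment via `os.LookupEnv` and mutates a `client.CacheOptionsEntry`):

```go
package main

import (
	"fmt"
	"strconv"
)

// ghaCacheAttrs mirrors the derivation above: token from
// ACTIONS_RUNTIME_TOKEN, url from ACTIONS_CACHE_URL with
// ACTIONS_RESULTS_URL as fallback, and url_v2 only when the
// v2 cache service is enabled. Existing attrs are never overridden.
func ghaCacheAttrs(env map[string]string, attrs map[string]string) {
	version := attrs["version"]
	if version == "" {
		if b, err := strconv.ParseBool(env["ACTIONS_CACHE_SERVICE_V2"]); err == nil && b {
			version = "2"
		}
	}
	if _, ok := attrs["token"]; !ok {
		if v, ok := env["ACTIONS_RUNTIME_TOKEN"]; ok {
			attrs["token"] = v
		}
	}
	if _, ok := attrs["url_v2"]; !ok && version == "2" {
		if v, ok := env["ACTIONS_RESULTS_URL"]; ok {
			attrs["url_v2"] = v
		}
	}
	if _, ok := attrs["url"]; !ok {
		if v, ok := env["ACTIONS_CACHE_URL"]; ok {
			attrs["url"] = v
		} else if v, ok := env["ACTIONS_RESULTS_URL"]; ok {
			attrs["url"] = v
		}
	}
}

func main() {
	attrs := map[string]string{}
	ghaCacheAttrs(map[string]string{
		"ACTIONS_RUNTIME_TOKEN":    "tok",
		"ACTIONS_CACHE_SERVICE_V2": "true",
		"ACTIONS_RESULTS_URL":      "https://results.example",
	}, attrs)
	fmt.Println(attrs["token"], attrs["url_v2"], attrs["url"])
	// tok https://results.example https://results.example
}
```

Note that `ACTIONS_RESULTS_URL` serves double duty: it becomes `url_v2` when the v2 cache service is detected, and the fallback `url` when `ACTIONS_CACHE_URL` is absent.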

build/opt_test.go Normal file

@ -0,0 +1,40 @@
package build
import (
"testing"
"github.com/docker/buildx/util/buildflags"
"github.com/moby/buildkit/client"
"github.com/stretchr/testify/require"
)
func TestCacheOptions_DerivedVars(t *testing.T) {
t.Setenv("ACTIONS_RUNTIME_TOKEN", "sensitive_token")
t.Setenv("ACTIONS_CACHE_URL", "https://cache.github.com")
t.Setenv("AWS_ACCESS_KEY_ID", "definitely_dont_look_here")
t.Setenv("AWS_SECRET_ACCESS_KEY", "hackers_please_dont_steal")
t.Setenv("AWS_SESSION_TOKEN", "not_a_mitm_attack")
cacheFrom, err := buildflags.ParseCacheEntry([]string{"type=gha", "type=s3,region=us-west-2,bucket=my_bucket,name=my_image"})
require.NoError(t, err)
require.Equal(t, []client.CacheOptionsEntry{
{
Type: "gha",
Attrs: map[string]string{
"token": "sensitive_token",
"url": "https://cache.github.com",
},
},
{
Type: "s3",
Attrs: map[string]string{
"region": "us-west-2",
"bucket": "my_bucket",
"name": "my_image",
"access_key_id": "definitely_dont_look_here",
"secret_access_key": "hackers_please_dont_steal",
"session_token": "not_a_mitm_attack",
},
},
}, CreateCaches(cacheFrom))
}


@ -13,6 +13,7 @@ import (
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
slsa1 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v1"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
@ -22,15 +23,6 @@ import (
"golang.org/x/sync/errgroup"
)
type provenancePredicate struct {
Builder *provenanceBuilder `json:"builder,omitempty"`
provenancetypes.ProvenancePredicate
}
type provenanceBuilder struct {
ID string `json:"id,omitempty"`
}
func setRecordProvenance(ctx context.Context, c *client.Client, sr *client.SolveResponse, ref string, mode confutil.MetadataProvenanceMode, pw progress.Writer) error {
if mode == confutil.MetadataProvenanceModeDisabled {
return nil
@ -69,7 +61,7 @@ func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode con
continue
}
if ev.Record.Result != nil {
desc := lookupProvenance(ev.Record.Result)
desc, predicateType := lookupProvenance(ev.Record.Result)
if desc == nil {
continue
}
@ -78,7 +70,7 @@ func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode con
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
prv, err := encodeProvenance(dt, predicateType, mode)
if err != nil {
return err
}
@ -92,8 +84,7 @@ func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode con
})
} else if ev.Record.Results != nil {
for platform, res := range ev.Record.Results {
platform := platform
desc := lookupProvenance(res)
desc, predicateType := lookupProvenance(res)
if desc == nil {
continue
}
@ -102,7 +93,7 @@ func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode con
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, mode)
prv, err := encodeProvenance(dt, predicateType, mode)
if err != nil {
return err
}
@ -120,7 +111,7 @@ func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode con
return out, eg.Wait()
}
func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
func lookupProvenance(res *controlapi.BuildResultInfo) (*ocispecs.Descriptor, string) {
for _, a := range res.Attestations {
if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
return &ocispecs.Descriptor{
@ -128,27 +119,29 @@ func lookupProvenance(res *controlapi.BuildResultInfo) *ocispecs.Descriptor {
Size: a.Size,
MediaType: a.MediaType,
Annotations: a.Annotations,
}
}, a.Annotations["in-toto.io/predicate-type"]
}
}
return nil
return nil, ""
}
func encodeProvenance(dt []byte, mode confutil.MetadataProvenanceMode) (string, error) {
var prv provenancePredicate
if err := json.Unmarshal(dt, &prv); err != nil {
func encodeProvenance(dt []byte, predicateType string, mode confutil.MetadataProvenanceMode) (string, error) {
var pred *provenancetypes.ProvenancePredicateSLSA02
if predicateType == slsa1.PredicateSLSAProvenance {
var pred1 *provenancetypes.ProvenancePredicateSLSA1
if err := json.Unmarshal(dt, &pred1); err != nil {
return "", errors.Wrapf(err, "failed to unmarshal provenance")
}
pred = pred1.ConvertToSLSA02()
} else if err := json.Unmarshal(dt, &pred); err != nil {
return "", errors.Wrapf(err, "failed to unmarshal provenance")
}
if prv.Builder != nil && prv.Builder.ID == "" {
// reset builder if id is empty
prv.Builder = nil
}
if mode == confutil.MetadataProvenanceModeMin {
// reset fields for minimal provenance
prv.BuildConfig = nil
prv.Metadata = nil
pred.BuildConfig = nil
pred.Metadata = nil
}
dtprv, err := json.Marshal(prv)
dtprv, err := json.Marshal(pred)
if err != nil {
return "", errors.Wrapf(err, "failed to marshal provenance")
}


@ -7,260 +7,41 @@ import (
"io"
"sync"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/solver/result"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
// NewResultHandle makes a call to client.Build, additionally returning a
// opaque ResultHandle alongside the standard response and error.
// NewResultHandle stores a gateway client, gateway result, and the error from
// an evaluate call if it is present.
//
// This ResultHandle can be used to execute additional build steps in the same
// context as the build occurred, which can allow easy debugging of build
// failures and successes.
//
// If the returned ResultHandle is not nil, the caller must call Done() on it.
func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, product string, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) (*ResultHandle, *client.SolveResponse, error) {
// Create a new context to wrap the original, and cancel it when the
// caller-provided context is cancelled.
//
// We derive the context from the background context so that we can forbid
// cancellation of the build request after <-done is closed (which we do
// before returning the ResultHandle).
baseCtx := ctx
ctx, cancel := context.WithCancelCause(context.Background())
done := make(chan struct{})
go func() {
select {
case <-baseCtx.Done():
cancel(baseCtx.Err())
case <-done:
// Once done is closed, we've recorded a ResultHandle, so we
// shouldn't allow cancelling the underlying build request anymore.
}
}()
// Create a new channel to forward status messages to the original.
//
// We do this so that we can discard status messages after the main portion
// of the build is complete. This is necessary for the solve error case,
// where the original gateway is kept open until the ResultHandle is
// closed - we don't want progress messages from operations in that
// ResultHandle to display after this function exits.
//
// Additionally, callers should wait for the progress channel to be closed.
// If we keep the session open and never close the progress channel, the
// caller will likely hang.
baseCh := ch
ch = make(chan *client.SolveStatus)
go func() {
for {
s, ok := <-ch
if !ok {
return
}
select {
case <-baseCh:
// base channel is closed, discard status messages
default:
baseCh <- s
}
}
}()
defer close(baseCh)
var resp *client.SolveResponse
var respErr error
var respHandle *ResultHandle
go func() {
defer func() { cancel(errors.WithStack(context.Canceled)) }() // ensure no dangling processes
var res *gateway.Result
var err error
resp, err = cc.Build(ctx, opt, product, func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
var err error
res, err = buildFunc(ctx, c)
if res != nil && err == nil {
// Force evaluation of the build result (otherwise, we likely
// won't get a solve error)
def, err2 := getDefinition(ctx, res)
if err2 != nil {
return nil, err2
}
res, err = evalDefinition(ctx, c, def)
}
if err != nil {
// Scenario 1: we failed to evaluate a node somewhere in the
// build graph.
//
// In this case, we construct a ResultHandle from this
// original Build session, and return it alongside the original
// build error. We then need to keep the gateway session open
// until the caller explicitly closes the ResultHandle.
var se *errdefs.SolveError
if errors.As(err, &se) {
respHandle = &ResultHandle{
done: make(chan struct{}),
solveErr: se,
gwClient: c,
gwCtx: ctx,
}
respErr = err // return original error to preserve stacktrace
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
}
}
return res, err
}, ch)
if respHandle != nil {
return
}
if err != nil {
// Something unexpected failed during the build, we didn't succeed,
// but we also didn't make it far enough to create a ResultHandle.
respErr = err
close(done)
return
}
// Scenario 2: we successfully built the image with no errors.
//
// In this case, the original gateway session has now been closed
// since the Build has been completed. So, we need to create a new
// gateway session to populate the ResultHandle. To do this, we
// need to re-evaluate the target result, in this new session. This
// should be instantaneous since the result should be cached.
def, err := getDefinition(ctx, res)
if err != nil {
respErr = err
close(done)
return
}
// NOTE: ideally this second connection should be lazily opened
opt := opt
opt.Ref = ""
opt.Exports = nil
opt.CacheExports = nil
opt.Internal = true
_, respErr = cc.Build(ctx, opt, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
res, err := evalDefinition(ctx, c, def)
if err != nil {
// This should probably not happen, since we've previously
// successfully evaluated the same result with no issues.
return nil, errors.Wrap(err, "inconsistent solve result")
}
respHandle = &ResultHandle{
done: make(chan struct{}),
res: res,
gwClient: c,
gwCtx: ctx,
}
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
return nil, context.Cause(ctx)
}, nil)
if respHandle != nil {
return
}
close(done)
}()
// Block until the other thread signals that it's completed the build.
select {
case <-done:
case <-baseCtx.Done():
if respErr == nil {
respErr = baseCtx.Err()
}
func NewResultHandle(ctx context.Context, c gateway.Client, res *gateway.Result, err error) *ResultHandle {
rCtx := &ResultHandle{
res: res,
gwClient: c,
}
return respHandle, resp, respErr
}
// getDefinition converts a gateway result into a collection of definitions for
// each ref in the result.
func getDefinition(ctx context.Context, res *gateway.Result) (*result.Result[*pb.Definition], error) {
return result.ConvertResult(res, func(ref gateway.Reference) (*pb.Definition, error) {
st, err := ref.ToState()
if err != nil {
return nil, err
}
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
}
return def.ToPB(), nil
})
}
// evalDefinition performs the reverse of getDefinition, converting a
// collection of definitions into a gateway result.
func evalDefinition(ctx context.Context, c gateway.Client, defs *result.Result[*pb.Definition]) (*gateway.Result, error) {
// force evaluation of all targets in parallel
results := make(map[*pb.Definition]*gateway.Result)
resultsMu := sync.Mutex{}
eg, egCtx := errgroup.WithContext(ctx)
defs.EachRef(func(def *pb.Definition) error {
eg.Go(func() error {
res, err := c.Solve(egCtx, gateway.SolveRequest{
Evaluate: true,
Definition: def,
})
if err != nil {
return err
}
resultsMu.Lock()
results[def] = res
resultsMu.Unlock()
return nil
})
if err != nil && !errors.As(err, &rCtx.solveErr) {
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
res, _ := result.ConvertResult(defs, func(def *pb.Definition) (gateway.Reference, error) {
if res, ok := results[def]; ok {
return res.Ref, nil
}
return nil, nil
})
return res, nil
return rCtx
}
// ResultHandle is a build result with the client that built it.
type ResultHandle struct {
res *gateway.Result
solveErr *errdefs.SolveError
done chan struct{}
doneOnce sync.Once
gwClient gateway.Client
gwCtx context.Context
doneOnce sync.Once
cleanups []func()
cleanupsMu sync.Mutex
@ -275,9 +56,6 @@ func (r *ResultHandle) Done() {
for _, f := range cleanups {
f()
}
close(r.done)
<-r.gwCtx.Done()
})
}
@ -287,12 +65,15 @@ func (r *ResultHandle) registerCleanup(f func()) {
r.cleanupsMu.Unlock()
}
func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
_, err = buildFunc(r.gwCtx, r.gwClient)
return err
func (r *ResultHandle) NewContainer(ctx context.Context, cfg *InvokeConfig) (gateway.Container, error) {
req, err := r.getContainerConfig(cfg)
if err != nil {
return nil, err
}
return r.gwClient.NewContainer(ctx, req)
}
func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
func (r *ResultHandle) getContainerConfig(cfg *InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
ccfg, err := containerConfigFromResult(r.res, cfg)
@ -311,7 +92,7 @@ func (r *ResultHandle) getContainerConfig(cfg *controllerapi.InvokeConfig) (cont
return containerCfg, nil
}
func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
func (r *ResultHandle) getProcessConfig(cfg *InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
processCfg := newStartRequest(stdin, stdout, stderr)
if r.res != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
@ -327,7 +108,7 @@ func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin i
return processCfg, nil
}
func containerConfigFromResult(res *gateway.Result, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
func containerConfigFromResult(res *gateway.Result, cfg *InvokeConfig) (*gateway.NewContainerRequest, error) {
if cfg.Initial {
return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
}
@ -352,11 +133,11 @@ func containerConfigFromResult(res *gateway.Result, cfg *controllerapi.InvokeCon
}, nil
}
func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg *controllerapi.InvokeConfig) error {
func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg *InvokeConfig) error {
imgData := res.Metadata[exptypes.ExporterImageConfigKey]
var img *specs.Image
var img *ocispecs.Image
if len(imgData) > 0 {
img = &specs.Image{}
img = &ocispecs.Image{}
if err := json.Unmarshal(imgData, img); err != nil {
return err
}
@ -403,16 +184,16 @@ func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Res
return nil
}
func containerConfigFromError(solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
func containerConfigFromError(solveErr *errdefs.SolveError, cfg *InvokeConfig) (*gateway.NewContainerRequest, error) {
exec, err := execOpFromError(solveErr)
if err != nil {
return nil, err
}
var mounts []gateway.Mount
for i, mnt := range exec.Mounts {
rid := solveErr.Solve.MountIDs[i]
rid := solveErr.MountIDs[i]
if cfg.Initial {
rid = solveErr.Solve.InputIDs[i]
rid = solveErr.InputIDs[i]
}
mounts = append(mounts, gateway.Mount{
Selector: mnt.Selector,
@ -431,7 +212,7 @@ func containerConfigFromError(solveErr *errdefs.SolveError, cfg *controllerapi.I
}, nil
}
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg *controllerapi.InvokeConfig) error {
func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg *InvokeConfig) error {
exec, err := execOpFromError(solveErr)
if err != nil {
return err
@ -477,7 +258,7 @@ func execOpFromError(solveErr *errdefs.SolveError) (*pb.ExecOp, error) {
if solveErr == nil {
return nil, errors.Errorf("no error is available")
}
switch op := solveErr.Solve.Op.GetOp().(type) {
switch op := solveErr.Op.GetOp().(type) {
case *pb.Op_Exec:
return op.Exec, nil
default:


@ -77,24 +77,30 @@ func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.
}
// If the IP Address is a "host-gateway", replace this value with the
// IP address provided by the worker's label.
var ips []string
if ip == mobyHostGatewayName {
hgip, err := nodeDriver.HostGatewayIP(ctx)
if err != nil {
return "", errors.Wrap(err, "unable to derive the IP value for host-gateway")
}
ip = hgip.String()
ips = append(ips, hgip.String())
} else {
// If the address is enclosed in square brackets, extract it (for IPv6, but
// permit it for IPv4 as well; we don't know the address family here, but it's
// unambiguous).
if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
ip = ip[1 : len(ip)-1]
}
if net.ParseIP(ip) == nil {
return "", errors.Errorf("invalid host %s", h)
for _, v := range strings.Split(ip, ",") {
// If the address is enclosed in square brackets, extract it
// (for IPv6, but permit it for IPv4 as well; we don't know the
// address family here, but it's unambiguous).
if len(v) > 2 && v[0] == '[' && v[len(v)-1] == ']' {
v = v[1 : len(v)-1]
}
if net.ParseIP(v) == nil {
return "", errors.Errorf("invalid host %s", h)
}
ips = append(ips, v)
}
}
hosts = append(hosts, host+"="+ip)
for _, v := range ips {
hosts = append(hosts, host+"="+v)
}
}
return strings.Join(hosts, ","), nil
}
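The new comma-separated-IP handling can be sketched standalone (hypothetical helper name, stdlib only; the real `toBuildkitExtraHosts` additionally resolves `host-gateway` through the driver):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// expandHost mirrors the loop above: "myhost=ip1,ip2" yields one
// "host=ip" pair per address, with optional square brackets stripped
// from each address (IPv6 style, but permitted for IPv4 too).
func expandHost(host, ipList string) ([]string, error) {
	var out []string
	for _, v := range strings.Split(ipList, ",") {
		if len(v) > 2 && v[0] == '[' && v[len(v)-1] == ']' {
			v = v[1 : len(v)-1]
		}
		if net.ParseIP(v) == nil {
			return nil, fmt.Errorf("invalid host %s", v)
		}
		out = append(out, host+"="+v)
	}
	return out, nil
}

func main() {
	hosts, err := expandHost("myhost", "162.242.195.82,162.242.195.83")
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(hosts, ","))
	// myhost=162.242.195.82,myhost=162.242.195.83
}
```

This matches the "Multi IPs" test case added in the next file: one input entry fans out into multiple `host=ip` pairs in the joined output.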


@ -72,6 +72,11 @@ func TestToBuildkitExtraHosts(t *testing.T) {
doc: "IPv6 localhost, non-canonical, eq sep",
input: []string{`ipv6local=0:0:0:0:0:0:0:1`},
},
{
doc: "Multi IPs",
input: []string{`myhost=162.242.195.82,162.242.195.83`},
expectedOut: `myhost=162.242.195.82,myhost=162.242.195.83`,
},
{
doc: "IPv6 localhost, non-canonical, eq sep, brackets",
input: []string{`ipv6local=[0:0:0:0:0:0:0:1]`},
@ -130,7 +135,6 @@ func TestToBuildkitExtraHosts(t *testing.T) {
}
for _, tc := range tests {
tc := tc
if tc.expectedOut == "" {
tc.expectedOut = strings.Join(tc.input, ",")
}


@ -122,7 +122,7 @@ func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
// Validate validates builder context
func (b *Builder) Validate() error {
if b.NodeGroup != nil && b.NodeGroup.DockerContext {
if b.NodeGroup != nil && b.DockerContext {
list, err := b.opts.dockerCli.ContextStore().List()
if err != nil {
return err
@ -144,7 +144,7 @@ func (b *Builder) ContextName() string {
return ""
}
for _, cb := range ctxbuilders {
if b.NodeGroup.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
if b.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
return cb.Name
}
}
@ -254,7 +254,7 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
if err != nil {
return
}
b.Driver = b.driverFactory.Factory.Name()
b.Driver = b.driverFactory.Name()
}
})
return b.driverFactory.Factory, err
@ -309,7 +309,7 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
return nil, err
}
builders[i] = b
seen[b.NodeGroup.Name] = struct{}{}
seen[b.Name] = struct{}{}
}
for _, c := range contexts {
@ -524,7 +524,7 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
}
cancelCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ := context.WithTimeoutCause(cancelCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
timeoutCtx, _ := context.WithTimeoutCause(cancelCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
nodes, err := b.LoadNodes(timeoutCtx, WithData())


@ -190,7 +190,6 @@ foo = "bar"
},
}
for _, tt := range testCases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts, tt.buildkitdConfigFile)
if tt.wantErr {


@ -183,7 +183,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
// not append (remove the static nodes in the store)
b.NodeGroup.Nodes = dynamicNodes
b.nodes = nodes
b.NodeGroup.Dynamic = true
b.Dynamic = true
}
}


@ -7,7 +7,6 @@ import (
"path/filepath"
"github.com/docker/buildx/commands"
controllererrors "github.com/docker/buildx/controller/errdefs"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
@ -15,10 +14,12 @@ import (
"github.com/docker/cli/cli-plugins/plugin"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/moby/buildkit/solver/errdefs"
solvererrdefs "github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/stack"
"github.com/pkg/errors"
"go.opentelemetry.io/otel"
"google.golang.org/grpc/codes"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
@ -100,7 +101,7 @@ func main() {
os.Exit(sterr.StatusCode)
}
for _, s := range errdefs.Sources(err) {
for _, s := range solvererrdefs.Sources(err) {
s.Print(cmd.Err())
}
if debug.IsEnabled() {
@ -112,12 +113,21 @@ func main() {
var ebr *desktop.ErrorWithBuildRef
if errors.As(err, &ebr) {
ebr.Print(cmd.Err())
} else {
var be *controllererrors.BuildError
if errors.As(err, &be) {
be.PrintBuildDetails(cmd.Err())
}
exitCode := 1
switch grpcerrors.Code(err) {
case codes.Internal:
exitCode = 100 // https://github.com/square/exit/blob/v1.3.0/exit.go#L70
case codes.ResourceExhausted:
exitCode = 102
case codes.Canceled:
exitCode = 130
default:
if errors.Is(err, context.Canceled) {
exitCode = 130
}
}
os.Exit(1)
os.Exit(exitCode)
}


@ -22,7 +22,6 @@ import (
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil/completion"
@ -41,6 +40,11 @@ import (
"go.opentelemetry.io/otel/attribute"
)
const (
bakeEnvFileSeparator = "BUILDX_BAKE_PATH_SEPARATOR"
bakeEnvFilePath = "BUILDX_BAKE_FILE"
)
type bakeOptions struct {
files []string
overrides []string
@ -63,7 +67,7 @@ type bakeOptions struct {
listVars bool
}
func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags, filesFromEnv bool) (err error) {
mp := dockerCli.MeterProvider()
ctx, end, err := tracing.TraceCurrentCommand(ctx, append([]string{"bake"}, targets...),
@ -136,7 +140,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
// instance only needed for reading remote bake files or building
var driverType string
if url != "" || !(in.print || in.list != "") {
if url != "" || (!in.print && in.list == "") {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithContextPathHash(contextPathHash),
@ -163,7 +167,13 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
attributes := bakeMetricAttributes(dockerCli, driverType, url, cmdContext, targets, &in)
progressMode := progressui.DisplayMode(cFlags.progress)
var printer *progress.Printer
defer func() {
if printer != nil {
printer.Wait()
}
}()
makePrinter := func() error {
var err error
@ -181,7 +191,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
return err
}
files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer)
files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer, filesFromEnv)
if err != nil {
return err
}
@ -261,13 +271,18 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
return err
}
for _, opt := range bo {
for k, opt := range bo {
if opt.CallFunc != nil {
cf, err := buildflags.ParseCallFunc(opt.CallFunc.Name)
if err != nil {
return err
}
opt.CallFunc.Name = cf.Name
if cf == nil {
opt.CallFunc = nil
bo[k] = opt
} else {
opt.CallFunc.Name = cf.Name
}
}
}
@ -343,7 +358,7 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
continue
}
pf := &pb.CallFunc{
pf := &buildflags.CallFunc{
Name: req.CallFunc.Name,
Format: req.CallFunc.Format,
IgnoreStatus: req.CallFunc.IgnoreStatus,
@ -424,6 +439,14 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
fmt.Fprintln(dockerCli.Out(), string(dt))
}
for _, name := range names {
if sp, ok := resp[name]; ok {
if v, ok := sp.ExporterResponse["frontend.result.inlinemessage"]; ok {
fmt.Fprintf(dockerCli.Out(), "\n# %s\n%s\n", name, v)
}
}
}
if exitCode != 0 {
os.Exit(exitCode)
}
@ -440,6 +463,15 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Aliases: []string{"f"},
Short: "Build from a file",
RunE: func(cmd *cobra.Command, args []string) error {
filesFromEnv := false
if len(options.files) == 0 {
envFiles, err := bakeEnvFiles(os.LookupEnv)
if err != nil {
return err
}
options.files = envFiles
filesFromEnv = true
}
// reset to nil to avoid override is unset
if !cmd.Flags().Lookup("no-cache").Changed {
cFlags.noCache = nil
@ -457,7 +489,7 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options.builder = rootOpts.builder
options.metadataFile = cFlags.metadataFile
// Other common flags (noCache, pull and progress) are processed in runBake function.
return runBake(cmd.Context(), dockerCli, args, options, cFlags)
return runBake(cmd.Context(), dockerCli, args, options, cFlags, filesFromEnv)
},
ValidArgsFunction: completion.BakeTargets(options.files),
}
@ -492,6 +524,37 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
return cmd
}
func bakeEnvFiles(lookup func(string) (string, bool)) ([]string, error) {
sep, _ := lookup(bakeEnvFileSeparator)
if sep == "" {
sep = string(os.PathListSeparator)
}
f, ok := lookup(bakeEnvFilePath)
if ok {
return cleanPaths(strings.Split(f, sep))
}
return []string{}, nil
}
func cleanPaths(p []string) ([]string, error) {
var paths []string
for _, f := range p {
f = strings.TrimSpace(f)
if f == "" {
continue
}
if f == "-" {
paths = append(paths, f)
continue
}
if _, err := os.Stat(f); err != nil {
return nil, err
}
paths = append(paths, f)
}
return paths, nil
}
func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string, bo map[string]build.Options) error {
l, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
@ -541,13 +604,12 @@ func bakeArgs(args []string) (url, cmdContext string, targets []string) {
return url, cmdContext, targets
}
func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer, filesFromEnv bool) (files []bake.File, inp *bake.Input, err error) {
var lnames []string // local
var rnames []string // remote
var anames []string // both
for _, v := range names {
if strings.HasPrefix(v, "cwd://") {
tname := strings.TrimPrefix(v, "cwd://")
if tname, ok := strings.CutPrefix(v, "cwd://"); ok {
lnames = append(lnames, tname)
anames = append(anames, tname)
} else {
@ -567,7 +629,11 @@ func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names
if len(lnames) > 0 || url == "" {
var lfiles []bake.File
progress.Wrap("[internal] load local bake definitions", pw.Write, func(sub progress.SubLogger) error {
where := ""
if filesFromEnv {
where = " from " + bakeEnvFilePath + " env"
}
progress.Wrap("[internal] load local bake definitions"+where, pw.Write, func(sub progress.SubLogger) error {
if url != "" {
lfiles, err = bake.ReadLocalFiles(lnames, stdin, sub)
} else {
@ -651,7 +717,7 @@ func printVars(w io.Writer, format string, vars []*hclparser.Variable) error {
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("VARIABLE\tVALUE\tDESCRIPTION\n"))
tw.Write([]byte("VARIABLE\tTYPE\tVALUE\tDESCRIPTION\n"))
for _, v := range vars {
var value string
@ -660,7 +726,7 @@ func printVars(w io.Writer, format string, vars []*hclparser.Variable) error {
} else {
value = "<null>"
}
fmt.Fprintf(tw, "%s\t%s\t%s\n", v.Name, value, v.Description)
fmt.Fprintf(tw, "%s\t%s\t%s\t%s\n", v.Name, v.Type, value, v.Description)
}
return nil
}


@ -20,12 +20,6 @@ import (
"github.com/containerd/console"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/commands/debug"
"github.com/docker/buildx/controller"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
controllererrors "github.com/docker/buildx/controller/errdefs"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
@ -33,26 +27,29 @@ import (
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/metricutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/pkg/atomicwriter"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
"github.com/moby/buildkit/frontend/subrequests"
"github.com/moby/buildkit/frontend/subrequests/lint"
"github.com/moby/buildkit/frontend/subrequests/outline"
"github.com/moby/buildkit/frontend/subrequests/targets"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/solver/errdefs"
solverpb "github.com/moby/buildkit/solver/pb"
sourcepolicy "github.com/moby/buildkit/sourcepolicy/pb"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/moby/sys/atomicwriter"
"github.com/morikuni/aec"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@ -104,12 +101,10 @@ type buildOptions struct {
exportPush bool
exportLoad bool
control.ControlOptions
invokeConfig *invokeConfig
}
func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error) {
func (o *buildOptions) toOptions() (*BuildOptions, error) {
var err error
buildArgs, err := listToMap(o.buildArgs, true)
@ -122,7 +117,7 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
return nil, err
}
opts := controllerapi.BuildOptions{
opts := BuildOptions{
Allow: o.allow,
Annotations: o.annotations,
BuildArgs: buildArgs,
@ -137,7 +132,7 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
ShmSize: int64(o.shmSize),
Tags: o.tags,
Target: o.target,
Ulimits: dockerUlimitToControllerUlimit(o.ulimits),
Ulimits: o.ulimits,
Builder: o.builder,
NoCache: o.noCache,
Pull: o.pull,
@ -184,17 +179,15 @@ func (o *buildOptions) toControllerOptions() (*controllerapi.BuildOptions, error
}
}
cacheFrom, err := buildflags.ParseCacheEntry(o.cacheFrom)
opts.CacheFrom, err = buildflags.ParseCacheEntry(o.cacheFrom)
if err != nil {
return nil, err
}
opts.CacheFrom = cacheFrom.ToPB()
cacheTo, err := buildflags.ParseCacheEntry(o.cacheTo)
opts.CacheTo, err = buildflags.ParseCacheEntry(o.cacheTo)
if err != nil {
return nil, err
}
opts.CacheTo = cacheTo.ToPB()
opts.Secrets, err = buildflags.ParseSecretSpecs(o.secrets)
if err != nil {
@ -298,7 +291,7 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
end(err)
}()
opts, err := options.toControllerOptions()
opts, err := options.toOptions()
if err != nil {
return err
}
@ -355,14 +348,7 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
}
done := timeBuildCommand(mp, attributes)
var resp *client.SolveResponse
var inputs *build.Inputs
var retErr error
if confutil.IsExperimental() {
resp, inputs, retErr = runControllerBuild(ctx, dockerCli, opts, options, printer)
} else {
resp, inputs, retErr = runBasicBuild(ctx, dockerCli, opts, printer)
}
resp, inputs, retErr := runBuildWithOptions(ctx, dockerCli, opts, options, printer)
if err := printer.Wait(); retErr == nil {
retErr = err
@ -404,6 +390,10 @@ func runBuild(ctx context.Context, dockerCli command.Cli, options buildOptions)
os.Exit(exitcode)
}
}
if v, ok := resp.ExporterResponse["frontend.result.inlinemessage"]; ok {
fmt.Fprintf(dockerCli.Out(), "\n%s\n", v)
return nil
}
return nil
}
@ -416,138 +406,41 @@ func getImageID(resp map[string]string) string {
return dgst
}
func runBasicBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, printer *progress.Printer) (*client.SolveResponse, *build.Inputs, error) {
resp, res, dfmap, err := cbuild.RunBuild(ctx, dockerCli, opts, dockerCli.In(), printer, false)
if res != nil {
res.Done()
}
return resp, dfmap, err
}
func runControllerBuild(ctx context.Context, dockerCli command.Cli, opts *controllerapi.BuildOptions, options buildOptions, printer *progress.Printer) (*client.SolveResponse, *build.Inputs, error) {
func runBuildWithOptions(ctx context.Context, dockerCli command.Cli, opts *BuildOptions, options buildOptions, printer *progress.Printer) (_ *client.SolveResponse, _ *build.Inputs, retErr error) {
if options.invokeConfig != nil && (options.dockerfileName == "-" || options.contextPath == "-") {
// stdin must be usable for monitor
return nil, nil, errors.Errorf("Dockerfile or context from stdin is not supported with invoke")
}
c, err := controller.NewController(ctx, options.ControlOptions, dockerCli, printer)
if err != nil {
return nil, nil, err
}
defer func() {
if err := c.Close(); err != nil {
logrus.Warnf("failed to close server connection %v", err)
}
}()
// NOTE: the buildx server's working directory differs from the client's,
// so we need to resolve paths to absolute ones in the client.
opts, err = controllerapi.ResolveOptionPaths(opts)
if err != nil {
return nil, nil, err
}
var ref string
var retErr error
var resp *client.SolveResponse
var inputs *build.Inputs
var f *ioset.SingleForwarder
var pr io.ReadCloser
var pw io.WriteCloser
var (
in io.ReadCloser
m *monitor.Monitor
bh build.Handler
)
if options.invokeConfig == nil {
pr = dockerCli.In()
in = dockerCli.In()
} else {
f = ioset.NewSingleForwarder()
f.SetReader(dockerCli.In())
pr, pw = io.Pipe()
f.SetWriter(pw, func() io.WriteCloser {
pw.Close() // propagate EOF
logrus.Debug("propagating stdin close")
return nil
})
m = monitor.New(&options.invokeConfig.InvokeConfig, dockerCli.In(), os.Stdout, os.Stderr, printer)
defer m.Close()
bh = m.Handler()
}
ref, resp, inputs, err = c.Build(ctx, opts, pr, printer)
if err != nil {
var be *controllererrors.BuildError
if errors.As(err, &be) {
ref = be.SessionID
retErr = err
// We can proceed to monitor
} else {
for {
resp, inputs, err := RunBuild(ctx, dockerCli, opts, in, printer, &bh)
if err != nil {
if errors.Is(err, build.ErrRestart) {
retErr = nil
continue
}
return nil, nil, errors.Wrapf(err, "failed to build")
}
return resp, inputs, err
}
if options.invokeConfig != nil {
if err := pw.Close(); err != nil {
logrus.Debug("failed to close stdin pipe writer")
}
if err := pr.Close(); err != nil {
logrus.Debug("failed to close stdin pipe reader")
}
}
if options.invokeConfig != nil && options.invokeConfig.needsDebug(retErr) {
// Print errors before launching monitor
if err := printError(retErr, printer); err != nil {
logrus.Warnf("failed to print error information: %v", err)
}
pr2, pw2 := io.Pipe()
f.SetWriter(pw2, func() io.WriteCloser {
pw2.Close() // propagate EOF
return nil
})
monitorBuildResult, err := options.invokeConfig.runDebug(ctx, ref, opts, c, pr2, os.Stdout, os.Stderr, printer)
if err := pw2.Close(); err != nil {
logrus.Debug("failed to close monitor stdin pipe reader")
}
if err != nil {
logrus.Warnf("failed to run monitor: %v", err)
}
if monitorBuildResult != nil {
// Update return values with the last build result from monitor
resp, retErr = monitorBuildResult.Resp, monitorBuildResult.Err
}
} else {
if err := c.Disconnect(ctx, ref); err != nil {
logrus.Warnf("disconnect error: %v", err)
}
}
return resp, inputs, retErr
}
func printError(err error, printer *progress.Printer) error {
if err == nil {
return nil
}
if err := printer.Pause(); err != nil {
return err
}
defer printer.Unpause()
for _, s := range errdefs.Sources(err) {
s.Print(os.Stderr)
}
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return nil
}
func newDebuggableBuild(dockerCli command.Cli, rootOpts *rootOptions) debug.DebuggableCmd {
return &debuggableBuild{dockerCli: dockerCli, rootOpts: rootOpts}
}
type debuggableBuild struct {
dockerCli command.Cli
rootOpts *rootOptions
}
func (b *debuggableBuild) NewDebugger(cfg *debug.DebugConfig) *cobra.Command {
return buildCmd(b.dockerCli, b.rootOpts, cfg)
}
func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.DebugConfig) *cobra.Command {
func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debugOptions) *cobra.Command {
cFlags := &commonFlags{}
options := &buildOptions{}
@ -649,14 +542,6 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugConfig *debug.D
flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--attest=type=sbom"`)
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--attest=type=provenance"`)
if confutil.IsExperimental() {
// TODO: move this to debug command if needed
flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect")
flags.BoolVar(&options.Detach, "detach", false, "Detach buildx server (supported only on linux)")
flags.StringVar(&options.ServerConfig, "server-config", "", "Specify buildx server config file (used only when launching new server)")
cobrautil.MarkFlagsExperimental(flags, "root", "detach", "server-config")
}
flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
flags.Lookup("check").NoOptDefVal = "true"
@ -842,21 +727,6 @@ func listToMap(values []string, defaultEnv bool) (map[string]string, error) {
return result, nil
}
func dockerUlimitToControllerUlimit(u *dockeropts.UlimitOpt) *controllerapi.UlimitOpt {
if u == nil {
return nil
}
values := make(map[string]*controllerapi.Ulimit)
for _, u := range u.GetList() {
values[u.Name] = &controllerapi.Ulimit{
Name: u.Name,
Hard: u.Hard,
Soft: u.Soft,
}
}
return &controllerapi.UlimitOpt{Values: values}
}
func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui.DisplayMode) {
if len(warnings) == 0 || mode == progressui.QuietMode || mode == progressui.RawJSONMode {
return
@ -896,7 +766,7 @@ func printWarnings(w io.Writer, warnings []client.VertexWarning, mode progressui
}
}
func printResult(w io.Writer, f *controllerapi.CallFunc, res map[string]string, target string, inp *build.Inputs) (int, error) {
func printResult(w io.Writer, f *buildflags.CallFunc, res map[string]string, target string, inp *build.Inputs) (int, error) {
switch f.Name {
case "outline":
return 0, printValue(w, outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
@ -997,37 +867,22 @@ func printValue(w io.Writer, printer callFunc, version string, format string, re
}
type invokeConfig struct {
controllerapi.InvokeConfig
onFlag string
build.InvokeConfig
invokeFlag string
}
func (cfg *invokeConfig) needsDebug(retErr error) bool {
switch cfg.onFlag {
case "always":
return true
case "error":
return retErr != nil
default:
return cfg.invokeFlag != ""
}
}
func (cfg *invokeConfig) runDebug(ctx context.Context, ref string, options *controllerapi.BuildOptions, c control.BuildxController, stdin io.ReadCloser, stdout io.WriteCloser, stderr console.File, progress *progress.Printer) (*monitor.MonitorBuildResult, error) {
con := console.Current()
if err := con.SetRaw(); err != nil {
// TODO: run disconnect in build command (on error case)
if err := c.Disconnect(ctx, ref); err != nil {
logrus.Warnf("disconnect error: %v", err)
}
return nil, errors.Errorf("failed to configure terminal: %v", err)
}
defer con.Reset()
return monitor.RunMonitor(ctx, ref, options, &cfg.InvokeConfig, c, stdin, stdout, stderr, progress)
}
func (cfg *invokeConfig) parseInvokeConfig(invoke, on string) error {
cfg.onFlag = on
switch on {
case "always":
cfg.SuspendOn = build.SuspendAlways
case "error":
cfg.SuspendOn = build.SuspendError
default:
if invoke != "" {
cfg.SuspendOn = build.SuspendAlways
}
}
cfg.invokeFlag = invoke
cfg.Tty = true
cfg.NoCmd = true
@ -1149,3 +1004,209 @@ func otelErrorType(err error) string {
}
return name
}
const defaultTargetName = "default"
type BuildOptions struct {
ContextPath string
DockerfileName string
CallFunc *buildflags.CallFunc
NamedContexts map[string]string
Allow []string
Attests buildflags.Attests
BuildArgs map[string]string
CacheFrom []*buildflags.CacheOptionsEntry
CacheTo []*buildflags.CacheOptionsEntry
CgroupParent string
Exports []*buildflags.ExportEntry
ExtraHosts []string
Labels map[string]string
NetworkMode string
NoCacheFilter []string
Platforms []string
Secrets buildflags.Secrets
ShmSize int64
SSH []*buildflags.SSH
Tags []string
Target string
Ulimits *dockeropts.UlimitOpt
Builder string
NoCache bool
Pull bool
ExportPush bool
ExportLoad bool
SourcePolicy *sourcepolicy.Policy
Ref string
GroupRef string
Annotations []string
ProvenanceResponseMode string
}
// RunBuild runs the specified build and returns the result.
func RunBuild(ctx context.Context, dockerCli command.Cli, in *BuildOptions, inStream io.Reader, progress progress.Writer, bh *build.Handler) (*client.SolveResponse, *build.Inputs, error) {
if in.NoCache && len(in.NoCacheFilter) > 0 {
return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
}
contexts := map[string]build.NamedContext{}
for name, path := range in.NamedContexts {
contexts[name] = build.NamedContext{Path: path}
}
opts := build.Options{
Inputs: build.Inputs{
ContextPath: in.ContextPath,
DockerfilePath: in.DockerfileName,
InStream: build.NewSyncMultiReader(inStream),
NamedContexts: contexts,
},
Ref: in.Ref,
BuildArgs: in.BuildArgs,
CgroupParent: in.CgroupParent,
ExtraHosts: in.ExtraHosts,
Labels: in.Labels,
NetworkMode: in.NetworkMode,
NoCache: in.NoCache,
NoCacheFilter: in.NoCacheFilter,
Pull: in.Pull,
ShmSize: dockeropts.MemBytes(in.ShmSize),
Tags: in.Tags,
Target: in.Target,
Ulimits: in.Ulimits,
GroupRef: in.GroupRef,
ProvenanceResponseMode: confutil.ParseMetadataProvenance(in.ProvenanceResponseMode),
}
platforms, err := platformutil.Parse(in.Platforms)
if err != nil {
return nil, nil, err
}
opts.Platforms = platforms
dockerConfig := dockerCli.ConfigFile()
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(authprovider.DockerAuthProviderConfig{
ConfigFile: dockerConfig,
}))
secrets, err := build.CreateSecrets(in.Secrets)
if err != nil {
return nil, nil, err
}
opts.Session = append(opts.Session, secrets)
sshSpecs := in.SSH
if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.ContextPath) {
sshSpecs = append(sshSpecs, &buildflags.SSH{ID: "default"})
}
ssh, err := build.CreateSSH(sshSpecs)
if err != nil {
return nil, nil, err
}
opts.Session = append(opts.Session, ssh)
outputs, _, err := build.CreateExports(in.Exports)
if err != nil {
return nil, nil, err
}
if in.ExportPush {
var pushUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterImage {
outputs[i].Attrs["push"] = "true"
pushUsed = true
}
}
if !pushUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterImage,
Attrs: map[string]string{
"push": "true",
},
})
}
}
if in.ExportLoad {
var loadUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterDocker {
if _, ok := outputs[i].Attrs["dest"]; !ok {
loadUsed = true
break
}
}
}
if !loadUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterDocker,
Attrs: map[string]string{},
})
}
}
annotations, err := buildflags.ParseAnnotations(in.Annotations)
if err != nil {
return nil, nil, errors.Wrap(err, "parse annotations")
}
for _, o := range outputs {
for k, v := range annotations {
o.Attrs[k.String()] = v
}
}
opts.Exports = outputs
opts.CacheFrom = build.CreateCaches(in.CacheFrom)
opts.CacheTo = build.CreateCaches(in.CacheTo)
opts.Attests = in.Attests.ToMap()
opts.SourcePolicy = in.SourcePolicy
allow, err := buildflags.ParseEntitlements(in.Allow)
if err != nil {
return nil, nil, err
}
opts.Allow = allow
if in.CallFunc != nil {
opts.CallFunc = &build.CallFunc{
Name: in.CallFunc.Name,
Format: in.CallFunc.Format,
IgnoreStatus: in.CallFunc.IgnoreStatus,
}
}
// key string used for kubernetes "sticky" mode
contextPathHash, err := filepath.Abs(in.ContextPath)
if err != nil {
contextPathHash = in.ContextPath
}
b, err := builder.New(dockerCli,
builder.WithName(in.Builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return nil, nil, err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return nil, nil, err
}
var inputs *build.Inputs
buildOptions := map[string]build.Options{defaultTargetName: opts}
resp, err := build.BuildWithResultHandler(ctx, nodes, buildOptions, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress, bh)
err = wrapBuildError(err, false)
if err != nil {
return nil, nil, err
}
if i, ok := buildOptions[defaultTargetName]; ok {
inputs = &i.Inputs
}
return resp[defaultTargetName], inputs, nil
}

commands/debug.go Normal file

@ -0,0 +1,34 @@
package commands
import (
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/cli/cli/command"
"github.com/spf13/cobra"
)
type debugOptions struct {
// InvokeFlag is a flag to configure the launched debugger and the command executed on the debugger.
InvokeFlag string
// OnFlag is a flag to configure the timing of launching the debugger.
OnFlag string
}
func debugCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options debugOptions
cmd := &cobra.Command{
Use: "debug",
Short: "Start debugger",
}
cobrautil.MarkCommandExperimental(cmd)
flags := cmd.Flags()
flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
cobrautil.MarkFlagsExperimental(flags, "invoke", "on")
cmd.AddCommand(buildCmd(dockerCli, rootOpts, &options))
return cmd
}


@ -1,92 +0,0 @@
package debug
import (
"context"
"os"
"runtime"
"github.com/containerd/console"
"github.com/docker/buildx/controller"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
// DebugConfig is a user-specified configuration for the debugger.
type DebugConfig struct {
// InvokeFlag is a flag to configure the launched debugger and the command executed on the debugger.
InvokeFlag string
// OnFlag is a flag to configure the timing of launching the debugger.
OnFlag string
}
// DebuggableCmd is a command that supports debugger with recognizing the user-specified DebugConfig.
type DebuggableCmd interface {
// NewDebugger returns the new *cobra.Command with support for the debugger with recognizing DebugConfig.
NewDebugger(*DebugConfig) *cobra.Command
}
func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
var controlOptions control.ControlOptions
var progressMode string
var options DebugConfig
cmd := &cobra.Command{
Use: "debug",
Short: "Start debugger",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.DisplayMode(progressMode))
if err != nil {
return err
}
ctx := context.TODO()
c, err := controller.NewController(ctx, controlOptions, dockerCli, printer)
if err != nil {
return err
}
defer func() {
if err := c.Close(); err != nil {
logrus.Warnf("failed to close server connection %v", err)
}
}()
con := console.Current()
if err := con.SetRaw(); err != nil {
return errors.Errorf("failed to configure terminal: %v", err)
}
_, err = monitor.RunMonitor(ctx, "", nil, &controllerapi.InvokeConfig{
Tty: true,
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
con.Reset()
return err
},
}
cobrautil.MarkCommandExperimental(cmd)
flags := cmd.Flags()
flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson") for the monitor. Use plain to show container output`)
cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")
for _, c := range children {
cmd.AddCommand(c.NewDebugger(&options))
}
return cmd
}


@ -12,7 +12,7 @@ import (
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/progress/progressui"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
@ -49,7 +49,7 @@ func runDialStdio(dockerCli command.Cli, opts stdioOptions) error {
return err
}
var p *v1.Platform
var p *ocispecs.Platform
if opts.platform != "" {
pp, err := platforms.Parse(opts.platform)
if err != nil {


@ -20,10 +20,11 @@ import (
)
type exportOptions struct {
builder string
refs []string
output string
all bool
builder string
refs []string
output string
all bool
finalize bool
}
func runExport(ctx context.Context, dockerCli command.Cli, opts exportOptions) error {
@ -62,6 +63,26 @@ func runExport(ctx context.Context, dockerCli command.Cli, opts exportOptions) e
return errors.Errorf("no record found for ref %q", ref)
}
if opts.finalize {
var finalized bool
for _, rec := range recs {
if rec.Trace == nil {
finalized = true
if err := finalizeRecord(ctx, rec.Ref, nodes); err != nil {
return err
}
}
}
if finalized {
recs, err = queryRecords(ctx, ref, nodes, &queryOptions{
CompletedOnly: true,
})
if err != nil {
return err
}
}
}
if ref == "" {
slices.SortFunc(recs, func(a, b historyRecord) int {
return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
@ -139,8 +160,8 @@ func exportCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options exportOptions
cmd := &cobra.Command{
Use: "export [OPTIONS] [REF]",
Short: "Export a build into Docker Desktop bundle",
Use: "export [OPTIONS] [REF...]",
Short: "Export build records into Docker Desktop bundle",
RunE: func(cmd *cobra.Command, args []string) error {
if options.all && len(args) > 0 {
return errors.New("cannot specify refs when using --all")
@ -154,7 +175,8 @@ func exportCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
flags := cmd.Flags()
flags.StringVarP(&options.output, "output", "o", "", "Output file path")
flags.BoolVar(&options.all, "all", false, "Export all records for the builder")
flags.BoolVar(&options.all, "all", false, "Export all build records for the builder")
flags.BoolVar(&options.finalize, "finalize", false, "Ensure build records are finalized before exporting")
return cmd
}


@ -119,8 +119,8 @@ func importCmd(dockerCli command.Cli, _ RootOptions) *cobra.Command {
var options importOptions
cmd := &cobra.Command{
Use: "import [OPTIONS] < bundle.dockerbuild",
Short: "Import a build into Docker Desktop",
Use: "import [OPTIONS] -",
Short: "Import build records into Docker Desktop",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
return runImport(cmd.Context(), dockerCli, options)


@ -335,9 +335,9 @@ workers0:
out.Error.Sources = errsources.Bytes()
var ve *errdefs.VertexError
if errors.As(retErr, &ve) {
dgst, err := digest.Parse(ve.Vertex.Digest)
dgst, err := digest.Parse(ve.Digest)
if err != nil {
return errors.Wrapf(err, "failed to parse vertex digest %s", ve.Vertex.Digest)
return errors.Wrapf(err, "failed to parse vertex digest %s", ve.Digest)
}
name, logs, err := loadVertexLogs(ctx, c, rec.Ref, dgst, 16)
if err != nil {
@ -426,23 +426,32 @@ workers0:
}
provIndex := slices.IndexFunc(attachments, func(a attachment) bool {
return descrType(a.descr) == slsa02.PredicateSLSAProvenance
return strings.HasPrefix(descrType(a.descr), "https://slsa.dev/provenance/")
})
if provIndex != -1 {
prov := attachments[provIndex]
predType := descrType(prov.descr)
dt, err := content.ReadBlob(ctx, store, prov.descr)
if err != nil {
return errors.Errorf("failed to read provenance %s: %v", prov.descr.Digest, err)
}
var pred provenancetypes.ProvenancePredicate
if err := json.Unmarshal(dt, &pred); err != nil {
var pred *provenancetypes.ProvenancePredicateSLSA1
if predType == slsa02.PredicateSLSAProvenance {
var pred02 *provenancetypes.ProvenancePredicateSLSA02
if err := json.Unmarshal(dt, &pred02); err != nil {
return errors.Errorf("failed to unmarshal provenance %s: %v", prov.descr.Digest, err)
}
pred = pred02.ConvertToSLSA1()
} else if err := json.Unmarshal(dt, &pred); err != nil {
return errors.Errorf("failed to unmarshal provenance %s: %v", prov.descr.Digest, err)
}
for _, m := range pred.Materials {
out.Materials = append(out.Materials, materialOutput{
URI: m.URI,
Digests: digestSetToDigests(m.Digest),
})
if pred != nil {
for _, m := range pred.BuildDefinition.ResolvedDependencies {
out.Materials = append(out.Materials, materialOutput{
URI: m.URI,
Digests: digestSetToDigests(m.Digest),
})
}
}
}
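The hunk above teaches the inspector to accept both SLSA v0.2 and v1 provenance, converting v0.2 predicates to the v1 shape so the rest of the code only reads `BuildDefinition.ResolvedDependencies`. A minimal standalone sketch of that dispatch, using hypothetical stand-in structs (`predicateV02`, `predicateV1`, `convertToV1`) in place of the buildx `provenancetypes` definitions, which model only the fields this hunk touches:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical minimal predicate shapes; only the fields the hunk reads are modeled.
type materialV02 struct {
	URI string `json:"uri"`
}
type predicateV02 struct {
	Materials []materialV02 `json:"materials"`
}
type dependencyV1 struct {
	URI string `json:"uri"`
}
type predicateV1 struct {
	BuildDefinition struct {
		ResolvedDependencies []dependencyV1 `json:"resolvedDependencies"`
	} `json:"buildDefinition"`
}

// convertToV1 mirrors the role of ConvertToSLSA1: lift v0.2 materials into
// the v1 resolvedDependencies slot so callers handle a single shape.
func convertToV1(p predicateV02) predicateV1 {
	var out predicateV1
	for _, m := range p.Materials {
		out.BuildDefinition.ResolvedDependencies = append(
			out.BuildDefinition.ResolvedDependencies, dependencyV1{URI: m.URI})
	}
	return out
}

func main() {
	predType := "https://slsa.dev/provenance/v0.2" // e.g. from the in-toto predicate-type annotation
	raw := []byte(`{"materials":[{"uri":"pkg:docker/alpine"}]}`)

	var pred predicateV1
	if predType == "https://slsa.dev/provenance/v0.2" {
		var p02 predicateV02
		if err := json.Unmarshal(raw, &p02); err != nil {
			panic(err)
		}
		pred = convertToV1(p02)
	} else if err := json.Unmarshal(raw, &pred); err != nil {
		panic(err)
	}
	fmt.Println(pred.BuildDefinition.ResolvedDependencies[0].URI) // pkg:docker/alpine
}
```

Matching on the predicate-type prefix (`strings.HasPrefix(..., "https://slsa.dev/provenance/")`) rather than one exact URI is what lets both versions through.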
@ -525,9 +534,10 @@ workers0:
}
fmt.Fprintf(tw, "Duration:\t%s%s\n", formatDuration(out.Duration), statusStr)
if out.Status == statusError {
switch out.Status {
case statusError:
fmt.Fprintf(tw, "Error:\t%s %s\n", codes.Code(rec.Error.Code).String(), rec.Error.Message)
} else if out.Status == statusCanceled {
case statusCanceled:
fmt.Fprintf(tw, "Status:\tCanceled\n")
}
@ -648,7 +658,7 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "inspect [OPTIONS] [REF]",
Short: "Inspect a build",
Short: "Inspect a build record",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
@ -835,6 +845,7 @@ func ociDesc(in *controlapi.Descriptor) ocispecs.Descriptor {
Annotations: in.Annotations,
}
}
func descrType(desc ocispecs.Descriptor) string {
if typ, ok := desc.Annotations["in-toto.io/predicate-type"]; ok {
return typ
@ -868,9 +879,9 @@ func printTable(w io.Writer, kvs []keyValueOutput, title string) {
func readKeyValues(attrs map[string]string, prefix string) []keyValueOutput {
var out []keyValueOutput
for k, v := range attrs {
if strings.HasPrefix(k, prefix) {
if name, ok := strings.CutPrefix(k, prefix); ok {
out = append(out, keyValueOutput{
Name: strings.TrimPrefix(k, prefix),
Name: name,
Value: v,
})
}

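The `readKeyValues` change above replaces a `strings.HasPrefix` check followed by `strings.TrimPrefix` with a single `strings.CutPrefix` call (Go 1.20+), which tests for and strips the prefix in one step. A minimal sketch of the pattern:

```go
package main

import (
	"fmt"
	"strings"
)

// filterByPrefix mirrors readKeyValues: keep only keys carrying the prefix,
// with the prefix stripped. strings.CutPrefix returns the remainder and
// whether the prefix was present, avoiding a second scan via TrimPrefix.
func filterByPrefix(attrs map[string]string, prefix string) map[string]string {
	out := map[string]string{}
	for k, v := range attrs {
		if name, ok := strings.CutPrefix(k, prefix); ok {
			out[name] = v
		}
	}
	return out
}

func main() {
	attrs := map[string]string{"label:stage": "final", "other": "x"}
	fmt.Println(filterByPrefix(attrs, "label:")) // map[stage:final]
}
```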

@ -11,6 +11,7 @@ import (
"github.com/docker/cli/cli/command"
intoto "github.com/in-toto/in-toto-golang/in_toto"
slsa02 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v0.2"
slsa1 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v1"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@ -76,25 +77,30 @@ func runAttachment(ctx context.Context, dockerCli command.Cli, opts attachmentOp
return err
}
typ := opts.typ
switch typ {
types := make(map[string]struct{})
switch opts.typ {
case "index":
typ = ocispecs.MediaTypeImageIndex
types[ocispecs.MediaTypeImageIndex] = struct{}{}
case "manifest":
typ = ocispecs.MediaTypeImageManifest
types[ocispecs.MediaTypeImageManifest] = struct{}{}
case "image":
typ = ocispecs.MediaTypeImageConfig
types[ocispecs.MediaTypeImageConfig] = struct{}{}
case "provenance":
typ = slsa02.PredicateSLSAProvenance
types[slsa1.PredicateSLSAProvenance] = struct{}{}
types[slsa02.PredicateSLSAProvenance] = struct{}{}
case "sbom":
typ = intoto.PredicateSPDX
types[intoto.PredicateSPDX] = struct{}{}
default:
if opts.typ != "" {
types[opts.typ] = struct{}{}
}
}
for _, a := range attachments {
if opts.platform != "" && (a.platform == nil || platforms.FormatAll(*a.platform) != opts.platform) {
continue
}
if typ != "" && descrType(a.descr) != typ {
if _, ok := types[descrType(a.descr)]; opts.typ != "" && !ok {
continue
}
ra, err := store.ReaderAt(ctx, a.descr)
@ -112,9 +118,9 @@ func attachmentCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options attachmentOptions
cmd := &cobra.Command{
Use: "attachment [OPTIONS] REF [DIGEST]",
Short: "Inspect a build attachment",
Args: cobra.RangeArgs(1, 2),
Use: "attachment [OPTIONS] [REF [DIGEST]]",
Short: "Inspect a build record attachment",
Args: cobra.MaximumNArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]


@ -63,7 +63,7 @@ func runLogs(ctx context.Context, dockerCli command.Cli, opts logsOptions) error
return err
}
var mode progressui.DisplayMode = progressui.DisplayMode(opts.progress)
mode := progressui.DisplayMode(opts.progress)
if mode == progressui.AutoMode {
mode = progressui.PlainMode
}
@ -98,7 +98,7 @@ func logsCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "logs [OPTIONS] [REF]",
Short: "Print the logs of a build",
Short: "Print the logs of a build record",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {


@ -107,7 +107,7 @@ func lsCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options lsOptions
cmd := &cobra.Command{
Use: "ls",
Use: "ls [OPTIONS]",
Short: "List build records",
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {


@ -57,7 +57,7 @@ func openCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "open [OPTIONS] [REF]",
Short: "Open a build in Docker Desktop",
Short: "Open a build record in Docker Desktop",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {


@ -43,7 +43,6 @@ func runRm(ctx context.Context, dockerCli command.Cli, opts rmOptions) error {
eg, ctx := errgroup.WithContext(ctx)
for i, node := range nodes {
node := node
eg.Go(func() error {
if node.Driver == nil {
return nil


@ -17,7 +17,6 @@ import (
"github.com/docker/buildx/util/otelutil"
"github.com/docker/buildx/util/otelutil/jaeger"
"github.com/docker/cli/cli/command"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/browser"
@ -57,14 +56,7 @@ func loadTrace(ctx context.Context, ref string, nodes []builder.Node) (string, [
// build is complete but no trace yet. try to finalize the trace
time.Sleep(1 * time.Second) // give some extra time for last parts of trace to be written
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return "", nil, err
}
_, err = c.ControlClient().UpdateBuildHistory(ctx, &controlapi.UpdateBuildHistoryRequest{
Ref: rec.Ref,
Finalize: true,
})
err := finalizeRecord(ctx, rec.Ref, []builder.Node{*rec.node})
if err != nil {
return "", nil, err
}
@ -222,7 +214,7 @@ func traceCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
flags := cmd.Flags()
flags.StringVar(&options.addr, "addr", "127.0.0.1:0", "Address to bind the UI server")
flags.StringVar(&options.compare, "compare", "", "Compare with another build reference")
flags.StringVar(&options.compare, "compare", "", "Compare with another build record")
return cmd
}


@ -139,7 +139,6 @@ func queryRecords(ctx context.Context, ref string, nodes []builder.Node, opts *q
eg, ctx := errgroup.WithContext(ctx)
for _, node := range nodes {
node := node
eg.Go(func() error {
if node.Driver == nil {
return nil
@ -248,6 +247,27 @@ func queryRecords(ctx context.Context, ref string, nodes []builder.Node, opts *q
return out, nil
}
func finalizeRecord(ctx context.Context, ref string, nodes []builder.Node) error {
eg, ctx := errgroup.WithContext(ctx)
for _, node := range nodes {
eg.Go(func() error {
if node.Driver == nil {
return nil
}
c, err := node.Driver.Client(ctx)
if err != nil {
return err
}
_, err = c.ControlClient().UpdateBuildHistory(ctx, &controlapi.UpdateBuildHistoryRequest{
Ref: ref,
Finalize: true,
})
return err
})
}
return eg.Wait()
}
func formatDuration(d time.Duration) string {
if d < time.Minute {
return fmt.Sprintf("%.1fs", d.Seconds())


@ -16,7 +16,7 @@ import (
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
@ -184,7 +184,6 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
pw := progress.WithPrefix(printer, "internal", true)
for _, t := range tags {
t := t
eg.Go(func() error {
return progress.Wrap(fmt.Sprintf("pushing %s", t.String()), pw.Write, func(sub progress.SubLogger) error {
eg2, _ := errgroup.WithContext(ctx)
@ -246,7 +245,7 @@ func parseSource(in string) (*imagetools.Source, error) {
dgst, err := digest.Parse(in)
if err == nil {
return &imagetools.Source{
Desc: ocispec.Descriptor{
Desc: ocispecs.Descriptor{
Digest: dgst,
},
}, nil
@ -274,7 +273,7 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
var options createOptions
cmd := &cobra.Command{
Use: "create [OPTIONS] [SOURCE] [SOURCE...]",
Use: "create [OPTIONS] [SOURCE...]",
Short: "Create a new image based on source images",
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *opts.Builder
@ -295,9 +294,9 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
return cmd
}
func mergeDesc(d1, d2 ocispec.Descriptor) (ocispec.Descriptor, error) {
func mergeDesc(d1, d2 ocispecs.Descriptor) (ocispecs.Descriptor, error) {
if d2.Size != 0 && d1.Size != d2.Size {
return ocispec.Descriptor{}, errors.Errorf("invalid size mismatch for %s, %d != %d", d1.Digest, d2.Size, d1.Size)
return ocispecs.Descriptor{}, errors.Errorf("invalid size mismatch for %s, %d != %d", d1.Digest, d2.Size, d1.Size)
}
if d2.MediaType != "" {
d1.MediaType = d2.MediaType


@ -36,7 +36,7 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
}
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
@ -54,8 +54,8 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "Name:\t%s\n", b.Name)
fmt.Fprintf(w, "Driver:\t%s\n", b.Driver)
if !b.NodeGroup.LastActivity.IsZero() {
fmt.Fprintf(w, "Last Activity:\t%v\n", b.NodeGroup.LastActivity)
if !b.LastActivity.IsZero() {
fmt.Fprintf(w, "Last Activity:\t%v\n", b.LastActivity)
}
if err != nil {


@ -60,7 +60,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
}
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx)
@ -213,7 +213,17 @@ type lsContext struct {
}
func (c *lsContext) MarshalJSON() ([]byte, error) {
return json.Marshal(c.Builder)
// can't marshal c.Builder directly because Builder type has custom MarshalJSON
dt, err := json.Marshal(c.Builder.Builder)
if err != nil {
return nil, err
}
var m map[string]any
if err := json.Unmarshal(dt, &m); err != nil {
return nil, err
}
m["Current"] = c.Builder.Current
return json.Marshal(m)
}
func (c *lsContext) Name() string {

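The `lsContext.MarshalJSON` rewrite above works around the embedded `Builder` type's custom `MarshalJSON`, which would otherwise swallow the extra `Current` field: marshal the inner value, expand it into a generic map, add the field, and re-marshal. A self-contained sketch of the pattern with a hypothetical `builder` type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// builder has its own MarshalJSON, so embedding it and tagging an extra
// field would not work — the custom method wins and drops the field.
type builder struct{ Name string }

func (b builder) MarshalJSON() ([]byte, error) {
	return json.Marshal(map[string]string{"Name": b.Name})
}

// marshalWithCurrent mirrors the lsContext hunk: marshal, unmarshal into a
// map, augment, and marshal again so the custom encoding is preserved.
func marshalWithCurrent(b builder, current bool) ([]byte, error) {
	dt, err := json.Marshal(b)
	if err != nil {
		return nil, err
	}
	var m map[string]any
	if err := json.Unmarshal(dt, &m); err != nil {
		return nil, err
	}
	m["Current"] = current
	return json.Marshal(m)
}

func main() {
	dt, _ := marshalWithCurrent(builder{Name: "mybuilder"}, true)
	fmt.Println(string(dt)) // {"Current":true,"Name":"mybuilder"}
}
```

The round trip costs an extra encode/decode, but avoids hand-maintaining a parallel struct that shadows the inner type's fields.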

@ -164,7 +164,6 @@ func TestTruncPlatforms(t *testing.T) {
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
tpfs := truncPlatforms(tt.platforms, tt.max)
assert.Equal(t, tt.expectedList, tpfs.List())


@ -182,7 +182,7 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
flags.Var(&options.reservedSpace, "keep-storage", "Amount of disk space to keep for cache")
flags.MarkDeprecated("keep-storage", "keep-storage flag has been changed to max-storage")
flags.MarkDeprecated("keep-storage", "keep-storage flag has been changed to reserved-space")
return cmd
}


@ -99,7 +99,7 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options rmOptions
cmd := &cobra.Command{
Use: "rm [OPTIONS] [NAME] [NAME...]",
Use: "rm [OPTIONS] [NAME...]",
Short: "Remove one or more builder instances",
RunE: func(cmd *cobra.Command, args []string) error {
options.builders = []string{rootOpts.builder}
@ -151,7 +151,7 @@ func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, i
}
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx)


@ -4,10 +4,8 @@ import (
"fmt"
"os"
debugcmd "github.com/docker/buildx/commands/debug"
historycmd "github.com/docker/buildx/commands/history"
imagetoolscmd "github.com/docker/buildx/commands/imagetools"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/logutil"
@ -121,10 +119,7 @@ func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) {
historycmd.RootCmd(cmd, dockerCli, historycmd.RootOptions{Builder: &opts.builder}),
)
if confutil.IsExperimental() {
cmd.AddCommand(debugcmd.RootCmd(dockerCli,
newDebuggableBuild(dockerCli, opts),
))
remote.AddControllerCommands(cmd, dockerCli)
cmd.AddCommand(debugCmd(dockerCli, opts))
}
cmd.RegisterFlagCompletionFunc( //nolint:errcheck


@ -1,288 +0,0 @@
package build
import (
"context"
"io"
"path/filepath"
"strings"
"sync"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/docker/api/types/container"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
)
const defaultTargetName = "default"
// RunBuild runs the specified build and returns the result.
//
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func RunBuild(ctx context.Context, dockerCli command.Cli, in *controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, *build.Inputs, error) {
if in.NoCache && len(in.NoCacheFilter) > 0 {
return nil, nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
}
contexts := map[string]build.NamedContext{}
for name, path := range in.NamedContexts {
contexts[name] = build.NamedContext{Path: path}
}
opts := build.Options{
Inputs: build.Inputs{
ContextPath: in.ContextPath,
DockerfilePath: in.DockerfileName,
InStream: build.NewSyncMultiReader(inStream),
NamedContexts: contexts,
},
Ref: in.Ref,
BuildArgs: in.BuildArgs,
CgroupParent: in.CgroupParent,
ExtraHosts: in.ExtraHosts,
Labels: in.Labels,
NetworkMode: in.NetworkMode,
NoCache: in.NoCache,
NoCacheFilter: in.NoCacheFilter,
Pull: in.Pull,
ShmSize: dockeropts.MemBytes(in.ShmSize),
Tags: in.Tags,
Target: in.Target,
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
GroupRef: in.GroupRef,
ProvenanceResponseMode: confutil.ParseMetadataProvenance(in.ProvenanceResponseMode),
}
platforms, err := platformutil.Parse(in.Platforms)
if err != nil {
return nil, nil, nil, err
}
opts.Platforms = platforms
dockerConfig := dockerCli.ConfigFile()
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(authprovider.DockerAuthProviderConfig{
ConfigFile: dockerConfig,
}))
secrets, err := controllerapi.CreateSecrets(in.Secrets)
if err != nil {
return nil, nil, nil, err
}
opts.Session = append(opts.Session, secrets)
sshSpecs := in.SSH
if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.ContextPath) {
sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
}
ssh, err := controllerapi.CreateSSH(sshSpecs)
if err != nil {
return nil, nil, nil, err
}
opts.Session = append(opts.Session, ssh)
outputs, _, err := controllerapi.CreateExports(in.Exports)
if err != nil {
return nil, nil, nil, err
}
if in.ExportPush {
var pushUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterImage {
outputs[i].Attrs["push"] = "true"
pushUsed = true
}
}
if !pushUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterImage,
Attrs: map[string]string{
"push": "true",
},
})
}
}
if in.ExportLoad {
var loadUsed bool
for i := range outputs {
if outputs[i].Type == client.ExporterDocker {
if _, ok := outputs[i].Attrs["dest"]; !ok {
loadUsed = true
break
}
}
}
if !loadUsed {
outputs = append(outputs, client.ExportEntry{
Type: client.ExporterDocker,
Attrs: map[string]string{},
})
}
}
annotations, err := buildflags.ParseAnnotations(in.Annotations)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parse annotations")
}
for _, o := range outputs {
for k, v := range annotations {
o.Attrs[k.String()] = v
}
}
opts.Exports = outputs
opts.CacheFrom = controllerapi.CreateCaches(in.CacheFrom)
opts.CacheTo = controllerapi.CreateCaches(in.CacheTo)
opts.Attests = controllerapi.CreateAttestations(in.Attests)
opts.SourcePolicy = in.SourcePolicy
allow, err := buildflags.ParseEntitlements(in.Allow)
if err != nil {
return nil, nil, nil, err
}
opts.Allow = allow
if in.CallFunc != nil {
opts.CallFunc = &build.CallFunc{
Name: in.CallFunc.Name,
Format: in.CallFunc.Format,
IgnoreStatus: in.CallFunc.IgnoreStatus,
}
}
// key string used for kubernetes "sticky" mode
contextPathHash, err := filepath.Abs(in.ContextPath)
if err != nil {
contextPathHash = in.ContextPath
}
// TODO: this should not be loaded this side of the controller api
b, err := builder.New(dockerCli,
builder.WithName(in.Builder),
builder.WithContextPathHash(contextPathHash),
)
if err != nil {
return nil, nil, nil, err
}
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
return nil, nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
}
nodes, err := b.LoadNodes(ctx)
if err != nil {
return nil, nil, nil, err
}
var inputs *build.Inputs
buildOptions := map[string]build.Options{defaultTargetName: opts}
resp, res, err := buildTargets(ctx, dockerCli, nodes, buildOptions, progress, generateResult)
err = wrapBuildError(err, false)
if err != nil {
// NOTE: buildTargets can return *build.ResultHandle even on error.
return nil, res, nil, err
}
if i, ok := buildOptions[defaultTargetName]; ok {
inputs = &i.Inputs
}
return resp, res, inputs, nil
}
// buildTargets runs the specified build and returns the result.
//
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
// inspect the result and debug the cause of that error.
func buildTargets(ctx context.Context, dockerCli command.Cli, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
var res *build.ResultHandle
var resp map[string]*client.SolveResponse
var err error
if generateResult {
var mu sync.Mutex
var idx int
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
mu.Lock()
defer mu.Unlock()
if res == nil || driverIndex < idx {
idx, res = driverIndex, gotRes
}
})
} else {
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), progress)
}
if err != nil {
return nil, res, err
}
return resp[defaultTargetName], res, err
}
func wrapBuildError(err error, bake bool) error {
if err == nil {
return nil
}
st, ok := grpcerrors.AsGRPCStatus(err)
if ok {
if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.contexts") {
msg := "current frontend does not support --build-context."
if bake {
msg = "current frontend does not support defining additional contexts for targets."
}
msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
return &wrapped{err, msg}
}
}
return err
}
type wrapped struct {
err error
msg string
}
func (w *wrapped) Error() string {
return w.msg
}
func (w *wrapped) Unwrap() error {
return w.err
}
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
txn, release, err := storeutil.GetStore(dockerCli)
if err != nil {
return err
}
defer release()
return txn.UpdateLastActivity(ng)
}
func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.UlimitOpt {
if u == nil {
return nil
}
values := make(map[string]*container.Ulimit)
for k, v := range u.Values {
values[k] = &container.Ulimit{
Name: v.Name,
Hard: v.Hard,
Soft: v.Soft,
}
}
return dockeropts.NewUlimitOpt(&values)
}


@ -1,33 +0,0 @@
package control
import (
"context"
"io"
"github.com/docker/buildx/build"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
)
type BuildxController interface {
Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, inputs *build.Inputs, err error)
// Invoke starts an IO session into the specified process.
// If pid doesn't match to any running processes, it starts a new process with the specified config.
// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
Invoke(ctx context.Context, ref, pid string, options *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
Kill(ctx context.Context) error
Close() error
List(ctx context.Context) (refs []string, _ error)
Disconnect(ctx context.Context, ref string) error
ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error)
DisconnectProcess(ctx context.Context, ref, pid string) error
Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error)
}
type ControlOptions struct {
ServerConfig string
Root string
Detach bool
}


@ -1,36 +0,0 @@
package controller
import (
"context"
"fmt"
"github.com/docker/buildx/controller/control"
"github.com/docker/buildx/controller/local"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
)
func NewController(ctx context.Context, opts control.ControlOptions, dockerCli command.Cli, pw progress.Writer) (control.BuildxController, error) {
var name string
if opts.Detach {
name = "remote"
} else {
name = "local"
}
var c control.BuildxController
err := progress.Wrap(fmt.Sprintf("[internal] connecting to %s controller", name), pw.Write, func(l progress.SubLogger) (err error) {
if opts.Detach {
c, err = remote.NewRemoteBuildxController(ctx, dockerCli, opts, l)
} else {
c = local.NewLocalBuildxController(ctx, dockerCli, l)
}
return err
})
if err != nil {
return nil, errors.Wrap(err, "failed to start buildx controller")
}
return c, nil
}


@ -1,48 +0,0 @@
package errdefs
import (
"io"
"github.com/containerd/typeurl/v2"
"github.com/docker/buildx/util/desktop"
"github.com/moby/buildkit/util/grpcerrors"
)
func init() {
typeurl.Register((*Build)(nil), "github.com/docker/buildx", "errdefs.Build+json")
}
type BuildError struct {
*Build
error
}
func (e *BuildError) Unwrap() error {
return e.error
}
func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
return e.Build
}
func (e *BuildError) PrintBuildDetails(w io.Writer) error {
if e.Ref == "" {
return nil
}
ebr := &desktop.ErrorWithBuildRef{
Ref: e.Ref,
Err: e.error,
}
return ebr.Print(w)
}
func WrapBuild(err error, sessionID string, ref string) error {
if err == nil {
return nil
}
return &BuildError{Build: &Build{SessionID: sessionID, Ref: ref}, error: err}
}
func (b *Build) WrapError(err error) error {
return &BuildError{error: err, Build: b}
}


@ -1,157 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc v3.11.4
// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
package errdefs
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Build struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
SessionID string `protobuf:"bytes,1,opt,name=SessionID,proto3" json:"SessionID,omitempty"`
Ref string `protobuf:"bytes,2,opt,name=Ref,proto3" json:"Ref,omitempty"`
}
func (x *Build) Reset() {
*x = Build{}
if protoimpl.UnsafeEnabled {
mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Build) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Build) ProtoMessage() {}
func (x *Build) ProtoReflect() protoreflect.Message {
mi := &file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Build.ProtoReflect.Descriptor instead.
func (*Build) Descriptor() ([]byte, []int) {
return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP(), []int{0}
}
func (x *Build) GetSessionID() string {
if x != nil {
return x.SessionID
}
return ""
}
func (x *Build) GetRef() string {
if x != nil {
return x.Ref
}
return ""
}
var File_github_com_docker_buildx_controller_errdefs_errdefs_proto protoreflect.FileDescriptor
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = []byte{
0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63,
0x6b, 0x65, 0x72, 0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72,
0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x2f, 0x65, 0x72,
0x72, 0x64, 0x65, 0x66, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x15, 0x64, 0x6f, 0x63,
0x6b, 0x65, 0x72, 0x2e, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2e, 0x65, 0x72, 0x72, 0x64, 0x65,
0x66, 0x73, 0x22, 0x37, 0x0a, 0x05, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x12, 0x1c, 0x0a, 0x09, 0x53,
0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09,
0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12, 0x10, 0x0a, 0x03, 0x52, 0x65, 0x66,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x52, 0x65, 0x66, 0x42, 0x2d, 0x5a, 0x2b, 0x67,
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x6f, 0x63, 0x6b, 0x65, 0x72,
0x2f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x78, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c,
0x65, 0x72, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x66, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
var (
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce sync.Once
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc
)
func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescGZIP() []byte {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescOnce.Do(func() {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData)
})
return file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDescData
}
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = []interface{}{
(*Build)(nil), // 0: docker.buildx.errdefs.Build
}
var file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() }
func file_github_com_docker_buildx_controller_errdefs_errdefs_proto_init() {
if File_github_com_docker_buildx_controller_errdefs_errdefs_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Build); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc,
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes,
DependencyIndexes: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs,
MessageInfos: file_github_com_docker_buildx_controller_errdefs_errdefs_proto_msgTypes,
}.Build()
File_github_com_docker_buildx_controller_errdefs_errdefs_proto = out.File
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_rawDesc = nil
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_goTypes = nil
file_github_com_docker_buildx_controller_errdefs_errdefs_proto_depIdxs = nil
}


@ -1,10 +0,0 @@
syntax = "proto3";
package docker.buildx.errdefs;
option go_package = "github.com/docker/buildx/controller/errdefs";
message Build {
string SessionID = 1;
string Ref = 2;
}


@ -1,241 +0,0 @@
// Code generated by protoc-gen-go-vtproto. DO NOT EDIT.
// protoc-gen-go-vtproto version: v0.6.1-0.20240319094008-0393e58bdf10
// source: github.com/docker/buildx/controller/errdefs/errdefs.proto
package errdefs
import (
fmt "fmt"
protohelpers "github.com/planetscale/vtprotobuf/protohelpers"
proto "google.golang.org/protobuf/proto"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
io "io"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
func (m *Build) CloneVT() *Build {
if m == nil {
return (*Build)(nil)
}
r := new(Build)
r.SessionID = m.SessionID
r.Ref = m.Ref
if len(m.unknownFields) > 0 {
r.unknownFields = make([]byte, len(m.unknownFields))
copy(r.unknownFields, m.unknownFields)
}
return r
}
func (m *Build) CloneMessageVT() proto.Message {
return m.CloneVT()
}
func (this *Build) EqualVT(that *Build) bool {
if this == that {
return true
} else if this == nil || that == nil {
return false
}
if this.SessionID != that.SessionID {
return false
}
if this.Ref != that.Ref {
return false
}
return string(this.unknownFields) == string(that.unknownFields)
}
func (this *Build) EqualMessageVT(thatMsg proto.Message) bool {
that, ok := thatMsg.(*Build)
if !ok {
return false
}
return this.EqualVT(that)
}
func (m *Build) MarshalVT() (dAtA []byte, err error) {
if m == nil {
return nil, nil
}
size := m.SizeVT()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBufferVT(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Build) MarshalToVT(dAtA []byte) (int, error) {
size := m.SizeVT()
return m.MarshalToSizedBufferVT(dAtA[:size])
}
func (m *Build) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
if m == nil {
return 0, nil
}
i := len(dAtA)
_ = i
var l int
_ = l
if m.unknownFields != nil {
i -= len(m.unknownFields)
copy(dAtA[i:], m.unknownFields)
}
if len(m.Ref) > 0 {
i -= len(m.Ref)
copy(dAtA[i:], m.Ref)
i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.Ref)))
i--
dAtA[i] = 0x12
}
if len(m.SessionID) > 0 {
i -= len(m.SessionID)
copy(dAtA[i:], m.SessionID)
i = protohelpers.EncodeVarint(dAtA, i, uint64(len(m.SessionID)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *Build) SizeVT() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.SessionID)
if l > 0 {
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
l = len(m.Ref)
if l > 0 {
n += 1 + l + protohelpers.SizeOfVarint(uint64(l))
}
n += len(m.unknownFields)
return n
}
func (m *Build) UnmarshalVT(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: Build: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: Build: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field SessionID", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.SessionID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Ref", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return protohelpers.ErrIntOverflow
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return protohelpers.ErrInvalidLength
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return protohelpers.ErrInvalidLength
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Ref = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := protohelpers.Skip(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return protohelpers.ErrInvalidLength
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.unknownFields = append(m.unknownFields, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
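The hand-rolled `MarshalToSizedBufferVT`/`UnmarshalVT` pair above reads and writes standard proto3 wire format: each string field is a single tag byte (`fieldNum<<3 | 2` for length-delimited), a varint length, then the raw bytes, which is why the generated code emits `0xa` for `SessionID` (field 1) and `0x12` for `Ref` (field 2). As an independent sanity check (not part of the deleted file), the same encoding can be sketched for short strings:

```go
package main

import "fmt"

// encodeStringField emits one proto3 length-delimited field: a tag byte
// (fieldNum<<3 | wiretype 2), a single-byte varint length, then the payload.
// Lengths under 128 fit in one varint byte, which is all this sketch handles.
func encodeStringField(fieldNum int, s string) []byte {
	if len(s) >= 0x80 {
		panic("sketch only handles strings shorter than 128 bytes")
	}
	out := []byte{byte(fieldNum<<3 | 2), byte(len(s))}
	return append(out, s...)
}

func main() {
	// Build{SessionID: "sess", Ref: "ref"} on the wire: tag 0x0a then "sess",
	// tag 0x12 then "ref" — matching the constants in the generated marshaler.
	msg := append(encodeStringField(1, "sess"), encodeStringField(2, "ref")...)
	fmt.Printf("% x\n", msg) // 0a 04 73 65 73 73 12 03 72 65 66
}
```

Note the generated marshaler writes fields in reverse into the tail of a pre-sized buffer, but the resulting byte order is the same ascending-field layout shown here.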

@@ -1,152 +0,0 @@
package local
import (
"context"
"io"
"sync/atomic"
"github.com/docker/buildx/build"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
controllererrors "github.com/docker/buildx/controller/errdefs"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
)
func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
return &localController{
dockerCli: dockerCli,
sessionID: "local",
processes: processes.NewManager(),
}
}
type buildConfig struct {
// TODO: these two structs should be merged
// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113279719
resultCtx *build.ResultHandle
buildOptions *controllerapi.BuildOptions
}
type localController struct {
dockerCli command.Cli
sessionID string
buildConfig buildConfig
processes *processes.Manager
buildOnGoing atomic.Bool
}
func (b *localController) Build(ctx context.Context, options *controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, *build.Inputs, error) {
if !b.buildOnGoing.CompareAndSwap(false, true) {
return "", nil, nil, errors.New("build ongoing")
}
defer b.buildOnGoing.Store(false)
resp, res, dockerfileMappings, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
// NOTE: RunBuild can return *build.ResultHandle even on error.
if res != nil {
b.buildConfig = buildConfig{
resultCtx: res,
buildOptions: options,
}
if buildErr != nil {
var ref string
var ebr *desktop.ErrorWithBuildRef
if errors.As(buildErr, &ebr) {
ref = ebr.Ref
}
buildErr = controllererrors.WrapBuild(buildErr, b.sessionID, ref)
}
}
if buildErr != nil {
return "", nil, nil, buildErr
}
return b.sessionID, resp, dockerfileMappings, nil
}
func (b *localController) ListProcesses(ctx context.Context, sessionID string) (infos []*controllerapi.ProcessInfo, retErr error) {
if sessionID != b.sessionID {
return nil, errors.Errorf("unknown session ID %q", sessionID)
}
return b.processes.ListProcesses(), nil
}
func (b *localController) DisconnectProcess(ctx context.Context, sessionID, pid string) error {
if sessionID != b.sessionID {
return errors.Errorf("unknown session ID %q", sessionID)
}
return b.processes.DeleteProcess(pid)
}
func (b *localController) cancelRunningProcesses() {
b.processes.CancelRunningProcesses()
}
func (b *localController) Invoke(ctx context.Context, sessionID string, pid string, cfg *controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
if sessionID != b.sessionID {
return errors.Errorf("unknown session ID %q", sessionID)
}
proc, ok := b.processes.Get(pid)
if !ok {
// Start a new process.
if b.buildConfig.resultCtx == nil {
return errors.New("no build result is registered")
}
var err error
proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, cfg)
if err != nil {
return err
}
}
// Attach containerIn to this process
ioCancelledCh := make(chan struct{})
proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func(error) { close(ioCancelledCh) })
select {
case <-ioCancelledCh:
return errors.Errorf("io cancelled")
case err := <-proc.Done():
return err
case <-ctx.Done():
return context.Cause(ctx)
}
}
func (b *localController) Kill(context.Context) error {
b.Close()
return nil
}
func (b *localController) Close() error {
b.cancelRunningProcesses()
if b.buildConfig.resultCtx != nil {
b.buildConfig.resultCtx.Done()
}
// TODO: cancel ongoing builds?
return nil
}
func (b *localController) List(ctx context.Context) (res []string, _ error) {
return []string{b.sessionID}, nil
}
func (b *localController) Disconnect(ctx context.Context, key string) error {
b.Close()
return nil
}
func (b *localController) Inspect(ctx context.Context, sessionID string) (*controllerapi.InspectResponse, error) {
if sessionID != b.sessionID {
return nil, errors.Errorf("unknown session ID %q", sessionID)
}
return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
}
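The `Build` method above guards against concurrent builds with `buildOnGoing.CompareAndSwap(false, true)` plus a deferred `Store(false)`: the first caller flips the flag atomically, later callers fail fast with "build ongoing". A minimal sketch of that pattern in isolation (the `guard` type here is illustrative, not from the deleted file):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// guard mirrors the localController.buildOnGoing pattern: CompareAndSwap
// admits exactly one caller into the critical section; others fail fast
// instead of blocking.
type guard struct{ busy atomic.Bool }

func (g *guard) run(f func()) error {
	if !g.busy.CompareAndSwap(false, true) {
		return fmt.Errorf("build ongoing")
	}
	defer g.busy.Store(false) // release on any exit path, like the defer in Build
	f()
	return nil
}

func main() {
	var g guard
	fmt.Println(g.run(func() {}) == nil) // true: first entry succeeds
	_ = g.run(func() {
		// A re-entrant call while busy is rejected without running.
		fmt.Println(g.run(func() {}) != nil) // true
	})
}
```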

@@ -1,20 +0,0 @@
package pb
func CreateAttestations(attests []*Attest) map[string]*string {
result := map[string]*string{}
for _, attest := range attests {
// ignore duplicates
if _, ok := result[attest.Type]; ok {
continue
}
if attest.Disabled {
result[attest.Type] = nil
continue
}
attrs := attest.Attrs
result[attest.Type] = &attrs
}
return result
}
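`CreateAttestations` above has two notable behaviors: the first entry for a given type wins (duplicates are skipped), and a disabled attestation maps to a `nil` pointer rather than being omitted, so callers can distinguish "explicitly disabled" from "not set". A self-contained copy of that logic (with a trimmed local `Attest` struct standing in for the `pb` type) behaves like this:

```go
package main

import "fmt"

// Attest mirrors the pb.Attest fields used by CreateAttestations.
type Attest struct {
	Type     string
	Disabled bool
	Attrs    string
}

// createAttestations is a local copy of the deleted helper's logic.
func createAttestations(attests []*Attest) map[string]*string {
	result := map[string]*string{}
	for _, attest := range attests {
		if _, ok := result[attest.Type]; ok {
			continue // first entry for a type wins; duplicates ignored
		}
		if attest.Disabled {
			result[attest.Type] = nil // nil marks an explicitly disabled attestation
			continue
		}
		attrs := attest.Attrs
		result[attest.Type] = &attrs
	}
	return result
}

func main() {
	out := createAttestations([]*Attest{
		{Type: "sbom", Attrs: "generator=scanner"},
		{Type: "sbom", Attrs: "ignored duplicate"},
		{Type: "provenance", Disabled: true},
	})
	fmt.Println(len(out), *out["sbom"], out["provenance"] == nil)
}
```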

@@ -1,23 +0,0 @@
package pb
import (
"maps"
"github.com/moby/buildkit/client"
)
func CreateCaches(entries []*CacheOptionsEntry) []client.CacheOptionsEntry {
var outs []client.CacheOptionsEntry
if len(entries) == 0 {
return nil
}
for _, entry := range entries {
out := client.CacheOptionsEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
maps.Copy(out.Attrs, entry.Attrs)
outs = append(outs, out)
}
return outs
}

File diff suppressed because it is too large

@@ -1,250 +0,0 @@
syntax = "proto3";
package buildx.controller.v1;
import "github.com/moby/buildkit/api/services/control/control.proto";
import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";
option go_package = "github.com/docker/buildx/controller/pb";
service Controller {
rpc Build(BuildRequest) returns (BuildResponse);
rpc Inspect(InspectRequest) returns (InspectResponse);
rpc Status(StatusRequest) returns (stream StatusResponse);
rpc Input(stream InputMessage) returns (InputResponse);
rpc Invoke(stream Message) returns (stream Message);
rpc List(ListRequest) returns (ListResponse);
rpc Disconnect(DisconnectRequest) returns (DisconnectResponse);
rpc Info(InfoRequest) returns (InfoResponse);
rpc ListProcesses(ListProcessesRequest) returns (ListProcessesResponse);
rpc DisconnectProcess(DisconnectProcessRequest) returns (DisconnectProcessResponse);
}
message ListProcessesRequest {
string SessionID = 1;
}
message ListProcessesResponse {
repeated ProcessInfo Infos = 1;
}
message ProcessInfo {
string ProcessID = 1;
InvokeConfig InvokeConfig = 2;
}
message DisconnectProcessRequest {
string SessionID = 1;
string ProcessID = 2;
}
message DisconnectProcessResponse {
}
message BuildRequest {
string SessionID = 1;
BuildOptions Options = 2;
}
message BuildOptions {
string ContextPath = 1;
string DockerfileName = 2;
CallFunc CallFunc = 3;
map<string, string> NamedContexts = 4;
repeated string Allow = 5;
repeated Attest Attests = 6;
map<string, string> BuildArgs = 7;
repeated CacheOptionsEntry CacheFrom = 8;
repeated CacheOptionsEntry CacheTo = 9;
string CgroupParent = 10;
repeated ExportEntry Exports = 11;
repeated string ExtraHosts = 12;
map<string, string> Labels = 13;
string NetworkMode = 14;
repeated string NoCacheFilter = 15;
repeated string Platforms = 16;
repeated Secret Secrets = 17;
int64 ShmSize = 18;
repeated SSH SSH = 19;
repeated string Tags = 20;
string Target = 21;
UlimitOpt Ulimits = 22;
string Builder = 23;
bool NoCache = 24;
bool Pull = 25;
bool ExportPush = 26;
bool ExportLoad = 27;
moby.buildkit.v1.sourcepolicy.Policy SourcePolicy = 28;
string Ref = 29;
string GroupRef = 30;
repeated string Annotations = 31;
string ProvenanceResponseMode = 32;
}
message ExportEntry {
string Type = 1;
map<string, string> Attrs = 2;
string Destination = 3;
}
message CacheOptionsEntry {
string Type = 1;
map<string, string> Attrs = 2;
}
message Attest {
string Type = 1;
bool Disabled = 2;
string Attrs = 3;
}
message SSH {
string ID = 1;
repeated string Paths = 2;
}
message Secret {
string ID = 1;
string FilePath = 2;
string Env = 3;
}
message CallFunc {
string Name = 1;
string Format = 2;
bool IgnoreStatus = 3;
}
message InspectRequest {
string SessionID = 1;
}
message InspectResponse {
BuildOptions Options = 1;
}
message UlimitOpt {
map<string, Ulimit> values = 1;
}
message Ulimit {
string Name = 1;
int64 Hard = 2;
int64 Soft = 3;
}
message BuildResponse {
map<string, string> ExporterResponse = 1;
}
message DisconnectRequest {
string SessionID = 1;
}
message DisconnectResponse {}
message ListRequest {
string SessionID = 1;
}
message ListResponse {
repeated string keys = 1;
}
message InputMessage {
oneof Input {
InputInitMessage Init = 1;
DataMessage Data = 2;
}
}
message InputInitMessage {
string SessionID = 1;
}
message DataMessage {
bool EOF = 1; // true if eof was reached
bytes Data = 2; // should be chunked smaller than 4MB:
// https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}
message InputResponse {}
message Message {
oneof Input {
InitMessage Init = 1;
// FdMessage used from client to server for input (stdin) and
// from server to client for output (stdout, stderr)
FdMessage File = 2;
// ResizeMessage used from client to server for terminal resize events
ResizeMessage Resize = 3;
// SignalMessage is used from client to server to send signal events
SignalMessage Signal = 4;
}
}
message InitMessage {
string SessionID = 1;
// If ProcessID already exists in the server, it tries to connect to it
// instead of invoking the new one. In this case, InvokeConfig will be ignored.
string ProcessID = 2;
InvokeConfig InvokeConfig = 3;
}
message InvokeConfig {
repeated string Entrypoint = 1;
repeated string Cmd = 2;
bool NoCmd = 11; // Do not set cmd but use the image's default
repeated string Env = 3;
string User = 4;
bool NoUser = 5; // Do not set user but use the image's default
string Cwd = 6;
bool NoCwd = 7; // Do not set cwd but use the image's default
bool Tty = 8;
bool Rollback = 9; // Kill all process in the container and recreate it.
bool Initial = 10; // Run container from the initial state of that stage (supported only on the failed step)
}
message FdMessage {
uint32 Fd = 1; // what fd the data was from
bool EOF = 2; // true if eof was reached
bytes Data = 3; // should be chunked smaller than 4MB:
// https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}
message ResizeMessage {
uint32 Rows = 1;
uint32 Cols = 2;
}
message SignalMessage {
// we only send name (ie HUP, INT) because the int values
// are platform dependent.
string Name = 1;
}
message StatusRequest {
string SessionID = 1;
}
message StatusResponse {
repeated moby.buildkit.v1.Vertex vertexes = 1;
repeated moby.buildkit.v1.VertexStatus statuses = 2;
repeated moby.buildkit.v1.VertexLog logs = 3;
repeated moby.buildkit.v1.VertexWarning warnings = 4;
}
message InfoRequest {}
message InfoResponse {
BuildxVersion buildxVersion = 1;
}
message BuildxVersion {
string package = 1;
string version = 2;
string revision = 3;
}
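Both `DataMessage` and `FdMessage` in the proto above note that payloads "should be chunked smaller than 4MB" to stay under gRPC's default receive limit. A small sketch of how a sender might split a buffer before streaming it (the `chunk` helper is illustrative; the actual client code is not shown in this diff):

```go
package main

import "fmt"

// chunk splits data into pieces no larger than max, mirroring the
// DataMessage contract that each streamed payload stays under the
// 4MB gRPC MaxRecvMsgSize default. An empty input yields one empty piece
// (the final EOF message may carry no data).
func chunk(data []byte, max int) [][]byte {
	var out [][]byte
	for len(data) > max {
		out = append(out, data[:max])
		data = data[max:]
	}
	return append(out, data)
}

func main() {
	pieces := chunk(make([]byte, 10), 4)
	fmt.Println(len(pieces), len(pieces[2])) // 3 pieces: 4 + 4 + 2 bytes
}
```

In the real stream, each piece would be wrapped in a `DataMessage`, with `EOF` set on the last one.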

@@ -1,452 +0,0 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.11.4
// source: github.com/docker/buildx/controller/pb/controller.proto
package pb
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
Controller_Build_FullMethodName = "/buildx.controller.v1.Controller/Build"
Controller_Inspect_FullMethodName = "/buildx.controller.v1.Controller/Inspect"
Controller_Status_FullMethodName = "/buildx.controller.v1.Controller/Status"
Controller_Input_FullMethodName = "/buildx.controller.v1.Controller/Input"
Controller_Invoke_FullMethodName = "/buildx.controller.v1.Controller/Invoke"
Controller_List_FullMethodName = "/buildx.controller.v1.Controller/List"
Controller_Disconnect_FullMethodName = "/buildx.controller.v1.Controller/Disconnect"
Controller_Info_FullMethodName = "/buildx.controller.v1.Controller/Info"
Controller_ListProcesses_FullMethodName = "/buildx.controller.v1.Controller/ListProcesses"
Controller_DisconnectProcess_FullMethodName = "/buildx.controller.v1.Controller/DisconnectProcess"
)
// ControllerClient is the client API for Controller service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type ControllerClient interface {
Build(ctx context.Context, in *BuildRequest, opts ...grpc.CallOption) (*BuildResponse, error)
Inspect(ctx context.Context, in *InspectRequest, opts ...grpc.CallOption) (*InspectResponse, error)
Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[StatusResponse], error)
Input(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[InputMessage, InputResponse], error)
Invoke(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[Message, Message], error)
List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error)
Disconnect(ctx context.Context, in *DisconnectRequest, opts ...grpc.CallOption) (*DisconnectResponse, error)
Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error)
ListProcesses(ctx context.Context, in *ListProcessesRequest, opts ...grpc.CallOption) (*ListProcessesResponse, error)
DisconnectProcess(ctx context.Context, in *DisconnectProcessRequest, opts ...grpc.CallOption) (*DisconnectProcessResponse, error)
}
type controllerClient struct {
cc grpc.ClientConnInterface
}
func NewControllerClient(cc grpc.ClientConnInterface) ControllerClient {
return &controllerClient{cc}
}
func (c *controllerClient) Build(ctx context.Context, in *BuildRequest, opts ...grpc.CallOption) (*BuildResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BuildResponse)
err := c.cc.Invoke(ctx, Controller_Build_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Inspect(ctx context.Context, in *InspectRequest, opts ...grpc.CallOption) (*InspectResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(InspectResponse)
err := c.cc.Invoke(ctx, Controller_Inspect_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[StatusResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[0], Controller_Status_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[StatusRequest, StatusResponse]{ClientStream: stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_StatusClient = grpc.ServerStreamingClient[StatusResponse]
func (c *controllerClient) Input(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[InputMessage, InputResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[1], Controller_Input_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[InputMessage, InputResponse]{ClientStream: stream}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InputClient = grpc.ClientStreamingClient[InputMessage, InputResponse]
func (c *controllerClient) Invoke(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[Message, Message], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Controller_ServiceDesc.Streams[2], Controller_Invoke_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[Message, Message]{ClientStream: stream}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InvokeClient = grpc.BidiStreamingClient[Message, Message]
func (c *controllerClient) List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListResponse)
err := c.cc.Invoke(ctx, Controller_List_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Disconnect(ctx context.Context, in *DisconnectRequest, opts ...grpc.CallOption) (*DisconnectResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DisconnectResponse)
err := c.cc.Invoke(ctx, Controller_Disconnect_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(InfoResponse)
err := c.cc.Invoke(ctx, Controller_Info_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) ListProcesses(ctx context.Context, in *ListProcessesRequest, opts ...grpc.CallOption) (*ListProcessesResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListProcessesResponse)
err := c.cc.Invoke(ctx, Controller_ListProcesses_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *controllerClient) DisconnectProcess(ctx context.Context, in *DisconnectProcessRequest, opts ...grpc.CallOption) (*DisconnectProcessResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DisconnectProcessResponse)
err := c.cc.Invoke(ctx, Controller_DisconnectProcess_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// ControllerServer is the server API for Controller service.
// All implementations should embed UnimplementedControllerServer
// for forward compatibility.
type ControllerServer interface {
Build(context.Context, *BuildRequest) (*BuildResponse, error)
Inspect(context.Context, *InspectRequest) (*InspectResponse, error)
Status(*StatusRequest, grpc.ServerStreamingServer[StatusResponse]) error
Input(grpc.ClientStreamingServer[InputMessage, InputResponse]) error
Invoke(grpc.BidiStreamingServer[Message, Message]) error
List(context.Context, *ListRequest) (*ListResponse, error)
Disconnect(context.Context, *DisconnectRequest) (*DisconnectResponse, error)
Info(context.Context, *InfoRequest) (*InfoResponse, error)
ListProcesses(context.Context, *ListProcessesRequest) (*ListProcessesResponse, error)
DisconnectProcess(context.Context, *DisconnectProcessRequest) (*DisconnectProcessResponse, error)
}
// UnimplementedControllerServer should be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedControllerServer struct{}
func (UnimplementedControllerServer) Build(context.Context, *BuildRequest) (*BuildResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Build not implemented")
}
func (UnimplementedControllerServer) Inspect(context.Context, *InspectRequest) (*InspectResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Inspect not implemented")
}
func (UnimplementedControllerServer) Status(*StatusRequest, grpc.ServerStreamingServer[StatusResponse]) error {
return status.Errorf(codes.Unimplemented, "method Status not implemented")
}
func (UnimplementedControllerServer) Input(grpc.ClientStreamingServer[InputMessage, InputResponse]) error {
return status.Errorf(codes.Unimplemented, "method Input not implemented")
}
func (UnimplementedControllerServer) Invoke(grpc.BidiStreamingServer[Message, Message]) error {
return status.Errorf(codes.Unimplemented, "method Invoke not implemented")
}
func (UnimplementedControllerServer) List(context.Context, *ListRequest) (*ListResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method List not implemented")
}
func (UnimplementedControllerServer) Disconnect(context.Context, *DisconnectRequest) (*DisconnectResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Disconnect not implemented")
}
func (UnimplementedControllerServer) Info(context.Context, *InfoRequest) (*InfoResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Info not implemented")
}
func (UnimplementedControllerServer) ListProcesses(context.Context, *ListProcessesRequest) (*ListProcessesResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListProcesses not implemented")
}
func (UnimplementedControllerServer) DisconnectProcess(context.Context, *DisconnectProcessRequest) (*DisconnectProcessResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DisconnectProcess not implemented")
}
func (UnimplementedControllerServer) testEmbeddedByValue() {}
// UnsafeControllerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to ControllerServer will
// result in compilation errors.
type UnsafeControllerServer interface {
mustEmbedUnimplementedControllerServer()
}
func RegisterControllerServer(s grpc.ServiceRegistrar, srv ControllerServer) {
// If the following call panics, it indicates UnimplementedControllerServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Controller_ServiceDesc, srv)
}
func _Controller_Build_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(BuildRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Build(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Build_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Build(ctx, req.(*BuildRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Inspect_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(InspectRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Inspect(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Inspect_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Inspect(ctx, req.(*InspectRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Status_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(StatusRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(ControllerServer).Status(m, &grpc.GenericServerStream[StatusRequest, StatusResponse]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_StatusServer = grpc.ServerStreamingServer[StatusResponse]
func _Controller_Input_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(ControllerServer).Input(&grpc.GenericServerStream[InputMessage, InputResponse]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InputServer = grpc.ClientStreamingServer[InputMessage, InputResponse]
func _Controller_Invoke_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(ControllerServer).Invoke(&grpc.GenericServerStream[Message, Message]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Controller_InvokeServer = grpc.BidiStreamingServer[Message, Message]
func _Controller_List_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).List(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_List_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).List(ctx, req.(*ListRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Disconnect_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DisconnectRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Disconnect(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Disconnect_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Disconnect(ctx, req.(*DisconnectRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_Info_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(InfoRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).Info(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_Info_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).Info(ctx, req.(*InfoRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_ListProcesses_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListProcessesRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).ListProcesses(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_ListProcesses_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).ListProcesses(ctx, req.(*ListProcessesRequest))
}
return interceptor(ctx, in, info, handler)
}
func _Controller_DisconnectProcess_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DisconnectProcessRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(ControllerServer).DisconnectProcess(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Controller_DisconnectProcess_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(ControllerServer).DisconnectProcess(ctx, req.(*DisconnectProcessRequest))
}
return interceptor(ctx, in, info, handler)
}
// Controller_ServiceDesc is the grpc.ServiceDesc for Controller service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var Controller_ServiceDesc = grpc.ServiceDesc{
ServiceName: "buildx.controller.v1.Controller",
HandlerType: (*ControllerServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Build",
Handler: _Controller_Build_Handler,
},
{
MethodName: "Inspect",
Handler: _Controller_Inspect_Handler,
},
{
MethodName: "List",
Handler: _Controller_List_Handler,
},
{
MethodName: "Disconnect",
Handler: _Controller_Disconnect_Handler,
},
{
MethodName: "Info",
Handler: _Controller_Info_Handler,
},
{
MethodName: "ListProcesses",
Handler: _Controller_ListProcesses_Handler,
},
{
MethodName: "DisconnectProcess",
Handler: _Controller_DisconnectProcess_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "Status",
Handler: _Controller_Status_Handler,
ServerStreams: true,
},
{
StreamName: "Input",
Handler: _Controller_Input_Handler,
ClientStreams: true,
},
{
StreamName: "Invoke",
Handler: _Controller_Invoke_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "github.com/docker/buildx/controller/pb/controller.proto",
}

File diff suppressed because it is too large

@ -1,108 +0,0 @@
package pb
import (
"io"
"maps"
"os"
"strconv"
"github.com/containerd/console"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
)
func CreateExports(entries []*ExportEntry) ([]client.ExportEntry, []string, error) {
var outs []client.ExportEntry
var localPaths []string
if len(entries) == 0 {
return nil, nil, nil
}
var stdoutUsed bool
for _, entry := range entries {
if entry.Type == "" {
return nil, nil, errors.Errorf("type is required for output")
}
out := client.ExportEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
maps.Copy(out.Attrs, entry.Attrs)
supportFile := false
supportDir := false
switch out.Type {
case client.ExporterLocal:
supportDir = true
case client.ExporterTar:
supportFile = true
case client.ExporterOCI, client.ExporterDocker:
tar, err := strconv.ParseBool(out.Attrs["tar"])
if err != nil {
tar = true
}
supportFile = tar
supportDir = !tar
case "registry":
out.Type = client.ExporterImage
out.Attrs["push"] = "true"
}
if supportDir {
if entry.Destination == "" {
return nil, nil, errors.Errorf("dest is required for %s exporter", out.Type)
}
if entry.Destination == "-" {
return nil, nil, errors.Errorf("dest cannot be stdout for %s exporter", out.Type)
}
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
return nil, nil, errors.Wrapf(err, "invalid destination directory: %s", entry.Destination)
}
if err == nil && !fi.IsDir() {
return nil, nil, errors.Errorf("destination directory %s is a file", entry.Destination)
}
out.OutputDir = entry.Destination
localPaths = append(localPaths, entry.Destination)
}
if supportFile {
if entry.Destination == "" && out.Type != client.ExporterDocker {
entry.Destination = "-"
}
if entry.Destination == "-" {
if stdoutUsed {
return nil, nil, errors.Errorf("multiple outputs configured to write to stdout")
}
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
return nil, nil, errors.Errorf("dest file is required for %s exporter. refusing to write to console", out.Type)
}
out.Output = wrapWriteCloser(os.Stdout)
stdoutUsed = true
} else if entry.Destination != "" {
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
return nil, nil, errors.Wrapf(err, "invalid destination file: %s", entry.Destination)
}
if err == nil && fi.IsDir() {
return nil, nil, errors.Errorf("destination file %s is a directory", entry.Destination)
}
f, err := os.Create(entry.Destination)
if err != nil {
return nil, nil, errors.Wrapf(err, "failed to open %s", entry.Destination)
}
out.Output = wrapWriteCloser(f)
localPaths = append(localPaths, entry.Destination)
}
}
outs = append(outs, out)
}
return outs, localPaths, nil
}
func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
return func(map[string]string) (io.WriteCloser, error) {
return wc, nil
}
}


@ -1,180 +0,0 @@
package pb
import (
"path/filepath"
"strings"
"github.com/moby/buildkit/util/gitutil"
)
// ResolveOptionPaths resolves all paths contained in BuildOptions
// and replaces them to absolute paths.
func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
localContext := false
if options.ContextPath != "" && options.ContextPath != "-" {
if !isRemoteURL(options.ContextPath) {
localContext = true
options.ContextPath, err = filepath.Abs(options.ContextPath)
if err != nil {
return nil, err
}
}
}
if options.DockerfileName != "" && options.DockerfileName != "-" {
if localContext && !isHTTPURL(options.DockerfileName) {
options.DockerfileName, err = filepath.Abs(options.DockerfileName)
if err != nil {
return nil, err
}
}
}
var contexts map[string]string
for k, v := range options.NamedContexts {
if isRemoteURL(v) || strings.HasPrefix(v, "docker-image://") {
// remote URL or docker-image prefix; leave as-is
} else if strings.HasPrefix(v, "oci-layout://") {
// OCI layout prefix; this is a local path
p := strings.TrimPrefix(v, "oci-layout://")
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
v = "oci-layout://" + p
} else {
// no prefix, assume local path
v, err = filepath.Abs(v)
if err != nil {
return nil, err
}
}
if contexts == nil {
contexts = make(map[string]string)
}
contexts[k] = v
}
options.NamedContexts = contexts
var cacheFrom []*CacheOptionsEntry
for _, co := range options.CacheFrom {
switch co.Type {
case "local":
var attrs map[string]string
for k, v := range co.Attrs {
if attrs == nil {
attrs = make(map[string]string)
}
switch k {
case "src":
p := v
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
attrs[k] = p
default:
attrs[k] = v
}
}
co.Attrs = attrs
cacheFrom = append(cacheFrom, co)
default:
cacheFrom = append(cacheFrom, co)
}
}
options.CacheFrom = cacheFrom
var cacheTo []*CacheOptionsEntry
for _, co := range options.CacheTo {
switch co.Type {
case "local":
var attrs map[string]string
for k, v := range co.Attrs {
if attrs == nil {
attrs = make(map[string]string)
}
switch k {
case "dest":
p := v
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
attrs[k] = p
default:
attrs[k] = v
}
}
co.Attrs = attrs
cacheTo = append(cacheTo, co)
default:
cacheTo = append(cacheTo, co)
}
}
options.CacheTo = cacheTo
var exports []*ExportEntry
for _, e := range options.Exports {
if e.Destination != "" && e.Destination != "-" {
e.Destination, err = filepath.Abs(e.Destination)
if err != nil {
return nil, err
}
}
exports = append(exports, e)
}
options.Exports = exports
var secrets []*Secret
for _, s := range options.Secrets {
if s.FilePath != "" {
s.FilePath, err = filepath.Abs(s.FilePath)
if err != nil {
return nil, err
}
}
secrets = append(secrets, s)
}
options.Secrets = secrets
var ssh []*SSH
for _, s := range options.SSH {
var ps []string
for _, pt := range s.Paths {
p := pt
if p != "" {
p, err = filepath.Abs(p)
if err != nil {
return nil, err
}
}
ps = append(ps, p)
}
s.Paths = ps
ssh = append(ssh, s)
}
options.SSH = ssh
return options, nil
}
// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
// has a http:// or https:// scheme. No validation is performed to verify if the
// URL is well-formed.
func isHTTPURL(str string) bool {
return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
}
func isRemoteURL(c string) bool {
if isHTTPURL(c) {
return true
}
if _, err := gitutil.ParseGitRef(c); err == nil {
return true
}
return false
}


@ -1,252 +0,0 @@
package pb
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/proto"
)
func TestResolvePaths(t *testing.T) {
tmpwd, err := os.MkdirTemp("", "testresolvepaths")
require.NoError(t, err)
defer os.RemoveAll(tmpwd)
require.NoError(t, os.Chdir(tmpwd))
tests := []struct {
name string
options *BuildOptions
want *BuildOptions
}{
{
name: "contextpath",
options: &BuildOptions{ContextPath: "test"},
want: &BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
},
{
name: "contextpath-cwd",
options: &BuildOptions{ContextPath: "."},
want: &BuildOptions{ContextPath: tmpwd},
},
{
name: "contextpath-dash",
options: &BuildOptions{ContextPath: "-"},
want: &BuildOptions{ContextPath: "-"},
},
{
name: "contextpath-ssh",
options: &BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
want: &BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
},
{
name: "dockerfilename",
options: &BuildOptions{DockerfileName: "test", ContextPath: "."},
want: &BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
},
{
name: "dockerfilename-dash",
options: &BuildOptions{DockerfileName: "-", ContextPath: "."},
want: &BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
},
{
name: "dockerfilename-remote",
options: &BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
want: &BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
},
{
name: "contexts",
options: &BuildOptions{NamedContexts: map[string]string{
"a": "test1", "b": "test2",
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git",
}},
want: &BuildOptions{NamedContexts: map[string]string{
"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git",
}},
},
{
name: "cache-from",
options: &BuildOptions{
CacheFrom: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"src": "test"},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
want: &BuildOptions{
CacheFrom: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"src": filepath.Join(tmpwd, "test")},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
},
{
name: "cache-to",
options: &BuildOptions{
CacheTo: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"dest": "test"},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
want: &BuildOptions{
CacheTo: []*CacheOptionsEntry{
{
Type: "local",
Attrs: map[string]string{"dest": filepath.Join(tmpwd, "test")},
},
{
Type: "registry",
Attrs: map[string]string{"ref": "user/app"},
},
},
},
},
{
name: "exports",
options: &BuildOptions{
Exports: []*ExportEntry{
{
Type: "local",
Destination: "-",
},
{
Type: "local",
Destination: "test1",
},
{
Type: "tar",
Destination: "test3",
},
{
Type: "oci",
Destination: "-",
},
{
Type: "docker",
Destination: "test4",
},
{
Type: "image",
Attrs: map[string]string{"push": "true"},
},
},
},
want: &BuildOptions{
Exports: []*ExportEntry{
{
Type: "local",
Destination: "-",
},
{
Type: "local",
Destination: filepath.Join(tmpwd, "test1"),
},
{
Type: "tar",
Destination: filepath.Join(tmpwd, "test3"),
},
{
Type: "oci",
Destination: "-",
},
{
Type: "docker",
Destination: filepath.Join(tmpwd, "test4"),
},
{
Type: "image",
Attrs: map[string]string{"push": "true"},
},
},
},
},
{
name: "secrets",
options: &BuildOptions{
Secrets: []*Secret{
{
FilePath: "test1",
},
{
ID: "val",
Env: "a",
},
{
ID: "test",
FilePath: "test3",
},
},
},
want: &BuildOptions{
Secrets: []*Secret{
{
FilePath: filepath.Join(tmpwd, "test1"),
},
{
ID: "val",
Env: "a",
},
{
ID: "test",
FilePath: filepath.Join(tmpwd, "test3"),
},
},
},
},
{
name: "ssh",
options: &BuildOptions{
SSH: []*SSH{
{
ID: "default",
Paths: []string{"test1", "test2"},
},
{
ID: "a",
Paths: []string{"test3"},
},
},
},
want: &BuildOptions{
SSH: []*SSH{
{
ID: "default",
Paths: []string{filepath.Join(tmpwd, "test1"), filepath.Join(tmpwd, "test2")},
},
{
ID: "a",
Paths: []string{filepath.Join(tmpwd, "test3")},
},
},
},
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
got, err := ResolveOptionPaths(tt.options)
require.NoError(t, err)
if !proto.Equal(tt.want, got) {
t.Fatalf("expected %#v, got %#v", tt.want, got)
}
})
}
}


@ -1,162 +0,0 @@
package pb
import (
"time"
"github.com/docker/buildx/util/progress"
control "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/opencontainers/go-digest"
"google.golang.org/protobuf/types/known/timestamppb"
)
type writer struct {
ch chan<- *StatusResponse
}
func NewProgressWriter(ch chan<- *StatusResponse) progress.Writer {
return &writer{ch: ch}
}
func (w *writer) Write(status *client.SolveStatus) {
w.ch <- ToControlStatus(status)
}
func (w *writer) WriteBuildRef(target string, ref string) {}
func (w *writer) ValidateLogSource(digest.Digest, any) bool {
return true
}
func (w *writer) ClearLogSource(any) {}
func ToControlStatus(s *client.SolveStatus) *StatusResponse {
resp := StatusResponse{}
for _, v := range s.Vertexes {
resp.Vertexes = append(resp.Vertexes, &control.Vertex{
Digest: string(v.Digest),
Inputs: digestSliceToPB(v.Inputs),
Name: v.Name,
Started: timestampToPB(v.Started),
Completed: timestampToPB(v.Completed),
Error: v.Error,
Cached: v.Cached,
ProgressGroup: v.ProgressGroup,
})
}
for _, v := range s.Statuses {
resp.Statuses = append(resp.Statuses, &control.VertexStatus{
ID: v.ID,
Vertex: string(v.Vertex),
Name: v.Name,
Total: v.Total,
Current: v.Current,
Timestamp: timestamppb.New(v.Timestamp),
Started: timestampToPB(v.Started),
Completed: timestampToPB(v.Completed),
})
}
for _, v := range s.Logs {
resp.Logs = append(resp.Logs, &control.VertexLog{
Vertex: string(v.Vertex),
Stream: int64(v.Stream),
Msg: v.Data,
Timestamp: timestamppb.New(v.Timestamp),
})
}
for _, v := range s.Warnings {
resp.Warnings = append(resp.Warnings, &control.VertexWarning{
Vertex: string(v.Vertex),
Level: int64(v.Level),
Short: v.Short,
Detail: v.Detail,
Url: v.URL,
Info: v.SourceInfo,
Ranges: v.Range,
})
}
return &resp
}
func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
s := client.SolveStatus{}
for _, v := range resp.Vertexes {
s.Vertexes = append(s.Vertexes, &client.Vertex{
Digest: digest.Digest(v.Digest),
Inputs: digestSliceFromPB(v.Inputs),
Name: v.Name,
Started: timestampFromPB(v.Started),
Completed: timestampFromPB(v.Completed),
Error: v.Error,
Cached: v.Cached,
ProgressGroup: v.ProgressGroup,
})
}
for _, v := range resp.Statuses {
s.Statuses = append(s.Statuses, &client.VertexStatus{
ID: v.ID,
Vertex: digest.Digest(v.Vertex),
Name: v.Name,
Total: v.Total,
Current: v.Current,
Timestamp: v.Timestamp.AsTime(),
Started: timestampFromPB(v.Started),
Completed: timestampFromPB(v.Completed),
})
}
for _, v := range resp.Logs {
s.Logs = append(s.Logs, &client.VertexLog{
Vertex: digest.Digest(v.Vertex),
Stream: int(v.Stream),
Data: v.Msg,
Timestamp: v.Timestamp.AsTime(),
})
}
for _, v := range resp.Warnings {
s.Warnings = append(s.Warnings, &client.VertexWarning{
Vertex: digest.Digest(v.Vertex),
Level: int(v.Level),
Short: v.Short,
Detail: v.Detail,
URL: v.Url,
SourceInfo: v.Info,
Range: v.Ranges,
})
}
return &s
}
func timestampFromPB(ts *timestamppb.Timestamp) *time.Time {
if ts == nil {
return nil
}
t := ts.AsTime()
if t.IsZero() {
return nil
}
return &t
}
func timestampToPB(ts *time.Time) *timestamppb.Timestamp {
if ts == nil {
return nil
}
return timestamppb.New(*ts)
}
func digestSliceFromPB(elems []string) []digest.Digest {
clone := make([]digest.Digest, len(elems))
for i, e := range elems {
clone[i] = digest.Digest(e)
}
return clone
}
func digestSliceToPB(elems []digest.Digest) []string {
clone := make([]string, len(elems))
for i, e := range elems {
clone[i] = string(e)
}
return clone
}


@ -1,22 +0,0 @@
package pb
import (
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/secrets/secretsprovider"
)
func CreateSecrets(secrets []*Secret) (session.Attachable, error) {
fs := make([]secretsprovider.Source, 0, len(secrets))
for _, secret := range secrets {
fs = append(fs, secretsprovider.Source{
ID: secret.ID,
FilePath: secret.FilePath,
Env: secret.Env,
})
}
store, err := secretsprovider.NewStore(fs)
if err != nil {
return nil, err
}
return secretsprovider.NewSecretProvider(store), nil
}


@ -1,20 +0,0 @@
package pb
import (
"slices"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/sshforward/sshprovider"
)
func CreateSSH(ssh []*SSH) (session.Attachable, error) {
configs := make([]sshprovider.AgentConfig, 0, len(ssh))
for _, ssh := range ssh {
cfg := sshprovider.AgentConfig{
ID: ssh.ID,
Paths: slices.Clone(ssh.Paths),
}
configs = append(configs, cfg)
}
return sshprovider.NewSSHAgentProvider(configs)
}


@ -1,243 +0,0 @@
package remote
import (
"context"
"io"
"sync"
"time"
"github.com/containerd/containerd/v2/defaults"
"github.com/containerd/containerd/v2/pkg/dialer"
"github.com/docker/buildx/build"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
"google.golang.org/grpc/backoff"
"google.golang.org/grpc/credentials/insecure"
)
func NewClient(ctx context.Context, addr string) (*Client, error) {
backoffConfig := backoff.DefaultConfig
backoffConfig.MaxDelay = 3 * time.Second
connParams := grpc.ConnectParams{
Backoff: backoffConfig,
}
gopts := []grpc.DialOption{
//nolint:staticcheck // ignore SA1019: WithBlock is deprecated and does not work with NewClient.
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithConnectParams(connParams),
grpc.WithContextDialer(dialer.ContextDialer),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
grpc.WithUnaryInterceptor(grpcerrors.UnaryClientInterceptor),
grpc.WithStreamInterceptor(grpcerrors.StreamClientInterceptor),
}
//nolint:staticcheck // ignore SA1019: Recommended NewClient has different behavior from DialContext.
conn, err := grpc.DialContext(ctx, dialer.DialAddress(addr), gopts...)
if err != nil {
return nil, err
}
return &Client{conn: conn}, nil
}
type Client struct {
conn *grpc.ClientConn
closeOnce sync.Once
}
func (c *Client) Close() (err error) {
c.closeOnce.Do(func() {
err = c.conn.Close()
})
return
}
func (c *Client) Version(ctx context.Context) (string, string, string, error) {
res, err := c.client().Info(ctx, &pb.InfoRequest{})
if err != nil {
return "", "", "", err
}
v := res.BuildxVersion
return v.Package, v.Version, v.Revision, nil
}
func (c *Client) List(ctx context.Context) (keys []string, retErr error) {
res, err := c.client().List(ctx, &pb.ListRequest{})
if err != nil {
return nil, err
}
return res.Keys, nil
}
func (c *Client) Disconnect(ctx context.Context, sessionID string) error {
if sessionID == "" {
return nil
}
_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{SessionID: sessionID})
return err
}
func (c *Client) ListProcesses(ctx context.Context, sessionID string) (infos []*pb.ProcessInfo, retErr error) {
res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{SessionID: sessionID})
if err != nil {
return nil, err
}
return res.Infos, nil
}
func (c *Client) DisconnectProcess(ctx context.Context, sessionID, pid string) error {
_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{SessionID: sessionID, ProcessID: pid})
return err
}
func (c *Client) Invoke(ctx context.Context, sessionID string, pid string, invokeConfig *pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if sessionID == "" || pid == "" {
return errors.New("build session ID and process ID must be specified")
}
stream, err := c.client().Invoke(ctx)
if err != nil {
return err
}
return attachIO(ctx, stream, &pb.InitMessage{SessionID: sessionID, ProcessID: pid, InvokeConfig: invokeConfig}, ioAttachConfig{
stdin: in,
stdout: stdout,
stderr: stderr,
// TODO: Signal, Resize
})
}
func (c *Client) Inspect(ctx context.Context, sessionID string) (*pb.InspectResponse, error) {
return c.client().Inspect(ctx, &pb.InspectRequest{SessionID: sessionID})
}
func (c *Client) Build(ctx context.Context, options *pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, *build.Inputs, error) {
ref := identity.NewID()
statusChan := make(chan *client.SolveStatus)
eg, egCtx := errgroup.WithContext(ctx)
var resp *client.SolveResponse
eg.Go(func() error {
defer close(statusChan)
var err error
resp, err = c.build(egCtx, ref, options, in, statusChan)
return err
})
eg.Go(func() error {
for s := range statusChan {
st := s
progress.Write(st)
}
return nil
})
return ref, resp, nil, eg.Wait()
}
func (c *Client) build(ctx context.Context, sessionID string, options *pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
eg, egCtx := errgroup.WithContext(ctx)
done := make(chan struct{})
var resp *client.SolveResponse
eg.Go(func() error {
defer close(done)
pbResp, err := c.client().Build(egCtx, &pb.BuildRequest{
SessionID: sessionID,
Options: options,
})
if err != nil {
return err
}
resp = &client.SolveResponse{
ExporterResponse: pbResp.ExporterResponse,
}
return nil
})
eg.Go(func() error {
stream, err := c.client().Status(egCtx, &pb.StatusRequest{
SessionID: sessionID,
})
if err != nil {
return err
}
for {
resp, err := stream.Recv()
if err != nil {
if err == io.EOF {
return nil
}
return errors.Wrap(err, "failed to receive status")
}
statusChan <- pb.FromControlStatus(resp)
}
})
if in != nil {
eg.Go(func() error {
stream, err := c.client().Input(egCtx)
if err != nil {
return err
}
if err := stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Init{
Init: &pb.InputInitMessage{
SessionID: sessionID,
},
},
}); err != nil {
return errors.Wrap(err, "failed to init input")
}
inReader, inWriter := io.Pipe()
eg2, _ := errgroup.WithContext(ctx)
eg2.Go(func() error {
<-done
return inWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(inWriter, in)
inWriter.Close()
}()
eg2.Go(func() error {
for {
buf := make([]byte, 32*1024)
n, err := inReader.Read(buf)
if err != nil {
if err == io.EOF {
break // break loop and send EOF
}
return err
} else if n > 0 {
if err := stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Data{
Data: &pb.DataMessage{
Data: buf[:n],
},
},
}); err != nil {
return err
}
}
}
return stream.Send(&pb.InputMessage{
Input: &pb.InputMessage_Data{
Data: &pb.DataMessage{
EOF: true,
},
},
})
})
return eg2.Wait()
})
}
return resp, eg.Wait()
}
func (c *Client) client() pb.ControllerClient {
return pb.NewControllerClient(c.conn)
}


@ -1,335 +0,0 @@
//go:build linux
package remote
import (
"context"
"fmt"
"io"
"net"
"os"
"os/exec"
"os/signal"
"path/filepath"
"strconv"
"syscall"
"time"
"github.com/containerd/log"
"github.com/docker/buildx/build"
cbuild "github.com/docker/buildx/controller/build"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/pelletier/go-toml"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"google.golang.org/grpc"
)
const (
serveCommandName = "_INTERNAL_SERVE"
)
var (
defaultLogFilename = fmt.Sprintf("buildx.%s.log", version.Revision)
defaultSocketFilename = fmt.Sprintf("buildx.%s.sock", version.Revision)
defaultPIDFilename = fmt.Sprintf("buildx.%s.pid", version.Revision)
)
type serverConfig struct {
// Specify buildx server root
Root string `toml:"root"`
// LogLevel sets the logging level [trace, debug, info, warn, error, fatal, panic]
LogLevel string `toml:"log_level"`
// Specify file to output buildx server log
LogFile string `toml:"log_file"`
}
func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
rootDir := opts.Root
if rootDir == "" {
rootDir = rootDataDir(dockerCli)
}
serverRoot := filepath.Join(rootDir, "shared")
// connect to buildx server if it is already running
ctx2, cancel := context.WithCancelCause(ctx)
ctx2, _ = context.WithTimeoutCause(ctx2, 1*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
c, err := newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
cancel(errors.WithStack(context.Canceled))
if err != nil {
if !errors.Is(err, context.DeadlineExceeded) {
return nil, errors.Wrap(err, "cannot connect to the buildx server")
}
} else {
return &buildxController{c, serverRoot}, nil
}
// start buildx server via subcommand
err = logger.Wrap("no buildx server found; launching...", func() error {
launchFlags := []string{}
if opts.ServerConfig != "" {
launchFlags = append(launchFlags, "--config", opts.ServerConfig)
}
logFile, err := getLogFilePath(dockerCli, opts.ServerConfig)
if err != nil {
return err
}
wait, err := launch(ctx, logFile, append([]string{serveCommandName}, launchFlags...)...)
if err != nil {
return err
}
go wait()
// wait for buildx server to be ready
ctx2, cancel = context.WithCancelCause(ctx)
ctx2, _ = context.WithTimeoutCause(ctx2, 10*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet,lostcancel // no need to manually cancel this context as we already rely on parent
c, err = newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
cancel(errors.WithStack(context.Canceled))
if err != nil {
return errors.Wrap(err, "cannot connect to the buildx server")
}
return nil
})
if err != nil {
return nil, err
}
return &buildxController{c, serverRoot}, nil
}
func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {
cmd.AddCommand(
serveCmd(dockerCli),
)
}
func serveCmd(dockerCli command.Cli) *cobra.Command {
var serverConfigPath string
cmd := &cobra.Command{
Use: fmt.Sprintf("%s [OPTIONS]", serveCommandName),
Hidden: true,
RunE: func(cmd *cobra.Command, args []string) error {
// Parse config
config, err := getConfig(dockerCli, serverConfigPath)
if err != nil {
return err
}
if config.LogLevel == "" {
logrus.SetLevel(logrus.InfoLevel)
} else {
lvl, err := logrus.ParseLevel(config.LogLevel)
if err != nil {
return errors.Wrap(err, "failed to prepare logger")
}
logrus.SetLevel(lvl)
}
logrus.SetFormatter(&logrus.JSONFormatter{
TimestampFormat: log.RFC3339NanoFixed,
})
root, err := prepareRootDir(dockerCli, config)
if err != nil {
return err
}
pidF := filepath.Join(root, defaultPIDFilename)
if err := os.WriteFile(pidF, fmt.Appendf(nil, "%d", os.Getpid()), 0600); err != nil {
return err
}
defer func() {
if err := os.Remove(pidF); err != nil {
logrus.Errorf("failed to clean up PID file %q: %v", pidF, err)
}
}()
// prepare server
b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, *build.Inputs, error) {
return cbuild.RunBuild(ctx, dockerCli, options, stdin, progress, true)
})
defer b.Close()
// serve server
addr := filepath.Join(root, defaultSocketFilename)
if err := os.Remove(addr); err != nil && !os.IsNotExist(err) { // avoid EADDRINUSE
return err
}
defer func() {
if err := os.Remove(addr); err != nil {
logrus.Errorf("failed to clean up socket %q: %v", addr, err)
}
}()
logrus.Infof("starting server at %q", addr)
l, err := net.Listen("unix", addr)
if err != nil {
return err
}
rpc := grpc.NewServer(
grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
)
controllerapi.RegisterControllerServer(rpc, b)
doneCh := make(chan struct{})
errCh := make(chan error, 1)
go func() {
defer close(doneCh)
if err := rpc.Serve(l); err != nil {
errCh <- errors.Wrapf(err, "error on serving via socket %q", addr)
}
}()
var s os.Signal
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT)
signal.Notify(sigCh, syscall.SIGTERM)
select {
case err := <-errCh:
logrus.Errorf("got error %s, exiting", err)
return err
case s = <-sigCh:
logrus.Infof("got signal %s, exiting", s)
return nil
case <-doneCh:
logrus.Infof("rpc server done, exiting")
return nil
}
},
}
flags := cmd.Flags()
flags.StringVar(&serverConfigPath, "config", "", "Specify buildx server config file")
return cmd
}
func getLogFilePath(dockerCli command.Cli, configPath string) (string, error) {
config, err := getConfig(dockerCli, configPath)
if err != nil {
return "", err
}
if config.LogFile == "" {
root, err := prepareRootDir(dockerCli, config)
if err != nil {
return "", err
}
return filepath.Join(root, defaultLogFilename), nil
}
return config.LogFile, nil
}
func getConfig(dockerCli command.Cli, configPath string) (*serverConfig, error) {
var defaultConfigPath bool
if configPath == "" {
defaultRoot := rootDataDir(dockerCli)
configPath = filepath.Join(defaultRoot, "config.toml")
defaultConfigPath = true
}
var config serverConfig
tree, err := toml.LoadFile(configPath)
if err != nil && !(os.IsNotExist(err) && defaultConfigPath) {
return nil, errors.Wrapf(err, "failed to read config %q", configPath)
} else if err == nil {
if err := tree.Unmarshal(&config); err != nil {
return nil, errors.Wrapf(err, "failed to unmarshal config %q", configPath)
}
}
return &config, nil
}
func prepareRootDir(dockerCli command.Cli, config *serverConfig) (string, error) {
rootDir := config.Root
if rootDir == "" {
rootDir = rootDataDir(dockerCli)
}
if rootDir == "" {
return "", errors.New("buildx root dir must be determined")
}
if err := os.MkdirAll(rootDir, 0700); err != nil {
return "", err
}
serverRoot := filepath.Join(rootDir, "shared")
if err := os.MkdirAll(serverRoot, 0700); err != nil {
return "", err
}
return serverRoot, nil
}
func rootDataDir(dockerCli command.Cli) string {
return filepath.Join(confutil.NewConfig(dockerCli).Dir(), "controller")
}
func newBuildxClientAndCheck(ctx context.Context, addr string) (*Client, error) {
c, err := NewClient(ctx, addr)
if err != nil {
return nil, err
}
p, v, r, err := c.Version(ctx)
if err != nil {
return nil, err
}
logrus.Debugf("connected to server (\"%v %v %v\")", p, v, r)
if !(p == version.Package && v == version.Version && r == version.Revision) {
return nil, errors.Errorf("version mismatch (client: \"%v %v %v\", server: \"%v %v %v\")", version.Package, version.Version, version.Revision, p, v, r)
}
return c, nil
}
type buildxController struct {
*Client
serverRoot string
}
func (c *buildxController) Kill(ctx context.Context) error {
pidB, err := os.ReadFile(filepath.Join(c.serverRoot, defaultPIDFilename))
if err != nil {
return err
}
pid, err := strconv.ParseInt(string(pidB), 10, 64)
if err != nil {
return err
}
if pid <= 0 {
return errors.New("no PID is recorded for buildx server")
}
p, err := os.FindProcess(int(pid))
if err != nil {
return err
}
if err := p.Signal(syscall.SIGINT); err != nil {
return err
}
// TODO: Should we send SIGKILL if process doesn't finish?
return nil
}
func launch(ctx context.Context, logFile string, args ...string) (func() error, error) {
// set absolute path of binary, since we set the working directory to the root
pathname, err := os.Executable()
if err != nil {
return nil, err
}
bCmd := exec.CommandContext(ctx, pathname, args...)
if logFile != "" {
f, err := os.OpenFile(logFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return nil, err
}
defer f.Close()
bCmd.Stdout = f
bCmd.Stderr = f
}
bCmd.Stdin = nil
bCmd.Dir = "/"
bCmd.SysProcAttr = &syscall.SysProcAttr{
Setsid: true,
}
if err := bCmd.Start(); err != nil {
return nil, err
}
return bCmd.Wait, nil
}

View File

@ -1,19 +0,0 @@
//go:build !linux
package remote
import (
"context"
"github.com/docker/buildx/controller/control"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
return nil, errors.New("remote buildx unsupported")
}
func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {}

View File

@ -1,430 +0,0 @@
package remote
import (
"context"
"io"
"syscall"
"time"
"github.com/docker/buildx/controller/pb"
"github.com/moby/sys/signal"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
type msgStream interface {
Send(*pb.Message) error
Recv() (*pb.Message, error)
}
type ioServerConfig struct {
stdin io.WriteCloser
stdout, stderr io.ReadCloser
// signalFn is a callback invoked when a signal message is received from the client.
signalFn func(context.Context, syscall.Signal) error
// resizeFn is a callback invoked when a resize message is received from the client.
resizeFn func(context.Context, winSize) error
}
func serveIO(attachCtx context.Context, srv msgStream, initFn func(*pb.InitMessage) error, ioConfig *ioServerConfig) (err error) {
stdin, stdout, stderr := ioConfig.stdin, ioConfig.stdout, ioConfig.stderr
stream := &debugStream{srv, "server=" + time.Now().String()}
eg, ctx := errgroup.WithContext(attachCtx)
done := make(chan struct{})
msg, err := receive(ctx, stream)
if err != nil {
return err
}
init := msg.GetInit()
if init == nil {
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
}
sessionID := init.SessionID
if sessionID == "" {
return errors.New("no session ID is provided")
}
if err := initFn(init); err != nil {
return errors.Wrap(err, "failed to initialize IO server")
}
if stdout != nil {
stdoutReader, stdoutWriter := io.Pipe()
eg.Go(func() error {
<-done
return stdoutWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stdoutWriter, stdout)
stdoutWriter.Close()
}()
eg.Go(func() error {
defer stdoutReader.Close()
return copyToStream(1, stream, stdoutReader)
})
}
if stderr != nil {
stderrReader, stderrWriter := io.Pipe()
eg.Go(func() error {
<-done
return stderrWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stderrWriter, stderr)
stderrWriter.Close()
}()
eg.Go(func() error {
defer stderrReader.Close()
return copyToStream(2, stream, stderrReader)
})
}
msgCh := make(chan *pb.Message)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := receive(ctx, stream)
if err != nil {
return err
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() error {
defer close(done)
for {
var msg *pb.Message
select {
case msg = <-msgCh:
case <-ctx.Done():
return nil
}
if msg == nil {
return nil
}
if file := msg.GetFile(); file != nil {
if file.Fd != 0 {
return errors.Errorf("unexpected fd: %v", file.Fd)
}
if stdin == nil {
continue // no stdin destination is specified so ignore the data
}
if len(file.Data) > 0 {
_, err := stdin.Write(file.Data)
if err != nil {
return err
}
}
if file.EOF {
stdin.Close()
}
} else if resize := msg.GetResize(); resize != nil {
if ioConfig.resizeFn != nil {
ioConfig.resizeFn(ctx, winSize{
cols: resize.Cols,
rows: resize.Rows,
})
}
} else if sig := msg.GetSignal(); sig != nil {
if ioConfig.signalFn != nil {
syscallSignal, ok := signal.SignalMap[sig.Name]
if !ok {
continue
}
ioConfig.signalFn(ctx, syscallSignal)
}
} else {
return errors.Errorf("unexpected message: %T", msg.GetInput())
}
}
})
return eg.Wait()
}
type ioAttachConfig struct {
stdin io.ReadCloser
stdout, stderr io.WriteCloser
signal <-chan syscall.Signal
resize <-chan winSize
}
type winSize struct {
rows uint32
cols uint32
}
func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage, cfg ioAttachConfig) (retErr error) {
eg, ctx := errgroup.WithContext(ctx)
done := make(chan struct{})
if err := stream.Send(&pb.Message{
Input: &pb.Message_Init{
Init: initMessage,
},
}); err != nil {
return errors.Wrap(err, "failed to init")
}
if cfg.stdin != nil {
stdinReader, stdinWriter := io.Pipe()
eg.Go(func() error {
<-done
return stdinWriter.Close()
})
go func() {
// do not wait for read completion but return here and let the caller send EOF
// this allows us to return on ctx.Done() without being blocked by this reader.
io.Copy(stdinWriter, cfg.stdin)
stdinWriter.Close()
}()
eg.Go(func() error {
defer stdinReader.Close()
return copyToStream(0, stream, stdinReader)
})
}
if cfg.signal != nil {
eg.Go(func() error {
names := signalNames()
for {
var sig syscall.Signal
select {
case sig = <-cfg.signal:
case <-done:
return nil
case <-ctx.Done():
return nil
}
name := names[sig]
if name == "" {
continue
}
if err := stream.Send(&pb.Message{
Input: &pb.Message_Signal{
Signal: &pb.SignalMessage{
Name: name,
},
},
}); err != nil {
return errors.Wrap(err, "failed to send signal")
}
}
})
}
if cfg.resize != nil {
eg.Go(func() error {
for {
var win winSize
select {
case win = <-cfg.resize:
case <-done:
return nil
case <-ctx.Done():
return nil
}
if err := stream.Send(&pb.Message{
Input: &pb.Message_Resize{
Resize: &pb.ResizeMessage{
Rows: win.rows,
Cols: win.cols,
},
},
}); err != nil {
return errors.Wrap(err, "failed to send resize")
}
}
})
}
msgCh := make(chan *pb.Message)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := receive(ctx, stream)
if err != nil {
return err
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() error {
eofs := make(map[uint32]struct{})
defer close(done)
for {
var msg *pb.Message
select {
case msg = <-msgCh:
case <-ctx.Done():
return nil
}
if msg == nil {
return nil
}
if file := msg.GetFile(); file != nil {
if _, ok := eofs[file.Fd]; ok {
continue
}
var out io.WriteCloser
switch file.Fd {
case 1:
out = cfg.stdout
case 2:
out = cfg.stderr
default:
return errors.Errorf("unsupported fd %d", file.Fd)
}
if out == nil {
logrus.Warnf("attachIO: no writer for fd %d", file.Fd)
continue
}
if len(file.Data) > 0 {
if _, err := out.Write(file.Data); err != nil {
return err
}
}
if file.EOF {
eofs[file.Fd] = struct{}{}
}
} else {
return errors.Errorf("unexpected message: %T", msg.GetInput())
}
}
})
return eg.Wait()
}
func receive(ctx context.Context, stream msgStream) (*pb.Message, error) {
msgCh := make(chan *pb.Message)
errCh := make(chan error)
go func() {
msg, err := stream.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
return
}
errCh <- err
return
}
msgCh <- msg
}()
select {
case msg := <-msgCh:
return msg, nil
case err := <-errCh:
return nil, err
case <-ctx.Done():
return nil, context.Cause(ctx)
}
}
func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
for {
buf := make([]byte, 32*1024)
n, err := r.Read(buf)
if err != nil {
if err == io.EOF {
break // break loop and send EOF
}
return err
} else if n > 0 {
if err := snd.Send(&pb.Message{
Input: &pb.Message_File{
File: &pb.FdMessage{
Fd: fd,
Data: buf[:n],
},
},
}); err != nil {
return err
}
}
}
return snd.Send(&pb.Message{
Input: &pb.Message_File{
File: &pb.FdMessage{
Fd: fd,
EOF: true,
},
},
})
}
func signalNames() map[syscall.Signal]string {
m := make(map[syscall.Signal]string, len(signal.SignalMap))
for name, value := range signal.SignalMap {
m[value] = name
}
return m
}
type debugStream struct {
msgStream
prefix string
}
func (s *debugStream) Send(msg *pb.Message) error {
switch m := msg.GetInput().(type) {
case *pb.Message_File:
if m.File.EOF {
logrus.Debugf("|---> File Message (sender:%v) fd=%d, EOF", s.prefix, m.File.Fd)
} else {
logrus.Debugf("|---> File Message (sender:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
}
case *pb.Message_Resize:
logrus.Debugf("|---> Resize Message (sender:%v): %+v", s.prefix, m.Resize)
case *pb.Message_Signal:
logrus.Debugf("|---> Signal Message (sender:%v): %s", s.prefix, m.Signal.Name)
}
return s.msgStream.Send(msg)
}
func (s *debugStream) Recv() (*pb.Message, error) {
msg, err := s.msgStream.Recv()
if err != nil {
return nil, err
}
switch m := msg.GetInput().(type) {
case *pb.Message_File:
if m.File.EOF {
logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, EOF", s.prefix, m.File.Fd)
} else {
logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
}
case *pb.Message_Resize:
logrus.Debugf("|<--- Resize Message (receiver:%v): %+v", s.prefix, m.Resize)
case *pb.Message_Signal:
logrus.Debugf("|<--- Signal Message (receiver:%v): %s", s.prefix, m.Signal.Name)
}
return msg, nil
}

View File

@ -1,445 +0,0 @@
package remote
import (
"context"
"io"
"sync"
"sync/atomic"
"time"
"github.com/docker/buildx/build"
controllererrors "github.com/docker/buildx/controller/errdefs"
"github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/controller/processes"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/version"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, inp *build.Inputs, err error)
func NewServer(buildFunc BuildFunc) *Server {
return &Server{
buildFunc: buildFunc,
}
}
type Server struct {
buildFunc BuildFunc
session map[string]*session
sessionMu sync.Mutex
}
type session struct {
buildOnGoing atomic.Bool
statusChan chan *pb.StatusResponse
cancelBuild func(error)
buildOptions *pb.BuildOptions
inputPipe *io.PipeWriter
result *build.ResultHandle
processes *processes.Manager
}
func (s *session) cancelRunningProcesses() {
s.processes.CancelRunningProcesses()
}
func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest) (res *pb.ListProcessesResponse, err error) {
m.sessionMu.Lock()
defer m.sessionMu.Unlock()
s, ok := m.session[req.SessionID]
if !ok {
return nil, errors.Errorf("unknown session ID %q", req.SessionID)
}
res = new(pb.ListProcessesResponse)
res.Infos = append(res.Infos, s.processes.ListProcesses()...)
return res, nil
}
func (m *Server) DisconnectProcess(ctx context.Context, req *pb.DisconnectProcessRequest) (res *pb.DisconnectProcessResponse, err error) {
m.sessionMu.Lock()
defer m.sessionMu.Unlock()
s, ok := m.session[req.SessionID]
if !ok {
return nil, errors.Errorf("unknown session ID %q", req.SessionID)
}
return res, s.processes.DeleteProcess(req.ProcessID)
}
func (m *Server) Info(ctx context.Context, req *pb.InfoRequest) (res *pb.InfoResponse, err error) {
return &pb.InfoResponse{
BuildxVersion: &pb.BuildxVersion{
Package: version.Package,
Version: version.Version,
Revision: version.Revision,
},
}, nil
}
func (m *Server) List(ctx context.Context, req *pb.ListRequest) (res *pb.ListResponse, err error) {
keys := make(map[string]struct{})
m.sessionMu.Lock()
for k := range m.session {
keys[k] = struct{}{}
}
m.sessionMu.Unlock()
var keysL []string
for k := range keys {
keysL = append(keysL, k)
}
return &pb.ListResponse{
Keys: keysL,
}, nil
}
func (m *Server) Disconnect(ctx context.Context, req *pb.DisconnectRequest) (res *pb.DisconnectResponse, err error) {
sessionID := req.SessionID
if sessionID == "" {
return nil, errors.New("disconnect: empty session ID")
}
m.sessionMu.Lock()
if s, ok := m.session[sessionID]; ok {
if s.cancelBuild != nil {
s.cancelBuild(errors.WithStack(context.Canceled))
}
s.cancelRunningProcesses()
if s.result != nil {
s.result.Done()
}
}
delete(m.session, sessionID)
m.sessionMu.Unlock()
return &pb.DisconnectResponse{}, nil
}
func (m *Server) Close() error {
m.sessionMu.Lock()
for k := range m.session {
if s, ok := m.session[k]; ok {
if s.cancelBuild != nil {
s.cancelBuild(errors.WithStack(context.Canceled))
}
s.cancelRunningProcesses()
}
}
m.sessionMu.Unlock()
return nil
}
func (m *Server) Inspect(ctx context.Context, req *pb.InspectRequest) (*pb.InspectResponse, error) {
sessionID := req.SessionID
if sessionID == "" {
return nil, errors.New("inspect: empty session ID")
}
var bo *pb.BuildOptions
m.sessionMu.Lock()
if s, ok := m.session[sessionID]; ok {
bo = s.buildOptions
} else {
m.sessionMu.Unlock()
return nil, errors.Errorf("inspect: unknown key %v", sessionID)
}
m.sessionMu.Unlock()
return &pb.InspectResponse{Options: bo}, nil
}
func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResponse, error) {
sessionID := req.SessionID
if sessionID == "" {
return nil, errors.New("build: empty session ID")
}
// Prepare status channel and session
m.sessionMu.Lock()
if m.session == nil {
m.session = make(map[string]*session)
}
s, ok := m.session[sessionID]
if ok {
if !s.buildOnGoing.CompareAndSwap(false, true) {
m.sessionMu.Unlock()
return &pb.BuildResponse{}, errors.New("build ongoing")
}
s.cancelRunningProcesses()
s.result = nil
} else {
s = &session{}
s.buildOnGoing.Store(true)
}
s.processes = processes.NewManager()
statusChan := make(chan *pb.StatusResponse)
s.statusChan = statusChan
inR, inW := io.Pipe()
defer inR.Close()
s.inputPipe = inW
m.session[sessionID] = s
m.sessionMu.Unlock()
defer func() {
close(statusChan)
m.sessionMu.Lock()
s, ok := m.session[sessionID]
if ok {
s.statusChan = nil
s.buildOnGoing.Store(false)
}
m.sessionMu.Unlock()
}()
pw := pb.NewProgressWriter(statusChan)
// Build the specified request
ctx, cancel := context.WithCancelCause(ctx)
defer func() { cancel(errors.WithStack(context.Canceled)) }()
resp, res, _, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
m.sessionMu.Lock()
if s, ok := m.session[sessionID]; ok {
// NOTE: buildFunc can return *build.ResultHandle even on error (e.g. when it's implemented using (github.com/docker/buildx/controller/build).RunBuild).
if res != nil {
s.result = res
s.cancelBuild = cancel
s.buildOptions = req.Options
m.session[sessionID] = s
if buildErr != nil {
var ref string
var ebr *desktop.ErrorWithBuildRef
if errors.As(buildErr, &ebr) {
ref = ebr.Ref
}
buildErr = controllererrors.WrapBuild(buildErr, sessionID, ref)
}
}
} else {
m.sessionMu.Unlock()
return nil, errors.Errorf("build: unknown session ID %v", sessionID)
}
m.sessionMu.Unlock()
if buildErr != nil {
return nil, buildErr
}
if resp == nil {
resp = &client.SolveResponse{}
}
return &pb.BuildResponse{
ExporterResponse: resp.ExporterResponse,
}, nil
}
func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer) error {
sessionID := req.SessionID
if sessionID == "" {
return errors.New("status: empty session ID")
}
// Wait and get status channel prepared by Build()
var statusChan <-chan *pb.StatusResponse
for {
// TODO: timeout?
m.sessionMu.Lock()
if _, ok := m.session[sessionID]; !ok || m.session[sessionID].statusChan == nil {
m.sessionMu.Unlock()
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
continue
}
statusChan = m.session[sessionID].statusChan
m.sessionMu.Unlock()
break
}
// forward status
for ss := range statusChan {
if ss == nil {
break
}
if err := stream.Send(ss); err != nil {
return err
}
}
return nil
}
func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
// Get the target ref from init message
msg, err := stream.Recv()
if err != nil {
if !errors.Is(err, io.EOF) {
return err
}
return nil
}
init := msg.GetInit()
if init == nil {
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInit())
}
sessionID := init.SessionID
if sessionID == "" {
return errors.New("input: no session ID is provided")
}
// Wait and get input stream pipe prepared by Build()
var inputPipeW *io.PipeWriter
for {
// TODO: timeout?
m.sessionMu.Lock()
if _, ok := m.session[sessionID]; !ok || m.session[sessionID].inputPipe == nil {
m.sessionMu.Unlock()
time.Sleep(time.Millisecond) // TODO: wait Build without busy loop and make it cancellable
continue
}
inputPipeW = m.session[sessionID].inputPipe
m.sessionMu.Unlock()
break
}
// Forward input stream
eg, ctx := errgroup.WithContext(context.TODO())
done := make(chan struct{})
msgCh := make(chan *pb.InputMessage)
eg.Go(func() error {
defer close(msgCh)
for {
msg, err := stream.Recv()
if err != nil {
if !errors.Is(err, io.EOF) {
return err
}
return nil
}
select {
case msgCh <- msg:
case <-done:
return nil
case <-ctx.Done():
return nil
}
}
})
eg.Go(func() (retErr error) {
defer close(done)
defer func() {
if retErr != nil {
inputPipeW.CloseWithError(retErr)
return
}
inputPipeW.Close()
}()
for {
var msg *pb.InputMessage
select {
case msg = <-msgCh:
case <-ctx.Done():
return context.Cause(ctx)
}
if msg == nil {
return nil
}
if data := msg.GetData(); data != nil {
if len(data.Data) > 0 {
_, err := inputPipeW.Write(data.Data)
if err != nil {
return err
}
}
if data.EOF {
return nil
}
}
}
})
return eg.Wait()
}
func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
containerIn, containerOut := ioset.Pipe()
defer func() { containerOut.Close(); containerIn.Close() }()
initDoneCh := make(chan *processes.Process)
initErrCh := make(chan error)
eg, egCtx := errgroup.WithContext(context.TODO())
srvIOCtx, srvIOCancel := context.WithCancelCause(egCtx)
eg.Go(func() error {
defer srvIOCancel(errors.WithStack(context.Canceled))
return serveIO(srvIOCtx, srv, func(initMessage *pb.InitMessage) (retErr error) {
defer func() {
if retErr != nil {
initErrCh <- retErr
}
}()
sessionID := initMessage.SessionID
cfg := initMessage.InvokeConfig
m.sessionMu.Lock()
s, ok := m.session[sessionID]
if !ok {
m.sessionMu.Unlock()
return errors.Errorf("invoke: unknown session ID %v", sessionID)
}
m.sessionMu.Unlock()
pid := initMessage.ProcessID
if pid == "" {
return errors.Errorf("invoke: specify process ID")
}
proc, ok := s.processes.Get(pid)
if !ok {
// Start a new process.
if cfg == nil {
return errors.New("no container config is provided")
}
var err error
proc, err = s.processes.StartProcess(pid, s.result, cfg)
if err != nil {
return err
}
}
// Attach containerIn to this process
proc.ForwardIO(&containerIn, srvIOCancel)
initDoneCh <- proc
return nil
}, &ioServerConfig{
stdin: containerOut.Stdin,
stdout: containerOut.Stdout,
stderr: containerOut.Stderr,
// TODO: signal, resize
})
})
eg.Go(func() (rErr error) {
defer srvIOCancel(errors.WithStack(context.Canceled))
// Wait for init done
var proc *processes.Process
select {
case p := <-initDoneCh:
proc = p
case err := <-initErrCh:
return err
case <-egCtx.Done():
return egCtx.Err()
}
// Wait for IO done
select {
case <-srvIOCtx.Done():
return srvIOCtx.Err()
case err := <-proc.Done():
return err
case <-egCtx.Done():
return egCtx.Err()
}
})
return eg.Wait()
}

View File

@ -38,6 +38,9 @@ target "lint" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/lint.Dockerfile"
output = ["type=cacheonly"]
args = {
GOLANGCI_FROM_SOURCE = "true"
}
platforms = GOLANGCI_LINT_MULTIPLATFORM != "" ? [
"darwin/amd64",
"darwin/arm64",
@ -70,6 +73,13 @@ target "lint-gopls" {
target = "gopls-analyze"
}
target "modernize-fix" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/lint.Dockerfile"
target = "modernize-fix"
output = ["."]
}
target "validate-vendor" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@ -95,13 +105,6 @@ target "validate-authors" {
output = ["type=cacheonly"]
}
target "validate-generated-files" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
target = "validate"
output = ["type=cacheonly"]
}
target "update-vendor" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"
@ -127,13 +130,6 @@ target "update-authors" {
output = ["."]
}
target "update-generated-files" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/generated-files.Dockerfile"
target = "update"
output = ["."]
}
target "mod-outdated" {
inherits = ["_common"]
dockerfile = "./hack/dockerfiles/vendor.Dockerfile"

View File

@ -227,6 +227,8 @@ The following table shows the complete list of attributes that you can assign to
| [`description`](#targetdescription) | String | Description of a target |
| [`dockerfile-inline`](#targetdockerfile-inline) | String | Inline Dockerfile string |
| [`dockerfile`](#targetdockerfile) | String | Dockerfile location |
| [`entitlements`](#targetentitlements) | List | Permissions that the build process requires to run |
| [`extra-hosts`](#targetextra-hosts) | List | Custom host-to-IP mappings |
| [`inherits`](#targetinherits) | List | Inherit attributes from other targets |
| [`labels`](#targetlabels) | Map | Metadata for images |
| [`matrix`](#targetmatrix) | Map | Define a set of variables that forks a target into multiple targets. |
@ -297,7 +299,12 @@ example adds annotations to both the image index and manifests.
```hcl
target "default" {
output = [{ type = "image", name = "foo" }]
output = [
{
type = "image"
name = "foo"
}
]
annotations = ["index,manifest:org.opencontainers.image.authors=dvdksn"]
}
```
@ -314,11 +321,11 @@ This attribute accepts the long-form CSV version of attestation parameters.
target "default" {
attest = [
{
type = "provenance",
mode = "max",
type = "provenance"
mode = "max"
},
{
type = "sbom",
type = "sbom"
}
]
}
@ -336,12 +343,12 @@ This takes a list value, so you can specify multiple cache sources.
target "app" {
cache-from = [
{
type = "s3",
region = "eu-west-1",
type = "s3"
region = "eu-west-1"
bucket = "mybucket"
},
{
type = "registry",
type = "registry"
ref = "user/repo:cache"
}
]
@ -360,12 +367,12 @@ This takes a list value, so you can specify multiple cache export targets.
target "app" {
cache-to = [
{
type = "s3",
region = "eu-west-1",
type = "s3"
region = "eu-west-1"
bucket = "mybucket"
},
{
type = "inline",
type = "inline"
}
]
}
@ -445,9 +452,9 @@ a context based on the pattern of the context value.
```hcl
# docker-bake.hcl
target "app" {
contexts = {
alpine = "docker-image://alpine:3.13"
}
contexts = {
alpine = "docker-image://alpine:3.13"
}
}
```
@ -462,9 +469,9 @@ RUN echo "Hello world"
```hcl
# docker-bake.hcl
target "app" {
contexts = {
src = "../path/to/source"
}
contexts = {
src = "../path/to/source"
}
}
```
@ -485,12 +492,13 @@ COPY --from=src . .
```hcl
# docker-bake.hcl
target "base" {
dockerfile = "baseapp.Dockerfile"
dockerfile = "baseapp.Dockerfile"
}
target "app" {
contexts = {
baseapp = "target:base"
}
contexts = {
baseapp = "target:base"
}
}
```
@ -507,11 +515,11 @@ functionality.
```hcl
target "lint" {
description = "Runs golangci-lint to detect style errors"
args = {
GOLANGCI_LINT_VERSION = null
}
dockerfile = "lint.Dockerfile"
description = "Runs golangci-lint to detect style errors"
args = {
GOLANGCI_LINT_VERSION = null
}
dockerfile = "lint.Dockerfile"
}
```
@ -577,6 +585,20 @@ target "integration-tests" {
Entitlements are enabled with a two-step process. First, a target must declare the entitlements it requires. Secondly, when invoking the `bake` command, the user must grant the entitlements by passing the `--allow` flag or confirming the entitlements when prompted in an interactive terminal. This is to ensure that the user is aware of the possibly insecure permissions they are granting to the build process.
### `target.extra-hosts`
Use the `extra-hosts` attribute to define custom host-to-IP mappings for the
target. This has the same effect as passing a [`--add-host`][add-host] flag to
the build command.
```hcl
target "default" {
extra-hosts = {
my_hostname = "8.8.8.8"
}
}
```
### `target.inherits`
A target can inherit attributes from other targets.
@ -913,8 +935,15 @@ variable "HOME" {
target "default" {
secret = [
{ type = "env", id = "KUBECONFIG" },
{ type = "file", id = "aws", src = "${HOME}/.aws/credentials" },
{
type = "env"
id = "KUBECONFIG"
},
{
type = "file"
id = "aws"
src = "${HOME}/.aws/credentials"
}
]
}
```
@ -1068,6 +1097,7 @@ or interpolate them in attribute values in your Bake file.
```hcl
variable "TAG" {
type = string
default = "latest"
}
@ -1089,6 +1119,206 @@ overriding the default `latest` value shown in the previous example.
$ TAG=dev docker buildx bake webapp-dev
```
Variables can also be assigned an explicit type.
If provided, it will be used to validate the default value (if set), as well as any overrides.
This is particularly useful when using complex types which are intended to be overridden.
The previous example could be expanded to apply an arbitrary series of tags.
```hcl
variable "TAGS" {
default = ["latest"]
type = list(string)
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = [for tag in TAGS: "docker.io/username/webapp:${tag}"]
}
```
This example shows how to generate three tags without changing the file
or using custom functions/parsing:
```console
$ TAGS=dev,latest,2 docker buildx bake webapp-dev
```
### Variable typing
The following primitive types are available:
* `string`
* `number`
* `bool`
The type is expressed like a keyword; it must be written as a bare literal:
```hcl
variable "OK" {
type = string
}
# cannot be an actual string
variable "BAD" {
type = "string"
}
# cannot be the result of an expression
variable "ALSO_BAD" {
type = lower("string")
}
```
Specifying primitive types can be valuable to show intent (especially when a default is not provided),
but bake will generally behave as expected without explicit typing.
Complex types are expressed with "type constructors"; they are:
* `tuple([<type>,...])`
* `list(<type>)`
* `set(<type>)`
* `map(<type>)`
* `object({<attr> = <type>, ...})`
The following are examples of each of those, as well as how the (optional) default value would be expressed:
```hcl
# structured way to express "1.2.3-alpha"
variable "MY_VERSION" {
type = tuple([number, number, number, string])
default = [1, 2, 3, "alpha"]
}
# JDK versions used in a matrix build
variable "JDK_VERSIONS" {
type = list(number)
default = [11, 17, 21]
}
# better way to express the previous example; this will also
# enforce set semantics and allow use of set-based functions
variable "JDK_VERSIONS" {
type = set(number)
default = [11, 17, 21]
}
# with the help of lookup(), translate a 'feature' to a tag
variable "FEATURE_TO_NAME" {
type = map(string)
default = {featureA = "slim", featureB = "tiny"}
}
# map a branch name to a registry location
variable "PUSH_DESTINATION" {
type = object({branch = string, registry = string})
default = {branch = "main", registry = "prod-registry.invalid.com"}
}
# make the previous example more useful with composition
variable "PUSH_DESTINATIONS" {
type = list(object({branch = string, registry = string}))
default = [
{branch = "develop", registry = "test-registry.invalid.com"},
{branch = "main", registry = "prod-registry.invalid.com"},
]
}
```
Note that in each example, the default value would be valid even if typing were not present.
If typing were omitted, the first three would all be considered `tuple`;
you would be restricted to functions that operate on `tuple` and, for example, not be able to add elements.
Similarly, the `map` and `object` examples would both be considered `object`, with the limits and semantics of that type.
In short, in the absence of a type, any value delimited with `[]` is a `tuple`
and any value delimited with `{}` is an `object`.
Explicit typing for complex types not only opens up the ability to use functions applicable to that specialized type,
but is also a precondition for providing overrides.
> [!NOTE]
> See [HCL Type Expressions][typeexpr] page for more details.
### Overriding variables
As mentioned in the [intro to variables](#variable), primitive types (`string`, `number`, and `bool`)
can be overridden without typing and will generally behave as expected.
(When explicit typing is not provided, a variable is assumed to be primitive when the default value lacks `{}` or `[]` delimiters;
a variable with neither typing nor a default value is treated as `string`.)
Naturally, these same overrides can be used alongside explicit typing too;
they may help in edge cases where you want `VAR=true` to be a `string`, where without typing,
it may be a `string` or a `bool` depending on how/where it's used.
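The delimiter-based inference described above can be sketched in a few lines of Go. This is only an illustration of the heuristic as stated, not bake's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// guessKind loosely mimics the rule described above: a default wrapped in
// [] looks like a tuple, one wrapped in {} looks like an object, anything
// else is primitive, and no default at all falls back to string.
func guessKind(defaultValue string) string {
	v := strings.TrimSpace(defaultValue)
	switch {
	case strings.HasPrefix(v, "[") && strings.HasSuffix(v, "]"):
		return "tuple"
	case strings.HasPrefix(v, "{") && strings.HasSuffix(v, "}"):
		return "object"
	case v == "":
		return "string" // neither typing nor a default: treated as string
	default:
		return "primitive"
	}
}

func main() {
	fmt.Println(guessKind(`["latest"]`))        // tuple
	fmt.Println(guessKind(`{branch = "main"}`)) // object
	fmt.Println(guessKind(`latest`))            // primitive
	fmt.Println(guessKind(``))                  // string
}
```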
Overriding a variable with a complex type can only be done when the type is provided.
This is still done via environment variables, but the values can be provided via CSV or JSON.
#### CSV overrides
This is considered the canonical method and is well suited to interactive usage.
It is assumed that `list` and `set` will be the most common complex types,
as well as the ones most commonly designed to be overridden.
Thus, there is full CSV support for `list` and `set`
(and `tuple`; despite being considered a structural type, it is more like a collection type in this regard).
There is limited support for `map` and `object` and no support for composite types;
for these advanced cases, an alternative mechanism [using JSON](#json-overrides) is available.
#### JSON overrides
Overrides can also be provided via JSON.
This is the only method available for providing some complex types and may be convenient if overrides are already JSON
(for example, if they come from a JSON API).
It can also be used for values that are difficult or impossible to specify using CSV (e.g., values containing quotes or commas).
To use JSON, simply append `_JSON` to the variable name.
In this contrived example, CSV cannot handle the second value; even though `list(string)` is a CSV-supported type, JSON must be used:
```hcl
variable "VALS" {
type = list(string)
default = ["some", "list"]
}
```
```console
$ cat data.json
["hello","with,comma","with\"quote"]
$ VALS_JSON=$(< data.json) docker buildx bake
# CSV equivalent, though the second value cannot be expressed at all
$ VALS='hello,"with""quote"' docker buildx bake
```
This example illustrates some precedence and usage rules:
```hcl
variable "FOO" {
type = string
default = "foo"
}
variable "FOO_JSON" {
type = string
default = "foo"
}
```
The variable `FOO` can *only* be overridden using CSV because `FOO_JSON`, which would typically be used for a JSON override,
is already a defined variable.
Since `FOO_JSON` is an actual variable, setting that environment variable is expected to provide a CSV value.
A JSON override *is* possible for this variable, using environment variable `FOO_JSON_JSON`.
```console
# These three are all equivalent, setting variable FOO=bar
$ FOO=bar docker buildx bake <...>
$ FOO='bar' docker buildx bake <...>
$ FOO="bar" docker buildx bake <...>
# Sets *only* variable FOO_JSON; FOO is untouched
$ FOO_JSON=bar docker buildx bake <...>
# This also sets FOO_JSON, but will fail due to not being valid JSON
$ FOO_JSON_JSON=bar docker buildx bake <...>
# These are all equivalent
$ cat data.json
"bar"
$ FOO_JSON_JSON=$(< data.json) docker buildx bake <...>
$ FOO_JSON_JSON='"bar"' docker buildx bake <...>
$ FOO_JSON=bar docker buildx bake <...>
# This results in setting two different variables, both specified as CSV (FOO=bar and FOO_JSON="baz")
$ FOO=bar FOO_JSON='"baz"' docker buildx bake <...>
# These refer to the same variable with FOO_JSON_JSON having precedence and read as JSON (FOO_JSON=baz)
$ FOO_JSON=bar FOO_JSON_JSON='"baz"' docker buildx bake <...>
```
### Built-in variables
The following variables are built-ins that you can use with Bake without having
@ -1208,6 +1438,7 @@ target "webapp-dev" {
<!-- external links -->
[add-host]: https://docs.docker.com/reference/cli/docker/buildx/build/#add-host
[attestations]: https://docs.docker.com/build/attestations/
[bake_stdlib]: https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go
[build-arg]: https://docs.docker.com/reference/cli/docker/image/build/#build-arg
@ -1226,4 +1457,5 @@ target "webapp-dev" {
[ssh]: https://docs.docker.com/reference/cli/docker/buildx/build/#ssh
[tag]: https://docs.docker.com/reference/cli/docker/image/build/#tag
[target]: https://docs.docker.com/reference/cli/docker/image/build/#target
[typeexpr]: https://github.com/hashicorp/hcl/tree/main/ext/typeexpr
[userfunc]: https://github.com/hashicorp/hcl/tree/main/ext/userfunc


@ -26,7 +26,6 @@ Arguments available after `buildx debug build` are the same as the normal `build
```console
$ docker buildx debug --invoke /bin/sh build .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
@ -68,7 +67,6 @@ If you want to start a debug session when a build fails, you can use
```console
$ docker buildx debug --on=error build .
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
@ -94,7 +92,6 @@ can use `buildx debug` command to start a debug session.
```
$ docker buildx debug
[+] Building 4.2s (19/19) FINISHED
=> [internal] connecting to local controller 0.0s
(buildx)
```
@ -125,41 +122,3 @@ Available commands are:
rollback re-runs the interactive container with the step's rootfs contents
```
## Build controllers
Debugging is performed using a buildx "controller", which provides a high-level
abstraction to perform builds. By default, the local controller is used for a
more stable experience which runs all builds in-process. However, you can also
use the remote controller to detach the build process from the CLI.
To detach the build process from the CLI, you can use the `--detach=true` flag with
the build command.
```console
$ docker buildx debug --invoke /bin/sh build --detach=true .
```
If you start a debugging session using the `--invoke` flag with a detached
build, then you can attach to it using the `buildx debug` command to
immediately enter the monitor mode.
```console
$ docker buildx debug
[+] Building 0.0s (1/1) FINISHED
=> [internal] connecting to remote controller
(buildx) list
ID CURRENT_SESSION
xfe1162ovd9def8yapb4ys66t false
(buildx) attach xfe1162ovd9def8yapb4ys66t
Attached to process "". Press Ctrl-a-c to switch to the new container
(buildx) ps
PID CURRENT_SESSION COMMAND
3ug8iqaufiwwnukimhqqt06jz false [sh]
(buildx) attach 3ug8iqaufiwwnukimhqqt06jz
Attached to process "3ug8iqaufiwwnukimhqqt06jz". Press Ctrl-a-c to switch to the new container
(buildx) Switched IO
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr work
/ #
```


@ -143,6 +143,11 @@ Use the `-f` / `--file` option to specify the build definition file to use.
The file can be an HCL, JSON or Compose file. If multiple files are specified,
all are read and the build configurations are combined.
Alternatively, the environment variable `BUILDX_BAKE_FILE` can be used to specify the build definition to use.
This is mutually exclusive with `-f` / `--file`; if both are specified, the environment variable is ignored.
Multiple definitions can be specified by separating them with the system's path separator
(typically `;` on Windows and `:` elsewhere); the separator can be changed with `BUILDX_BAKE_PATH_SEPARATOR`.
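As a sketch of the separator semantics (file names hypothetical; the real splitting is done by `docker buildx bake`), a list in `BUILDX_BAKE_FILE` breaks apart like this:

```shell
# Hypothetical file names for illustration only.
BUILDX_BAKE_FILE="docker-bake.hcl:docker-bake.override.hcl"
SEP="${BUILDX_BAKE_PATH_SEPARATOR:-:}"  # ':' unless overridden
IFS="$SEP" read -r -a files <<< "$BUILDX_BAKE_FILE"
printf '%s\n' "${files[@]}"
```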
You can pass the names of targets to build only specific targets.
The following example builds the `db` and `webapp-release` targets that are
defined in the `docker-bake.dev.hcl` file:
@ -198,12 +203,15 @@ To list variables:
```console
$ docker buildx bake --list=variables
VARIABLE VALUE DESCRIPTION
REGISTRY docker.io/username Registry and namespace
IMAGE_NAME my-app Image name
GO_VERSION <null>
VARIABLE TYPE VALUE DESCRIPTION
REGISTRY string docker.io/username Registry and namespace
IMAGE_NAME string my-app Image name
GO_VERSION <null>
DEBUG bool false Add debug symbols
```
Variable types are shown when they are set using the `type` property in the Bake file.
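For instance, the typed `DEBUG` row in the listing above would come from a declaration along these lines (a sketch, not taken from the listing's actual Bake file):

```hcl
variable "DEBUG" {
  type        = bool
  default     = false
  description = "Add debug symbols"
}
```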
By default, the output of `docker buildx bake --list` is presented in a table
format. Alternatively, you can use a long-form CSV syntax and specify a
`format` attribute to output the list in JSON.
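A hedged example of that long-form syntax (assuming the `type`/`format` attributes described here):

```console
$ docker buildx bake --list=type=variables,format=json
```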
@ -363,6 +371,7 @@ You can override the following fields:
* `context`
* `dockerfile`
* `entitlements`
* `extra-hosts`
* `labels`
* `load`
* `no-cache`


@ -28,7 +28,6 @@ Start a build
| [`--cgroup-parent`](#cgroup-parent) | `string` | | Set the parent cgroup for the `RUN` instructions during build |
| [`--check`](#check) | `bool` | | Shorthand for `--call=check` |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `--detach` | `bool` | | Detach buildx server (supported only on linux) (EXPERIMENTAL) |
| [`-f`](#file), [`--file`](#file) | `string` | | Name of the Dockerfile (default: `PATH/Dockerfile`) |
| `--iidfile` | `string` | | Write the image ID to a file |
| `--label` | `stringArray` | | Set metadata for an image |
@ -44,10 +43,8 @@ Start a build
| `--pull` | `bool` | | Always attempt to pull all referenced images |
| [`--push`](#push) | `bool` | | Shorthand for `--output=type=registry` |
| `-q`, `--quiet` | `bool` | | Suppress the build output and print image ID on success |
| `--root` | `string` | | Specify root directory of server to connect (EXPERIMENTAL) |
| [`--sbom`](#sbom) | `string` | | Shorthand for `--attest=type=sbom` |
| [`--secret`](#secret) | `stringArray` | | Secret to expose to the build (format: `id=mysecret[,src=/local/secret]`) |
| `--server-config` | `string` | | Specify buildx server config file (used only when launching new server) (EXPERIMENTAL) |
| [`--shm-size`](#shm-size) | `bytes` | `0` | Shared memory size for build containers |
| [`--ssh`](#ssh) | `stringArray` | | SSH agent socket or keys to expose to the build (format: `default\|<id>[=<socket>\|<key>[,<key>]]`) |
| [`-t`](#tag), [`--tag`](#tag) | `stringArray` | | Name and optionally a tag (format: `name:tag`) |
@ -243,13 +240,15 @@ Learn more about the built-in build arguments in the [Dockerfile reference docs]
Define additional build context with specified contents. In Dockerfile the context can be accessed when `FROM name` or `--from=name` is used.
When Dockerfile defines a stage with the same name it is overwritten.
The value can be a local source directory, [local OCI layout compliant directory](https://github.com/opencontainers/image-spec/blob/main/image-layout.md), container image (with docker-image:// prefix), Git or HTTP URL.
The value can be a:
Replace `alpine:latest` with a pinned one:
- local source directory
- [local OCI layout compliant directory](https://github.com/opencontainers/image-spec/blob/main/image-layout.md)
- container image
- Git URL
- HTTP URL
```console
$ docker buildx build --build-context alpine=docker-image://alpine@sha256:0123456789 .
```
#### <a name="local-path"></a> Use a local path
Expose a secondary local source directory:
@ -258,6 +257,16 @@ $ docker buildx build --build-context project=path/to/project/source .
# docker buildx build --build-context project=https://github.com/myuser/project.git .
```
#### <a name="docker-image"></a> Use a container image
Use the `docker-image://` scheme.
Replace `alpine:latest` with a pinned one:
```console
$ docker buildx build --build-context alpine=docker-image://alpine@sha256:0123456789 .
```
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
@ -266,7 +275,10 @@ COPY --from=project myfile /
#### <a name="source-oci-layout"></a> Use an OCI layout directory as build context
Source an image from a local [OCI layout compliant directory](https://github.com/opencontainers/image-spec/blob/main/image-layout.md),
Use the `oci-layout:///` scheme.
Source an image from a local
[OCI layout compliant directory](https://github.com/opencontainers/image-spec/blob/main/image-layout.md),
either by tag, or by digest:
```console
@ -284,7 +296,6 @@ FROM foo
```
The OCI layout directory must be compliant with the [OCI layout specification](https://github.com/opencontainers/image-spec/blob/main/image-layout.md).
You can reference an image in the layout using either tags, or the exact digest.
### <a name="builder"></a> Override the configured builder instance (--builder)


@ -12,16 +12,12 @@ Start debugger (EXPERIMENTAL)
### Options
| Name | Type | Default | Description |
|:------------------|:---------|:--------|:--------------------------------------------------------------------------------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `--detach` | `bool` | `true` | Detach buildx server for the monitor (supported only on linux) (EXPERIMENTAL) |
| `--invoke` | `string` | | Launch a monitor with executing specified command (EXPERIMENTAL) |
| `--on` | `string` | `error` | When to launch the monitor ([always, error]) (EXPERIMENTAL) |
| `--progress` | `string` | `auto` | Set type of progress output (`auto`, `plain`, `tty`, `rawjson`) for the monitor. Use plain to show container output |
| `--root` | `string` | | Specify root directory of server to connect for the monitor (EXPERIMENTAL) |
| `--server-config` | `string` | | Specify buildx server config file for the monitor (used only when launching new server) (EXPERIMENTAL) |
| Name | Type | Default | Description |
|:----------------|:---------|:--------|:-----------------------------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `--invoke` | `string` | | Launch a monitor with executing specified command (EXPERIMENTAL) |
| `--on` | `string` | `error` | When to launch the monitor ([always, error]) (EXPERIMENTAL) |
<!---MARKER_GEN_END-->


@ -24,7 +24,6 @@ Start a build
| `--cgroup-parent` | `string` | | Set the parent cgroup for the `RUN` instructions during build |
| `--check` | `bool` | | Shorthand for `--call=check` |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `--detach` | `bool` | | Detach buildx server (supported only on linux) (EXPERIMENTAL) |
| `-f`, `--file` | `string` | | Name of the Dockerfile (default: `PATH/Dockerfile`) |
| `--iidfile` | `string` | | Write the image ID to a file |
| `--label` | `stringArray` | | Set metadata for an image |
@ -40,10 +39,8 @@ Start a build
| `--pull` | `bool` | | Always attempt to pull all referenced images |
| `--push` | `bool` | | Shorthand for `--output=type=registry` |
| `-q`, `--quiet` | `bool` | | Suppress the build output and print image ID on success |
| `--root` | `string` | | Specify root directory of server to connect (EXPERIMENTAL) |
| `--sbom` | `string` | | Shorthand for `--attest=type=sbom` |
| `--secret` | `stringArray` | | Secret to expose to the build (format: `id=mysecret[,src=/local/secret]`) |
| `--server-config` | `string` | | Specify buildx server config file (used only when launching new server) (EXPERIMENTAL) |
| `--shm-size` | `bytes` | `0` | Shared memory size for build containers |
| `--ssh` | `stringArray` | | SSH agent socket or keys to expose to the build (format: `default\|<id>[=<socket>\|<key>[,<key>]]`) |
| `-t`, `--tag` | `stringArray` | | Name and optionally a tag (format: `name:tag`) |


@ -1,5 +1,9 @@
# docker buildx dial-stdio
```text
docker buildx dial-stdio [OPTIONS] < in.fifo > out.fifo &
```
<!---MARKER_GEN_START-->
Proxy current stdio streams to builder instance
@ -17,13 +21,15 @@ Proxy current stdio streams to builder instance
## Description
dial-stdio uses the stdin and stdout streams of the command to proxy to the configured builder instance.
It is not intended to be used by humans, but rather by other tools that want to interact with the builder instance via BuildKit API.
dial-stdio uses the stdin and stdout streams of the command to proxy to the
configured builder instance. It is not intended to be used by humans, but
rather by other tools that want to interact with the builder instance via
BuildKit API.
## Examples
Example Go program that uses the dial-stdio command to wire up a BuildKit client.
This is for example use only and may not be suitable for production use.
This example is for illustration only and may not be suitable for production use.
```go
client.New(ctx, "", client.WithContextDialer(func(context.Context, string) (net.Conn, error) {
@ -45,4 +51,4 @@ client.New(ctx, "", client.WithContextDialer(func(context.Context, string) (net.
return c2
}))
```
```


@ -1,7 +1,7 @@
# buildx du
```text
docker buildx du
docker buildx du [OPTIONS]
```
<!---MARKER_GEN_START-->


@ -1,20 +1,24 @@
# docker buildx history
```text
docker buildx history [OPTIONS] COMMAND
```
<!---MARKER_GEN_START-->
Commands to work on build records
### Subcommands
| Name | Description |
|:---------------------------------------|:-----------------------------------------------|
| [`export`](buildx_history_export.md) | Export a build into Docker Desktop bundle |
| [`import`](buildx_history_import.md) | Import a build into Docker Desktop |
| [`inspect`](buildx_history_inspect.md) | Inspect a build |
| [`logs`](buildx_history_logs.md) | Print the logs of a build |
| [`ls`](buildx_history_ls.md) | List build records |
| [`open`](buildx_history_open.md) | Open a build in Docker Desktop |
| [`rm`](buildx_history_rm.md) | Remove build records |
| [`trace`](buildx_history_trace.md) | Show the OpenTelemetry trace of a build record |
| Name | Description |
|:---------------------------------------|:------------------------------------------------|
| [`export`](buildx_history_export.md) | Export build records into Docker Desktop bundle |
| [`import`](buildx_history_import.md) | Import build records into Docker Desktop |
| [`inspect`](buildx_history_inspect.md) | Inspect a build record |
| [`logs`](buildx_history_logs.md) | Print the logs of a build record |
| [`ls`](buildx_history_ls.md) | List build records |
| [`open`](buildx_history_open.md) | Open a build record in Docker Desktop |
| [`rm`](buildx_history_rm.md) | Remove build records |
| [`trace`](buildx_history_trace.md) | Show the OpenTelemetry trace of a build record |
### Options
@ -27,3 +31,32 @@ Commands to work on build records
<!---MARKER_GEN_END-->
### Build references
Most `buildx history` subcommands accept a build reference to identify which
build to act on. You can specify the build in two ways:
- By build ID, fetched by `docker buildx history ls`:
```console
docker buildx history export qu2gsuo8ejqrwdfii23xkkckt --output build.dockerbuild
```
- By relative offset, to refer to recent builds:
```console
docker buildx history export ^1 --output build.dockerbuild
```
- `^0` or no reference targets the most recent build
- `^1` refers to the build before the most recent
- `^2` refers to two builds back, and so on
Offset references are supported in the following `buildx history` commands:
- `logs`
- `inspect`
- `open`
- `trace`
- `export`
- `rm`


@ -1,17 +1,90 @@
# docker buildx history export
<!---MARKER_GEN_START-->
Export a build into Docker Desktop bundle
Export build records into Docker Desktop bundle
### Options
| Name | Type | Default | Description |
|:-----------------|:---------|:--------|:-----------------------------------------|
| `--all` | `bool` | | Export all records for the builder |
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `-o`, `--output` | `string` | | Output file path |
| Name | Type | Default | Description |
|:---------------------------------------|:---------|:--------|:----------------------------------------------------|
| [`--all`](#all) | `bool` | | Export all build records for the builder |
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| [`-D`](#debug), [`--debug`](#debug) | `bool` | | Enable debug logging |
| [`--finalize`](#finalize) | `bool` | | Ensure build records are finalized before exporting |
| [`-o`](#output), [`--output`](#output) | `string` | | Output file path |
<!---MARKER_GEN_END-->
## Description
Export one or more build records to `.dockerbuild` archive files. These archives
contain metadata, logs, and build outputs, and can be imported into Docker
Desktop or shared across environments.
## Examples
### <a name="all"></a> Export all build records to a file (--all)
Use the `--all` flag and redirect the output:
```console
docker buildx history export --all > all-builds.dockerbuild
```
Or use the `--output` flag:
```console
docker buildx history export --all -o all-builds.dockerbuild
```
### <a name="builder"></a> Use a specific builder instance (--builder)
```console
docker buildx history export --builder builder0 ^1 -o builder0-build.dockerbuild
```
### <a name="debug"></a> Enable debug logging (--debug)
```console
docker buildx history export --debug qu2gsuo8ejqrwdfii23xkkckt -o debug-build.dockerbuild
```
### <a name="finalize"></a> Ensure build records are finalized before exporting (--finalize)
Clients can report their own traces concurrently, and not all traces may be
saved yet by the time of the export. Use the `--finalize` flag to ensure all
traces are finalized before exporting.
```console
docker buildx history export --finalize qu2gsuo8ejqrwdfii23xkkckt -o finalized-build.dockerbuild
```
### <a name="output"></a> Export a single build to a custom file (--output)
```console
docker buildx history export qu2gsuo8ejqrwdfii23xkkckt --output mybuild.dockerbuild
```
You can find build IDs by running:
```console
docker buildx history ls
```
To export two builds to separate files:
```console
# Using build IDs
docker buildx history export qu2gsuo8ejqrwdfii23xkkckt qsiifiuf1ad9pa9qvppc0z1l3 -o multi.dockerbuild
# Or using relative offsets
docker buildx history export ^1 ^2 -o multi.dockerbuild
```
Or use shell redirection:
```console
docker buildx history export ^1 > mybuild.dockerbuild
docker buildx history export ^2 > backend-build.dockerbuild
```


@ -1,16 +1,51 @@
# docker buildx history import
```text
docker buildx history import [OPTIONS] -
```
<!---MARKER_GEN_START-->
Import a build into Docker Desktop
Import build records into Docker Desktop
### Options
| Name | Type | Default | Description |
|:----------------|:--------------|:--------|:-----------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `-f`, `--file` | `stringArray` | | Import from a file path |
| Name | Type | Default | Description |
|:---------------------------------|:--------------|:--------|:-----------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| [`-f`](#file), [`--file`](#file) | `stringArray` | | Import from a file path |
<!---MARKER_GEN_END-->
## Description
Import a build record from a `.dockerbuild` archive into Docker Desktop. This
lets you view, inspect, and analyze builds created in other environments or CI
pipelines.
## Examples
### Import a `.dockerbuild` archive from standard input
```console
docker buildx history import < mybuild.dockerbuild
```
### <a name="file"></a> Import a build archive from a file (--file)
```console
docker buildx history import --file ./artifacts/backend-build.dockerbuild
```
### Open a build manually
By default, the `import` command automatically opens the imported build in Docker
Desktop. You don't need to run `open` unless you're opening a specific build
or re-opening it later.
If you've imported multiple builds, you can open one manually:
```console
docker buildx history open ci-build
```


@ -1,13 +1,17 @@
# docker buildx history inspect
```text
docker buildx history inspect [OPTIONS] [REF|COMMAND]
```
<!---MARKER_GEN_START-->
Inspect a build
Inspect a build record
### Subcommands
| Name | Description |
|:-----------------------------------------------------|:---------------------------|
| [`attachment`](buildx_history_inspect_attachment.md) | Inspect a build attachment |
| Name | Description |
|:-----------------------------------------------------|:----------------------------------|
| [`attachment`](buildx_history_inspect_attachment.md) | Inspect a build record attachment |
### Options
@ -21,13 +25,61 @@ Inspect a build
<!---MARKER_GEN_END-->
## Description
Inspect a build record to view metadata such as duration, status, build inputs,
platforms, outputs, and attached artifacts. You can also use flags to extract
provenance, SBOMs, or other detailed information.
## Examples
### Inspect the most recent build
```console
$ docker buildx history inspect
Name: buildx (binaries)
Context: .
Dockerfile: Dockerfile
VCS Repository: https://github.com/crazy-max/buildx.git
VCS Revision: f15eaa1ee324ffbbab29605600d27a84cab86361
Target: binaries
Platforms: linux/amd64
Keep Git Dir: true
Started: 2025-02-07 11:56:24
Duration: 1m 1s
Build Steps: 16/16 (25% cached)
Image Resolve Mode: local
Materials:
URI DIGEST
pkg:docker/docker/dockerfile@1 sha256:93bfd3b68c109427185cd78b4779fc82b484b0b7618e36d0f104d4d801e66d25
pkg:docker/golang@1.23-alpine3.21?platform=linux%2Famd64 sha256:2c49857f2295e89b23b28386e57e018a86620a8fede5003900f2d138ba9c4037
pkg:docker/tonistiigi/xx@1.6.1?platform=linux%2Famd64 sha256:923441d7c25f1e2eb5789f82d987693c47b8ed987c4ab3b075d6ed2b5d6779a3
Attachments:
DIGEST PLATFORM TYPE
sha256:217329d2af959d4f02e3a96dcbe62bf100cab1feb8006a047ddfe51a5397f7e3 https://slsa.dev/provenance/v0.2
```
### Inspect a specific build
```console
# Using a build ID
docker buildx history inspect qu2gsuo8ejqrwdfii23xkkckt
# Or using a relative offset
docker buildx history inspect ^1
```
### <a name="format"></a> Format the output (--format)
The `--format` option controls the output: `pretty` (default), `json`, or a Go template.
#### Pretty output
```console
$ docker buildx history inspect
Name: buildx (binaries)
@ -58,6 +110,8 @@ sha256:217329d2af959d4f02e3a96dcbe62bf100cab1feb8006a047ddfe51a5397f7e3
Print build logs: docker buildx history logs g9808bwrjrlkbhdamxklx660b
```
#### JSON output
```console
$ docker buildx history inspect --format json
{
@ -111,6 +165,8 @@ $ docker buildx history inspect --format json
}
```
#### Go template output
```console
$ docker buildx history inspect --format "{{.Name}}: {{.VCSRepository}} ({{.VCSRevision}})"
buildx (binaries): https://github.com/crazy-max/buildx.git (f15eaa1ee324ffbbab29605600d27a84cab86361)


@ -1,17 +1,198 @@
# docker buildx history inspect attachment
```text
docker buildx history inspect attachment [OPTIONS] [REF [DIGEST]]
```
<!---MARKER_GEN_START-->
Inspect a build attachment
Inspect a build record attachment
### Options
| Name | Type | Default | Description |
|:----------------|:---------|:--------|:-----------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `--platform` | `string` | | Platform of attachment |
| `--type` | `string` | | Type of attachment |
| Name | Type | Default | Description |
|:--------------------------|:---------|:--------|:-----------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| [`--platform`](#platform) | `string` | | Platform of attachment |
| [`--type`](#type) | `string` | | Type of attachment |
<!---MARKER_GEN_END-->
## Description
Inspect a specific attachment from a build record, such as a provenance file or
SBOM. Attachments are optional artifacts stored with the build and may be
platform-specific.
## Examples
### <a name="platform"></a> Inspect an attachment by platform (--platform)
```console
$ docker buildx history inspect attachment --platform linux/amd64
{
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"digest": "sha256:814e63f06465bc78123775714e4df1ebdda37e6403e0b4f481df74947c047163",
"size": 600
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": "sha256:36537f3920ae948ce3e12b4ae34c21190280e6e7d58eeabde0dff3fdfb43b6b0",
"size": 21664137
}
]
}
```
### <a name="type"></a> Inspect an attachment by type (--type)
Supported types include:
* `index`
* `manifest`
* `image`
* `provenance`
* `sbom`
#### Index
```console
$ docker buildx history inspect attachment --type index
{
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.index.v1+json",
"manifests": [
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"digest": "sha256:a194e24f47dc6d0e65992c09577b9bc4e7bd0cd5cc4f81e7738918f868aa397b",
"size": 481,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"digest": "sha256:49e40223d6a96ea0667a12737fd3dde004cf217eb48cb28c9191288cd44c6ace",
"size": 839,
"annotations": {
"vnd.docker.reference.digest": "sha256:a194e24f47dc6d0e65992c09577b9bc4e7bd0cd5cc4f81e7738918f868aa397b",
"vnd.docker.reference.type": "attestation-manifest"
},
"platform": {
"architecture": "unknown",
"os": "unknown"
}
}
]
}
```
#### Manifest
```console
$ docker buildx history inspect attachment --type manifest
{
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"digest": "sha256:814e63f06465bc78123775714e4df1ebdda37e6403e0b4f481df74947c047163",
"size": 600
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": "sha256:36537f3920ae948ce3e12b4ae34c21190280e6e7d58eeabde0dff3fdfb43b6b0",
"size": 21664137
}
]
}
```
#### Provenance
```console
$ docker buildx history inspect attachment --type provenance
{
"builder": {
"id": ""
},
"buildType": "https://mobyproject.org/buildkit@v1",
"materials": [
{
"uri": "pkg:docker/docker/dockerfile@1",
"digest": {
"sha256": "9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc"
}
},
{
"uri": "pkg:docker/golang@1.19.4-alpine?platform=linux%2Farm64",
"digest": {
"sha256": "a9b24b67dc83b3383d22a14941c2b2b2ca6a103d805cac6820fd1355943beaf1"
}
}
],
"invocation": {
"configSource": {
"entryPoint": "Dockerfile"
},
"parameters": {
"frontend": "gateway.v0",
"args": {
"cmdline": "docker/dockerfile:1",
"source": "docker/dockerfile:1",
"target": "binaries"
},
"locals": [
{
"name": "context"
},
{
"name": "dockerfile"
}
]
},
"environment": {
"platform": "linux/arm64"
}
},
"metadata": {
"buildInvocationID": "c4a87v0sxhliuewig10gnsb6v",
"buildStartedOn": "2022-12-16T08:26:28.651359794Z",
"buildFinishedOn": "2022-12-16T08:26:29.625483253Z",
"reproducible": false,
"completeness": {
"parameters": true,
"environment": true,
"materials": false
},
"https://mobyproject.org/buildkit@v1#metadata": {
"vcs": {
"revision": "a9ba846486420e07d30db1107411ac3697ecab68",
"source": "git@github.com:<org>/<repo>.git"
}
}
}
}
```
### Inspect an attachment by digest
You can inspect an attachment directly using its digest, which you can get from
the `inspect` output:
```console
# Using a build ID
docker buildx history inspect attachment qu2gsuo8ejqrwdfii23xkkckt sha256:abcdef123456...
# Or using a relative offset
docker buildx history inspect attachment ^0 sha256:abcdef123456...
```
Use `--type sbom` or `--type provenance` to filter attachments by type. To
inspect a specific attachment by digest, omit the `--type` flag.


@ -1,16 +1,69 @@
# docker buildx history logs
```text
docker buildx history logs [OPTIONS] [REF]
```
<!---MARKER_GEN_START-->
Print the logs of a build
Print the logs of a build record
### Options
| Name | Type | Default | Description |
|:----------------|:---------|:--------|:--------------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `--progress` | `string` | `plain` | Set type of progress output (plain, rawjson, tty) |
| Name | Type | Default | Description |
|:--------------------------|:---------|:--------|:--------------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| [`--progress`](#progress) | `string` | `plain` | Set type of progress output (plain, rawjson, tty) |
<!---MARKER_GEN_END-->
## Description
Print the logs for a completed build. The output appears in the same format as
`--progress=plain`, showing the full logs for each step.
By default, this shows logs for the most recent build on the current builder.
You can also specify an earlier build using an offset. For example:
- `^1` shows logs for the build before the most recent
- `^2` shows logs for the build two steps back
## Examples
### Print logs for the most recent build
```console
$ docker buildx history logs
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 31B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s
...
```
By default, this shows logs for the most recent build on the current builder.
### Print logs for a specific build
To print logs for a specific build, use a build ID or offset:
```console
# Using a build ID
docker buildx history logs qu2gsuo8ejqrwdfii23xkkckt
# Or using a relative offset
docker buildx history logs ^1
```
### <a name="progress"></a> Set type of progress output (--progress)
```console
$ docker buildx history logs ^1 --progress rawjson
{"id":"buildx_step_1","status":"START","timestamp":"2024-05-01T12:34:56.789Z","detail":"[internal] load build definition from Dockerfile"}
{"id":"buildx_step_1","status":"COMPLETE","timestamp":"2024-05-01T12:34:57.001Z","duration":212000000}
...
```


@ -1,19 +1,106 @@
# docker buildx history ls
```text
docker buildx history ls [OPTIONS]
```
<!---MARKER_GEN_START-->
List build records
### Options
| Name | Type | Default | Description |
|:----------------|:--------------|:--------|:---------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| `--filter` | `stringArray` | | Provide filter values (e.g., `status=error`) |
| `--format` | `string` | `table` | Format the output |
| `--local` | `bool` | | List records for current repository only |
| `--no-trunc` | `bool` | | Don't truncate output |
| Name | Type | Default | Description |
|:--------------------------|:--------------|:--------|:---------------------------------------------|
| `--builder` | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| [`--filter`](#filter) | `stringArray` | | Provide filter values (e.g., `status=error`) |
| [`--format`](#format) | `string` | `table` | Format the output |
| [`--local`](#local) | `bool` | | List records for current repository only |
| [`--no-trunc`](#no-trunc) | `bool` | | Don't truncate output |
<!---MARKER_GEN_END-->
## Description
List completed builds recorded by the active builder. Each entry includes the
build ID, name, status, timestamp, and duration.
By default, only records for the current builder are shown. You can filter
results using flags.
## Examples
### List all build records for the current builder
```console
$ docker buildx history ls
BUILD ID NAME STATUS CREATED AT DURATION
qu2gsuo8ejqrwdfii23xkkckt .dev/2850 Completed 3 days ago 1.4s
qsiifiuf1ad9pa9qvppc0z1l3 .dev/2850 Completed 3 days ago 1.3s
g9808bwrjrlkbhdamxklx660b .dev/3120 Completed 5 days ago 2.1s
```
### <a name="filter"></a> List failed builds (--filter)
```console
docker buildx history ls --filter status=error
```
You can filter the list using the `--filter` flag. Supported filters include:
| Filter | Supported comparisons | Example |
|:---------------------------------------|:-------------------------------------------------|:---------------------------|
| `ref`, `repository`, `status` | Support `=` and `!=` comparisons | `--filter status!=success` |
| `startedAt`, `completedAt`, `duration` | Support `<` and `>` comparisons with time values | `--filter duration>30s` |
You can combine multiple filters by repeating the `--filter` flag:
```console
docker buildx history ls --filter status=error --filter duration>30s
```
### <a name="local"></a> List builds from the current project (--local)
```console
docker buildx history ls --local
```
### <a name="no-trunc"></a> Display full output without truncation (--no-trunc)
```console
docker buildx history ls --no-trunc
```
### <a name="format"></a> Format output (--format)
#### JSON output
```console
$ docker buildx history ls --format json
[
{
"ID": "qu2gsuo8ejqrwdfii23xkkckt",
"Name": ".dev/2850",
"Status": "Completed",
"CreatedAt": "2025-04-15T12:33:00Z",
"Duration": "1.4s"
},
{
"ID": "qsiifiuf1ad9pa9qvppc0z1l3",
"Name": ".dev/2850",
"Status": "Completed",
"CreatedAt": "2025-04-15T12:29:00Z",
"Duration": "1.3s"
}
]
```
#### Go template output
```console
$ docker buildx history ls --format '{{.Name}} - {{.Duration}}'
.dev/2850 - 1.4s
.dev/2850 - 1.3s
.dev/3120 - 2.1s
```
