Compare commits

...

98 Commits

Author SHA1 Message Date
Tõnis Tiigi ae4e7ee6a4
Merge pull request #3370 from thaJeztah/bump_engine
vendor: github.com/docker/docker, docker/cli v28.4.0
2025-09-05 14:23:53 -07:00
CrazyMax 70487beecb
Merge pull request #3405 from thaJeztah/dockerfile_bump_docker
Dockerfile: update to docker v28.4.0
2025-09-05 10:03:38 +02:00
CrazyMax 86ddc5de4e
Merge pull request #3406 from docker/dependabot/github_actions/actions/labeler-6
build(deps): bump actions/labeler from 5 to 6
2025-09-05 10:03:17 +02:00
CrazyMax 7bcaf399b9
Merge pull request #3407 from docker/dependabot/github_actions/actions/setup-go-6
build(deps): bump actions/setup-go from 5 to 6
2025-09-05 10:02:58 +02:00
dependabot[bot] dc10c680f3
build(deps): bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-04 18:03:46 +00:00
dependabot[bot] 9c9fb2a12a
build(deps): bump actions/labeler from 5 to 6
Bumps [actions/labeler](https://github.com/actions/labeler) from 5 to 6.
- [Release notes](https://github.com/actions/labeler/releases)
- [Commits](https://github.com/actions/labeler/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/labeler
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-04 18:03:42 +00:00
Sebastiaan van Stijn b4d5ec9bc2
Dockerfile: update to docker v28.4.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-09-04 00:24:06 +02:00
Sebastiaan van Stijn a923dbc1d9
vendor: github.com/docker/docker, docker/cli v28.4.0
full diffs:

- https://github.com/docker/docker/compare/v28.3.3...v28.4.0
- https://github.com/docker/cli/compare/v28.3.3...v28.4.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-09-04 00:11:44 +02:00
Tõnis Tiigi bafc4e207e
Merge pull request #3402 from jsternberg/dap-test-close-client
dap: ensure test client is closed on cleanup
2025-09-03 11:23:51 -07:00
Tõnis Tiigi 2109c9d80d
Merge pull request #3399 from crazy-max/test-gitquerystring
test: git query string
2025-09-03 09:35:04 -07:00
Jonathan A. Sternberg 8841b2dfc8
dap: ensure test client is closed on cleanup
The dap test wasn't waiting for the client's goroutines to complete
before exiting, leaving a race condition that could cause it to log to
the dead test logger. This became apparent when a `--count` greater than
one was used, since that made the test run long enough to trigger the
behavior. It would also have been triggered if we had added more tests.

Add the client close to the cleanup so it waits for the goroutine to
finish before the test exits, as it was supposed to do.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-09-03 10:51:01 -05:00
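The fix described above can be sketched like this (the client type and its Close semantics are illustrative assumptions, not the actual DAP test client):

```go
package main

import (
	"fmt"
	"sync"
)

// testClient stands in for the DAP test client: Close waits for the
// client's background goroutine to finish before returning, so nothing
// can log after the test has torn down its logger. The type is
// illustrative, not the actual buildx implementation.
type testClient struct {
	done chan struct{}
	wg   sync.WaitGroup
}

func newTestClient() *testClient {
	c := &testClient{done: make(chan struct{})}
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		<-c.done // background work runs until the client is closed
	}()
	return c
}

// Close signals the goroutine to stop and blocks until it has exited.
func (c *testClient) Close() {
	close(c.done)
	c.wg.Wait()
}

func main() {
	c := newTestClient()
	// In a real test this would be registered with t.Cleanup(c.Close),
	// so the wait happens before the test logger becomes invalid.
	c.Close()
	fmt.Println("client closed, goroutine joined")
}
```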
CrazyMax 643322cbc3
test: git query string
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-09-03 17:49:38 +02:00
Akihiro Suda 056780314b
Merge pull request #3401 from crazy-max/buildkit-0.24.0
vendor: update buildkit to v0.24.0
2025-09-03 23:33:10 +09:00
CrazyMax d136d2ba53
vendor: update buildkit to v0.24.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-09-03 16:14:35 +02:00
Akihiro Suda e4f23adf3f
Merge pull request #3398 from tonistiigi/gitquerystring-cap-detect
git querystring frontend capability detection
2025-09-03 16:50:10 +09:00
Tonis Tiigi 5e6951c571
git querystring frontend capability detection
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-09-02 10:46:37 -07:00
Tõnis Tiigi d873cae872
Merge pull request #3392 from crazy-max/docs-du-filter
docs: list available filters for du and prune commands
2025-08-29 15:05:35 -07:00
Tõnis Tiigi 4df89d89fc
Merge pull request #3397 from tonistiigi/update-buildkit-v0.24.0-rc2
vendor: update buildkit to v0.24.0-rc2
2025-08-29 15:04:27 -07:00
Tonis Tiigi 1f39ad2001
vendor: update buildkit to v0.24.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-08-29 14:49:16 -07:00
Tõnis Tiigi ce3592e4ab
Merge pull request #3390 from crazy-max/docs-du-fixes
docs: fixes for du command
2025-08-29 10:37:22 -07:00
Tõnis Tiigi 67218bef58
Merge pull request #3394 from thaJeztah/check_DisableFlagsInUseLine
commands: verify that DisableFlagsInUseLine is set for all commands
2025-08-28 12:53:21 -07:00
Sebastiaan van Stijn 07b99ae7bf
commands: verify that DisableFlagsInUseLine is set for all commands
This replaces the DisableFlagsInUseLine call from the CLI with a test
that verifies the option is set for all commands and subcommands, so
that it doesn't have to be modified at runtime.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-08-28 17:55:50 +02:00
CrazyMax ebe66a8e2e
docs: list available filters for du and prune commands
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-28 15:32:10 +02:00
CrazyMax ce07ae04cd
docs: fixes for du command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-28 11:25:53 +02:00
Tõnis Tiigi bb41e835b6
Merge pull request #3375 from jsternberg/update-dap-docs
docs: update dap docs to reflect updates to the debugger
2025-08-27 15:03:15 -07:00
Tõnis Tiigi 31a3fbf107
Merge pull request #3387 from tonistiigi/update-buildkit-v0.24.0-rc1
vendor: update buildkit to v0.24.0-rc1
2025-08-27 15:01:12 -07:00
Tonis Tiigi 440dc2a212
temp skip DAP test that panics in errgroup goroutine
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-08-27 14:38:37 -07:00
Tõnis Tiigi d94a6cf92a
Merge pull request #3377 from crazy-max/du-json
cmd: multiple formats output support for du command
2025-08-27 13:45:23 -07:00
Tonis Tiigi ec3b99180b
vendor: update buildkit to v0.24.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-08-27 13:39:14 -07:00
CrazyMax f0646eeab5
tests: diskusage command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-27 18:07:14 +02:00
CrazyMax b6baad406b
cmd: multiple formats output support for du command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-27 18:07:14 +02:00
Tõnis Tiigi df7c46b02d
Merge pull request #3384 from crazy-max/export-annotations-check
build: fail early if trying to export index annotations with moby exporter
2025-08-27 08:44:51 -07:00
Tõnis Tiigi 026e55b376
Merge pull request #3386 from crazy-max/winsymlink0
restore junctions to have os.ModeSymlink flag set on Windows
2025-08-27 08:38:43 -07:00
CrazyMax 300a136d4c
restore junctions to have os.ModeSymlink flag set on Windows
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-27 15:06:48 +02:00
CrazyMax a8f546eea5
build: fail early if trying to export index annotations with moby exporter
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-27 11:17:31 +02:00
Tõnis Tiigi 177b980958
Merge pull request #3385 from marxarelli/feature/support-buildkit-syntax-arg
Add BUILDKIT_SYNTAX option handling
2025-08-27 03:19:45 +03:00
Dan Duvall fc3ecb60fb
Preserve raw BUILDKIT_SYNTAX as cmdline option
Set gateway `source` to the first part of `BUILDKIT_SYNTAX` and
`cmdline` to the entire raw value to preserve additional options.

Signed-off-by: Dan Duvall <dduvall@wikimedia.org>
2025-08-26 13:56:07 -07:00
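The split described above can be sketched as follows (the function name and exact field-splitting behavior are illustrative assumptions, not the actual buildx code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitSyntax models the described behavior: the first whitespace-separated
// field of BUILDKIT_SYNTAX becomes the gateway source (the frontend image),
// while the entire raw value is preserved as cmdline so that any additional
// options survive.
func splitSyntax(raw string) (source, cmdline string) {
	fields := strings.Fields(raw)
	if len(fields) == 0 {
		return "", ""
	}
	return fields[0], raw
}

func main() {
	src, cmd := splitSyntax("docker/dockerfile:1.7 --some-frontend-opt=x")
	fmt.Println(src) // the frontend image reference only
	fmt.Println(cmd) // the raw value, options included
}
```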
Will Nonnemaker b99e799f00
Add BUILDKIT_SYNTAX option handling
This fix allows building with a remote builder where
frontend.dockerfile.v0 enabled = false in the buildkitd yaml file.

Note that this change only allows the usage of BUILDKIT_SYNTAX with
a custom frontend image, and using the #syntax directive in this case
will still fail.

Resolves: docker#3077

Signed-off-by: Will Nonnemaker <wnonnemaker@gmail.com>
2025-08-26 13:51:22 -07:00
CrazyMax 15da6042cc
Merge pull request #3379 from crazy-max/update-pflag
vendor: github.com/spf13/pflag v1.0.7
2025-08-26 17:29:30 +02:00
CrazyMax bfeb19abc8
vendor: github.com/spf13/pflag v1.0.7
full diff: https://github.com/spf13/pflag/compare/v1.0.6...v1.0.7

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-26 13:37:43 +02:00
CrazyMax d7cd677480
Merge pull request #3378 from crazy-max/update-testify
vendor: github.com/stretchr/testify v1.11.0
2025-08-26 13:34:59 +02:00
CrazyMax 149b2a231b
vendor: github.com/stretchr/testify v1.11.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-25 15:19:36 +02:00
Tõnis Tiigi a6e198a341
Merge pull request #3301 from crazy-max/ci-matrix-subaction
ci(validate): use matrix subaction
2025-08-21 10:49:29 +03:00
CrazyMax 159a68cbb8
ci(validate): use matrix subaction
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-21 08:19:26 +02:00
Jonathan A. Sternberg a7c54da345
docs: update dap docs to reflect updates to the debugger
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-08-19 10:19:09 -05:00
Akihiro Suda 2d65b12a65
Merge pull request #3373 from tonistiigi/kubernetes-env
kubernetes: add env driver opt to kubernetes
2025-08-19 00:38:25 +09:00
Tõnis Tiigi bac71def78
Merge pull request #3366 from jsternberg/dap-detect-parent
dap: improve determination of the proper parent for certain ops
2025-08-18 18:34:57 +03:00
Tonis Tiigi 9f721e3190
kubernetes: add env driver opt to kubernetes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-08-18 18:11:53 +03:00
Jonathan A. Sternberg 5c97696d64
dap: improve determination of the proper parent for certain ops
Improves the determination of the proper parent for exec and file ops.
With file ops, it will only consider inputs and ignore secondary inputs.
This prevents the parent from being misattributed in the following case:

```
FROM busybox AS build1
RUN echo foo > /hello

FROM scratch
COPY --from=build1 /hello .
```

Previously, `build1` would be considered the parent of the copy
instruction. Now, copy properly does not have a parent.

If there are multiple file ops and the operations disagree on the
canonical "parent", we give up on trying to find a canonical parent and
assume there is none.

For exec operations, whichever input is associated with the root mount
is considered the primary parent.

For all other operations, the first parent is considered the primary
parent if it exists.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-08-18 09:27:02 -05:00
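A minimal sketch of the parent-selection rule described above (the types and names are illustrative, not the actual buildx implementation):

```go
package main

import "fmt"

// vertex is an illustrative stand-in for a solve-graph op. For file ops,
// inputs holds only primary inputs: secondary inputs such as COPY --from
// sources are deliberately excluded before this rule is applied.
type vertex struct {
	name   string
	inputs []*vertex
}

// primaryParent applies the rule from the commit message: the first input,
// if any, is the primary parent; an op whose only links are secondary
// inputs reports no parent at all.
func primaryParent(v *vertex) *vertex {
	if len(v.inputs) == 0 {
		return nil
	}
	return v.inputs[0]
}

func main() {
	build1 := &vertex{name: "build1"}
	run := &vertex{name: "run", inputs: []*vertex{build1}}
	// COPY --from=build1 onto scratch: build1 is only a secondary input,
	// so it is not recorded in inputs and the copy has no parent.
	copyOp := &vertex{name: "copy"}
	fmt.Println(primaryParent(run).name)      // build1
	fmt.Println(primaryParent(copyOp) == nil) // true
}
```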
Tõnis Tiigi 8033908d09
Merge pull request #3371 from jsternberg/dap-nested-dockerfile-path
dap: look for base name of dockerfile name instead of path from context
2025-08-16 08:26:40 +03:00
Jonathan A. Sternberg b3c389690c
dap: look for base name of dockerfile name instead of path from context
When the builder loads a dockerfile, it uses the base name of the
dockerfile path and only loads the innermost directory. This means the
source name we're looking for is the base name, not the full relative
path.

Update the set breakpoints functionality so it takes this into account.
Fixes scenarios where DAP is used with a dockerfile nested in the
context.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-08-15 14:41:00 -05:00
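The base-name matching can be sketched as (the function name and example path are made up for illustration):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// breakpointSource models the matching described above: since the builder
// loads the dockerfile by its base name only, breakpoint lookups must
// compare against filepath.Base of the context-relative path rather than
// the full path.
func breakpointSource(dockerfilePath string) string {
	return filepath.Base(dockerfilePath)
}

func main() {
	// A dockerfile nested inside the build context.
	fmt.Println(breakpointSource("services/api/Dockerfile")) // Dockerfile
}
```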
Tõnis Tiigi 10605b8c35
Merge pull request #3320 from crazy-max/mount-wsl-lib
driver: mount wsl lib folder for docker-container driver
2025-08-13 15:42:38 +03:00
Tõnis Tiigi 4f9f47deec
Merge pull request #3341 from jsternberg/dap-persistent-exec
dap: make exec shell persistent across the build
2025-08-13 15:34:54 +03:00
CrazyMax da81bc15b3
Merge pull request #3364 from docker/dependabot/github_actions/actions/checkout-5
build(deps): bump actions/checkout from 4 to 5
2025-08-13 09:21:26 +02:00
dependabot[bot] a3fa6a7b15
build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 22:51:34 +00:00
Jonathan A. Sternberg dbda218489
dap: make exec shell persistent across the build
Invoking the shell causes it to persist across the entire build and to
re-execute whenever the builder pauses at another location.

This still requires using `exec` to launch the shell. Launching by frame
id is also removed since it no longer applies to this version.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-08-11 12:40:09 -05:00
Tõnis Tiigi c6d1e397a8
Merge pull request #3352 from crazy-max/compose-pull-nocache
bake: pull and no-cache support for compose
2025-08-11 11:55:06 +03:00
CrazyMax 7c434131a3
Merge pull request #3361 from crazy-max/compose-sanitize-ncontexts
compose: sanitize value of named contexts for target type
2025-08-08 16:07:37 +02:00
CrazyMax c365f015b1
compose: sanitize value of named contexts for target type
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-08 14:43:48 +02:00
CrazyMax 358312317a
Merge pull request #3357 from crazy-max/update-compose
dockerfile: update compose to 2.39.1
2025-08-08 09:34:45 +02:00
CrazyMax 8f57074638
Merge pull request #3358 from docker/dependabot/github_actions/actions/download-artifact-5
build(deps): bump actions/download-artifact from 4 to 5
2025-08-07 09:39:02 +02:00
CrazyMax d869a0ef65
Merge pull request #3359 from thaJeztah/minor_nits
remove some intermediate vars
2025-08-07 09:26:35 +02:00
Sebastiaan van Stijn 36b18a4c7a
remove some intermediate vars
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-08-06 23:00:24 +02:00
CrazyMax 24eccb0ba5
Merge pull request #3356 from crazy-max/update-test-deps
dockerfile: update docker to 28.3
2025-08-06 20:56:43 +02:00
CrazyMax 0279c49822
dockerfile: update docker to 28.3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-06 20:26:48 +02:00
dependabot[bot] 8e133a5bbb
build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 18:21:46 +00:00
CrazyMax dbad205dfe
dockerfile: update compose to 2.39.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-06 18:59:51 +02:00
CrazyMax 0bdd0aa624
Merge pull request #3355 from thaJeztah/bump_buildkit
vendor: github.com/moby/buildkit 9b91d20367db (master, v0.24-dev)
2025-08-06 18:58:23 +02:00
Sebastiaan van Stijn bbd18927d2
vendor: github.com/moby/buildkit 9b91d20367db (master, v0.24-dev)
full diff: 9b91d20367...955c2b2f7d

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-08-06 16:47:11 +02:00
CrazyMax 46463d93bf
Merge pull request #3346 from crazy-max/docs-call-override
docs: missing call as override field
2025-08-05 19:07:31 +02:00
Jonathan A. Sternberg 5c27294f27
Merge pull request #3327 from jsternberg/dap-fs-inspect
dap: filesystem inspection when paused on a digest
2025-08-05 11:06:16 -05:00
CrazyMax 2690ddd9a6
Merge pull request #3351 from crazy-max/bake-homedir
bake: add homedir func
2025-08-05 17:32:14 +02:00
CrazyMax 669fd1df2f
bake: pull and no-cache support for compose
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-05 11:49:37 +02:00
CrazyMax e4b49a8cd9
bake: add homedir func
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-05 10:21:55 +02:00
CrazyMax 5743e3a77a
Merge pull request #3347 from crazy-max/bake-frix-empty-dockerfile
bake: fix dockerfile default if empty
2025-08-05 09:49:52 +02:00
CrazyMax 9d56b30c42
bake: fix dockerfile default if empty
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-31 10:02:44 +02:00
CrazyMax 264c8f9f3d
docs: missing call as override field
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-30 13:01:23 +02:00
CrazyMax 1e50e8ddab
Merge pull request #3340 from thaJeztah/docker_28.3.3
vendor: github.com/docker/docker, docker/cli v28.3.3
2025-07-29 21:13:10 +02:00
Sebastiaan van Stijn 4b9a2b07fc
vendor: github.com/docker/docker, docker/cli v28.3.3
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-07-29 19:39:20 +02:00
Jonathan A. Sternberg 8e356c3454
dap: filesystem inspection when paused on a digest
Add a file explorer to the debugger that allows exploring the filesystem
of the current container. It will show directory contents, file
contents, and symlink destinations. It will also show the file mode
associated with a file.

The file explorer defaults to marking itself as an expensive operation
so the debugger doesn't automatically retrieve the information.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-28 09:52:30 -05:00
Tõnis Tiigi 4c791dce97
Merge pull request #3325 from jsternberg/dap-alternate-stepping
dap: refactor how step in/step out works
2025-07-24 15:08:52 -07:00
Tõnis Tiigi 4dd5dd5a6d
Merge pull request #3337 from glours/bump-compose-go-v2.8.1
bump compose-go to v2.8.1
2025-07-24 14:29:25 -07:00
Tõnis Tiigi f9be714a52
Merge pull request #3333 from crazy-max/compose-tests
compose integration tests
2025-07-24 14:22:55 -07:00
Guillaume Lours f388981ca4
bump compose-go to v2.8.1
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-07-24 19:09:58 +02:00
CrazyMax 03000cc590
compose integration tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-24 11:45:30 +02:00
Jonathan A. Sternberg 1e3c44709d
dap: refactor how step in/step out works
Change how breakpoints and stepping work. They now behave more like you
would expect a debugger for a conventional programming language to
behave. Breakpoints are hit before the step is invoked rather than
after, which means you can inspect the state before the command runs.

This has the advantage of being more intuitive for someone familiar with
other debuggers. The negative is that you can't run to after a certain
step as easily as you could before. Instead, you would run to that stage
and then use next to go to the step directly afterwards.

Step in and out also now have different behaviors. When a step has
multiple inputs, the inputs of non-zero index are considered like
"function calls". The most common cause of this is to use `COPY --from`
or a bind mount. Stepping into these will cause it to jump to the
beginning of the call chain for that branch. Using step out will exit
back to the location where step in was used.

This change also makes it so some steps may be invoked multiple times in
the callgraph if multiple steps depend on them. The reused steps will
still be cached, but you may end up stepping through more lines than the
previous implementation.

Stack traces now represent where these step in and step out areas
happen rather than the previous steps. This can help you know from where
a certain step is being used.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-23 17:10:40 -05:00
Jonathan A. Sternberg fea53ad1f8
dap: return error from evaluate command in repl context
In the repl context, we now return the error instead of printing it
directly. We also suppress error reporting from cobra. The logic flow
has been changed so that errors are only returned from cobra when they
relate to the command-line invocation itself; usage is therefore printed
only when a command was typed wrong, not for every error.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-23 13:51:03 -05:00
CrazyMax a1ca46e85e
Merge pull request #3334 from glours/bump-compose-go-v2.8.0
bump compose-go to v2.8.0
2025-07-23 14:53:21 +02:00
Guillaume Lours 19304c0c54
bump compose-go to v2.8.0
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-07-23 13:16:22 +02:00
Tõnis Tiigi 7d0efdc50e
Merge pull request #3330 from crazy-max/history-build-name-override
history: use built-in build-arg to override the build name
2025-07-22 16:33:21 -07:00
Tõnis Tiigi f0d16f5914
Merge pull request #3329 from crazy-max/fix-compose-validation
bake: fix compose files validation
2025-07-22 08:09:08 -07:00
CrazyMax 7e11d3601e
history: use built-in build-arg to override the build name
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-22 14:34:36 +02:00
CrazyMax 98f04b1290
bake: fix compose files validation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-22 10:18:14 +02:00
CrazyMax ed67ab795b
driver: mount wsl lib folder for docker-container driver
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-21 16:47:20 +02:00
CrazyMax 3f4bf829d8
Merge pull request #3324 from thaJeztah/no_pkg_homedir
driver/kubernetes: remove uses of pkg/homedir
2025-07-21 15:35:10 +02:00
Sebastiaan van Stijn 3f725bf4d8
driver/kubernetes: remove uses of pkg/homedir
Create a local fork to keep the existing behavior.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-07-21 14:46:45 +02:00
CrazyMax dcd113370e
Merge pull request #3322 from ndeloof/validateComposeFile
do not assume input is a compose file on .env parsing error
2025-07-21 09:33:12 +02:00
Nicolas De Loof 08e74f8b62
do not assume input is a compose file on .env parsing error
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2025-07-21 09:02:10 +02:00
207 changed files with 16348 additions and 2815 deletions

View File

```diff
@@ -121,7 +121,7 @@ jobs:
 fi
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 with:
 fetch-depth: 0
 -
@@ -191,10 +191,10 @@ jobs:
 git config --global core.eol lf
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Set up Go
-uses: actions/setup-go@v5
+uses: actions/setup-go@v6
 with:
 go-version: "${{ env.GO_VERSION }}"
 -
@@ -274,7 +274,7 @@ jobs:
 echo "GO_VERSION=$goVersion" >> $GITHUB_ENV
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Cache Vagrant boxes
 uses: actions/cache@v4
@@ -353,7 +353,7 @@ jobs:
 steps:
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Create matrix
 id: platforms
@@ -380,7 +380,7 @@ jobs:
 echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Set up QEMU
 uses: docker/setup-qemu-action@v3
@@ -425,7 +425,7 @@ jobs:
 swap-storage: true
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Set up QEMU
 uses: docker/setup-qemu-action@v3
@@ -513,10 +513,10 @@ jobs:
 steps:
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Download binaries
-uses: actions/download-artifact@v4
+uses: actions/download-artifact@v5
 with:
 path: ${{ env.DESTDIR }}
 pattern: buildx-*
```

View File

```diff
@@ -29,10 +29,10 @@ jobs:
 steps:
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Set up Go
-uses: actions/setup-go@v5
+uses: actions/setup-go@v6
 with:
 go-version: ${{ env.GO_VERSION }}
 -
```

View File

```diff
@@ -33,7 +33,7 @@ jobs:
 steps:
 -
 name: Checkout docs repo
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 with:
 token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
 repository: docker/docs
```

View File

```diff
@@ -111,14 +111,14 @@ jobs:
 steps:
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Set up QEMU
 uses: docker/setup-qemu-action@v3
 if: matrix.driver == 'docker' || matrix.driver == 'docker-container'
 -
 name: Install buildx
-uses: actions/download-artifact@v4
+uses: actions/download-artifact@v5
 with:
 name: binary
 path: /home/runner/.docker/cli-plugins
@@ -214,7 +214,7 @@ jobs:
 steps:
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
 name: Expose GitHub Runtime
 uses: crazy-max/ghaction-github-runtime@v3
@@ -230,7 +230,7 @@ jobs:
 uses: docker/setup-qemu-action@v3
 -
 name: Install buildx
-uses: actions/download-artifact@v4
+uses: actions/download-artifact@v5
 with:
 name: binary
 path: /home/runner/.docker/cli-plugins
```

View File

```diff
@@ -27,6 +27,6 @@ jobs:
 steps:
 -
 name: Run
-uses: actions/labeler@v5
+uses: actions/labeler@v6
 with:
 sync-labels: true
```

View File

```diff
@@ -33,51 +33,20 @@ jobs:
 prepare:
 runs-on: ubuntu-24.04
 outputs:
-includes: ${{ steps.matrix.outputs.includes }}
+includes: ${{ steps.generate.outputs.matrix }}
 steps:
 -
 name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v5
 -
-name: Matrix
-id: matrix
-uses: actions/github-script@v7
+name: Generate matrix
+id: generate
+uses: docker/bake-action/subaction/matrix@v6
 with:
-script: |
-let def = {};
-await core.group(`Parsing definition`, async () => {
-const printEnv = Object.assign({}, process.env, {
-GOLANGCI_LINT_MULTIPLATFORM: process.env.GITHUB_REPOSITORY === 'docker/buildx' ? '1' : ''
-});
-const resPrint = await exec.getExecOutput('docker', ['buildx', 'bake', 'validate', '--print'], {
-ignoreReturnCode: true,
-env: printEnv
-});
-if (resPrint.stderr.length > 0 && resPrint.exitCode != 0) {
-throw new Error(res.stderr);
-}
-def = JSON.parse(resPrint.stdout.trim());
-});
-await core.group(`Generating matrix`, async () => {
-const includes = [];
-for (const targetName of Object.keys(def.target)) {
-const target = def.target[targetName];
-if (target.platforms && target.platforms.length > 0) {
-target.platforms.forEach(platform => {
-includes.push({
-target: targetName,
-platform: platform
-});
-});
-} else {
-includes.push({
-target: targetName
-});
-}
-}
-core.info(JSON.stringify(includes, null, 2));
-core.setOutput('includes', JSON.stringify(includes));
-});
+target: validate
+fields: platforms
+env:
+GOLANGCI_LINT_MULTIPLATFORM: ${{ github.repository == 'docker/buildx' && '1' || '' }}
 validate:
 runs-on: ubuntu-24.04
@@ -88,12 +57,6 @@ jobs:
 matrix:
 include: ${{ fromJson(needs.prepare.outputs.includes) }}
 steps:
--
-name: Prepare
-run: |
-if [ "$GITHUB_REPOSITORY" = "docker/buildx" ]; then
-echo "GOLANGCI_LINT_MULTIPLATFORM=1" >> $GITHUB_ENV
-fi
 -
 name: Set up Docker Buildx
 uses: docker/setup-buildx-action@v3
@@ -107,4 +70,4 @@ jobs:
 with:
 targets: ${{ matrix.target }}
 set: |
-*.platform=${{ matrix.platform }}
+*.platform=${{ matrix.platforms }}
```

View File

```diff
@@ -5,13 +5,14 @@ ARG ALPINE_VERSION=3.22
 ARG XX_VERSION=1.6.1
 # for testing
-ARG DOCKER_VERSION=28.3.0
+ARG DOCKER_VERSION=28.4
 ARG DOCKER_VERSION_ALT_27=27.5.1
 ARG DOCKER_VERSION_ALT_26=26.1.3
 ARG DOCKER_CLI_VERSION=${DOCKER_VERSION}
 ARG GOTESTSUM_VERSION=v1.12.0
 ARG REGISTRY_VERSION=3.0.0
 ARG BUILDKIT_VERSION=v0.23.2
+ARG COMPOSE_VERSION=v2.39.1
 ARG UNDOCK_VERSION=0.9.0
 FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
@@ -24,6 +25,7 @@ FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_27 AS docker-cli-alt27
 FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_26 AS docker-cli-alt26
 FROM registry:$REGISTRY_VERSION AS registry
 FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
+FROM docker/compose-bin:$COMPOSE_VERSION AS compose
 FROM crazymax/undock:$UNDOCK_VERSION AS undock
 FROM golatest AS gobase
@@ -137,8 +139,10 @@ COPY --link --from=docker-cli-alt27 / /opt/docker-alt-27/
 COPY --link --from=docker-cli-alt26 / /opt/docker-alt-26/
 COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
 COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
+COPY --link --from=compose /docker-compose /usr/bin/compose
 COPY --link --from=undock /usr/local/bin/undock /usr/bin/
 COPY --link --from=binaries /buildx /usr/bin/
+RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx
 ENV TEST_DOCKER_EXTRA="docker@27.5=/opt/docker-alt-27,docker@26.1=/opt/docker-alt-26"
 FROM integration-test-base AS integration-test
```

View File

```diff
@@ -676,7 +676,7 @@ func (c Config) ResolveTarget(name string, overrides map[string]map[string]Overr
 s := "."
 t.Context = &s
 }
-if t.Dockerfile == nil {
+if t.Dockerfile == nil || (t.Dockerfile != nil && *t.Dockerfile == "") {
 s := "Dockerfile"
 t.Dockerfile = &s
 }
@@ -1257,9 +1257,8 @@ func (t *Target) GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(
 func TargetsToBuildOpt(m map[string]*Target, inp *Input) (map[string]build.Options, error) {
 // make sure local credentials are loaded multiple times for different targets
-dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
 authProvider := authprovider.NewDockerAuthProvider(authprovider.DockerAuthProviderConfig{
-ConfigFile: dockerConfig,
+ConfigFile: config.LoadDefaultConfigFile(os.Stderr),
 })
 m2 := make(map[string]build.Options, len(m))
@@ -1545,12 +1544,12 @@ func toBuildOpt(t *Target, inp *Input) (*build.Options, error) {
 return nil, err
 }
-annotations, err := buildflags.ParseAnnotations(t.Annotations)
+bo.Annotations, err = buildflags.ParseAnnotations(t.Annotations)
 if err != nil {
 return nil, err
 }
 for _, e := range bo.Exports {
-for k, v := range annotations {
+for k, v := range bo.Annotations {
 e.Attrs[k.String()] = v
 }
 }
```


@@ -2248,6 +2248,23 @@ target "app" {
 	require.Len(t, m["app"].Outputs, 0)
 }

+func TestEmptyDockerfile(t *testing.T) {
+	fp := File{
+		Name: "docker-bake.hcl",
+		Data: []byte(`
+target "app" {
+  dockerfile = ""
+}
+`),
+	}
+	ctx := context.TODO()
+	m, _, err := ReadTargets(ctx, []File{fp}, []string{"app"}, nil, nil, &EntitlementConf{})
+	require.NoError(t, err)
+	require.Contains(t, m, "app")
+	require.Equal(t, "Dockerfile", *m["app"].Dockerfile)
+}
+
 // https://github.com/docker/buildx/issues/2859
 func TestGroupTargetsWithDefault(t *testing.T) {
 	t.Run("OnTarget", func(t *testing.T) {


@@ -17,7 +17,7 @@ import (
 	dockeropts "github.com/docker/cli/opts"
 	"github.com/docker/go-units"
 	"github.com/pkg/errors"
-	"gopkg.in/yaml.v3"
+	"go.yaml.in/yaml/v3"
 )

 func ParseComposeFiles(fs []File) (*Config, error) {
@@ -76,13 +76,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 			var additionalContexts map[string]string
 			if s.Build.AdditionalContexts != nil {
-				additionalContexts = map[string]string{}
-				for k, v := range s.Build.AdditionalContexts {
-					if strings.HasPrefix(v, "service:") {
-						v = strings.Replace(v, "service:", "target:", 1)
-					}
-					additionalContexts[k] = v
-				}
+				additionalContexts = composeToBuildkitNamedContexts(s.Build.AdditionalContexts)
 			}

 			var shmSize *string
@@ -151,6 +145,28 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 				return nil, err
 			}

+			var inAttests []string
+			if s.Build.SBOM != "" {
+				inAttests = append(inAttests, buildflags.CanonicalizeAttest("sbom", s.Build.SBOM))
+			}
+			if s.Build.Provenance != "" {
+				inAttests = append(inAttests, buildflags.CanonicalizeAttest("provenance", s.Build.Provenance))
+			}
+			attests, err := buildflags.ParseAttests(inAttests)
+			if err != nil {
+				return nil, err
+			}
+
+			var noCache *bool
+			if s.Build.NoCache {
+				noCache = &s.Build.NoCache
+			}
+			var pull *bool
+			if s.Build.Pull {
+				pull = &s.Build.Pull
+			}
+
 			g.Targets = append(g.Targets, targetName)
 			t := &Target{
 				Name: targetName,
@@ -176,6 +192,9 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
 				ShmSize:    shmSize,
 				Ulimits:    ulimits,
 				ExtraHosts: extraHosts,
+				Attest:     attests,
+				NoCache:    noCache,
+				Pull:       pull,
 			}
 			if err = t.composeExtTarget(s.Build.Extensions); err != nil {
 				return nil, err
@@ -242,6 +261,9 @@ func loadComposeFiles(cfgs []composetypes.ConfigFile, envs map[string]string, op
 				filtered[key] = v
 			}
 		}
+		if len(filtered) == 0 {
+			return nil, errors.New("empty compose file")
+		}

 		if err := composeschema.Validate(filtered); err != nil {
 			return nil, err
@@ -259,7 +281,7 @@ func loadComposeFiles(cfgs []composetypes.ConfigFile, envs map[string]string, op
 func validateComposeFile(dt []byte, fn string) (bool, error) {
 	envs, err := composeEnv()
 	if err != nil {
-		return true, err
+		return false, err
 	}
 	fnl := strings.ToLower(fn)
 	if strings.HasSuffix(fnl, ".yml") || strings.HasSuffix(fnl, ".yaml") {
@@ -455,7 +477,7 @@ func (t *Target) composeExtTarget(exts map[string]any) error {
 		t.NoCacheFilter = dedupSlice(append(t.NoCacheFilter, xb.NoCacheFilter...))
 	}
 	if len(xb.Contexts) > 0 {
-		t.Contexts = dedupMap(t.Contexts, xb.Contexts)
+		t.Contexts = dedupMap(t.Contexts, composeToBuildkitNamedContexts(xb.Contexts))
 	}

 	return nil
@@ -490,3 +512,16 @@ func composeToBuildkitSSH(sshKey composetypes.SSHKey) *buildflags.SSH {
 	}
 	return bkssh
 }
+
+func composeToBuildkitNamedContexts(m map[string]string) map[string]string {
+	out := make(map[string]string, len(m))
+	for k, v := range m {
+		if strings.HasPrefix(v, "service:") || strings.HasPrefix(v, "target:") {
+			if parts := strings.SplitN(v, ":", 2); len(parts) == 2 {
+				v = "target:" + sanitizeTargetName(parts[1])
+			}
+		}
+		out[k] = v
+	}
+	return out
+}
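The new `composeToBuildkitNamedContexts` helper rewrites compose-style `service:` references into bake `target:` references and sanitizes the referenced name (the `TestServiceContextDot` test below exercises `base.1` becoming `base_1`). A standalone sketch of that behavior, with an assumed `sanitizeTargetName` that replaces characters invalid in target names with `_` (the real helper lives elsewhere in the package and may differ):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Assumption: sanitizeTargetName replaces characters that are not valid in
// bake target names (such as ".") with "_". The actual implementation in
// buildx may use a different character class.
var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9_-]+`)

func sanitizeTargetName(s string) string {
	return invalidChars.ReplaceAllString(s, "_")
}

// composeToBuildkitNamedContexts mirrors the helper in the diff: any
// "service:<name>" or "target:<name>" value is normalized to
// "target:<sanitized-name>"; all other values pass through unchanged.
func composeToBuildkitNamedContexts(m map[string]string) map[string]string {
	out := make(map[string]string, len(m))
	for k, v := range m {
		if strings.HasPrefix(v, "service:") || strings.HasPrefix(v, "target:") {
			if parts := strings.SplitN(v, ":", 2); len(parts) == 2 {
				v = "target:" + sanitizeTargetName(parts[1])
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	in := map[string]string{
		"base":  "service:base.1",
		"foo":   "target:foo.1",
		"plain": "./dir",
	}
	fmt.Println(composeToBuildkitNamedContexts(in))
}
```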


@@ -611,6 +611,7 @@ func TestValidateComposeFile(t *testing.T) {
 		fn        string
 		dt        []byte
 		isCompose bool
+		wantErr   bool
 	}{
 		{
 			name: "empty service",
@@ -620,6 +621,7 @@ services:
   foo:
 `),
 			isCompose: true,
+			wantErr:   false,
 		},
 		{
 			name: "build",
@@ -630,6 +632,7 @@ services:
     build: .
 `),
 			isCompose: true,
+			wantErr:   false,
 		},
 		{
 			name: "image",
@@ -640,6 +643,7 @@ services:
     image: nginx
 `),
 			isCompose: true,
+			wantErr:   false,
 		},
 		{
 			name: "unknown ext",
@@ -650,6 +654,7 @@ services:
     image: nginx
 `),
 			isCompose: true,
+			wantErr:   false,
 		},
 		{
 			name: "hcl",
@@ -660,13 +665,64 @@ target "default" {
 }
 `),
 			isCompose: false,
+			wantErr:   false,
+		},
+		{
+			name: "json",
+			fn:   "docker-bake.json",
+			dt: []byte(`
+{
+  "group": [
+    {
+      "targets": [
+        "my-service"
+      ]
+    }
+  ],
+  "target": [
+    {
+      "context": ".",
+      "dockerfile": "Dockerfile"
+    }
+  ]
+}
+`),
+			isCompose: false,
+			wantErr:   false,
+		},
+		{
+			name: "json unknown ext",
+			fn:   "docker-bake.foo",
+			dt: []byte(`
+{
+  "group": [
+    {
+      "targets": [
+        "my-service"
+      ]
+    }
+  ],
+  "target": [
+    {
+      "context": ".",
+      "dockerfile": "Dockerfile"
+    }
+  ]
+}
+`),
+			isCompose: false,
+			wantErr:   true,
 		},
 	}
 	for _, tt := range cases {
 		t.Run(tt.name, func(t *testing.T) {
 			isCompose, err := validateComposeFile(tt.dt, tt.fn)
 			assert.Equal(t, tt.isCompose, isCompose)
-			require.NoError(t, err)
+			if tt.wantErr {
+				require.Error(t, err)
+			} else {
+				require.NoError(t, err)
+			}
 		})
 	}
 }
@@ -837,6 +893,44 @@ services:
 	require.Equal(t, map[string]string{"base": "target:base"}, c.Targets[1].Contexts)
 }

+func TestServiceContextDot(t *testing.T) {
+	dt := []byte(`
+services:
+  base.1:
+    build:
+      dockerfile: baseapp.Dockerfile
+    command: ./entrypoint.sh
+  foo.1:
+    build:
+      dockerfile: fooapp.Dockerfile
+    command: ./entrypoint.sh
+  webapp:
+    build:
+      context: ./dir
+      additional_contexts:
+        base: service:base.1
+      x-bake:
+        contexts:
+          foo: target:foo.1
+`)
+
+	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(c.Groups))
+	require.Equal(t, "default", c.Groups[0].Name)
+	sort.Strings(c.Groups[0].Targets)
+	require.Equal(t, []string{"base_1", "foo_1", "webapp"}, c.Groups[0].Targets)
+
+	require.Equal(t, 3, len(c.Targets))
+	sort.Slice(c.Targets, func(i, j int) bool {
+		return c.Targets[i].Name < c.Targets[j].Name
+	})
+	require.Equal(t, "webapp", c.Targets[2].Name)
+	require.Equal(t, map[string]string{"base": "target:base_1", "foo": "target:foo_1"}, c.Targets[2].Contexts)
+}
+
 func TestDotEnvDir(t *testing.T) {
 	tmpdir := t.TempDir()
 	require.NoError(t, os.Mkdir(filepath.Join(tmpdir, ".env"), 0755))
@@ -913,6 +1007,108 @@ services:
 	require.ErrorContains(t, err, `additional properties 'foo' not allowed`)
 }

+func TestEmptyComposeFile(t *testing.T) {
+	tmpdir := t.TempDir()
+	chdir(t, tmpdir)
+	_, err := ParseComposeFiles([]File{{Name: "compose.yml", Data: []byte(``)}})
+	require.Error(t, err)
+	require.ErrorContains(t, err, `empty compose file`) // https://github.com/compose-spec/compose-go/blob/a42e7579d813e64c0c1f598a666358bc0c0a0eb4/loader/loader.go#L542
+}
+
+func TestParseComposeAttests(t *testing.T) {
+	dt := []byte(`
+services:
+  app:
+    build:
+      context: .
+      sbom: true
+      provenance: mode=max
+`)
+
+	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(c.Targets))
+	target := c.Targets[0]
+	require.Equal(t, "app", target.Name)
+
+	require.NotNil(t, target.Attest)
+	require.Len(t, target.Attest, 2)
+
+	attestMap := target.Attest.ToMap()
+	require.Contains(t, attestMap, "sbom")
+	require.Contains(t, attestMap, "provenance")
+
+	// Check the actual content - sbom=true should result in disabled=false (not disabled)
+	require.Equal(t, "type=sbom", *attestMap["sbom"])
+	require.Equal(t, "type=provenance,mode=max", *attestMap["provenance"])
+}
+
+func TestParseComposeAttestsDisabled(t *testing.T) {
+	dt := []byte(`
+services:
+  app:
+    build:
+      context: .
+      sbom: false
+      provenance: false
+`)
+
+	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(c.Targets))
+	target := c.Targets[0]
+	require.Equal(t, "app", target.Name)
+
+	require.NotNil(t, target.Attest)
+	require.Len(t, target.Attest, 2)
+
+	attestMap := target.Attest.ToMap()
+	require.Contains(t, attestMap, "sbom")
+	require.Contains(t, attestMap, "provenance")
+
+	// When disabled=true, the value should be nil
+	require.Nil(t, attestMap["sbom"])
+	require.Nil(t, attestMap["provenance"])
+}
+
+func TestParseComposePull(t *testing.T) {
+	dt := []byte(`
+services:
+  app:
+    build:
+      context: .
+      pull: true
+`)
+
+	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(c.Targets))
+	target := c.Targets[0]
+	require.Equal(t, "app", target.Name)
+	require.Equal(t, true, *target.Pull)
+}
+
+func TestParseComposeNoCache(t *testing.T) {
+	dt := []byte(`
+services:
+  app:
+    build:
+      context: .
+      no_cache: true
+`)
+
+	c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
+	require.NoError(t, err)
+
+	require.Equal(t, 1, len(c.Targets))
+	target := c.Targets[0]
+	require.Equal(t, "app", target.Name)
+	require.Equal(t, true, *target.NoCache)
+}
+
 // chdir changes the current working directory to the named directory,
 // and then restore the original working directory at the end of the test.
 func chdir(t *testing.T, dir string) {


@@ -2,10 +2,15 @@ package hclparser

 import (
 	"errors"
+	"os"
+	"os/user"
 	"path"
+	"path/filepath"
+	"runtime"
 	"strings"
 	"time"

+	"github.com/docker/cli/cli/config"
 	"github.com/hashicorp/go-cty-funcs/cidr"
 	"github.com/hashicorp/go-cty-funcs/crypto"
 	"github.com/hashicorp/go-cty-funcs/encoding"
@@ -62,6 +67,7 @@ var stdlibFunctions = []funcDef{
 	{name: "greaterthan", fn: stdlib.GreaterThanFunc},
 	{name: "greaterthanorequalto", fn: stdlib.GreaterThanOrEqualToFunc},
 	{name: "hasindex", fn: stdlib.HasIndexFunc},
+	{name: "homedir", factory: homedirFunc},
 	{name: "indent", fn: stdlib.IndentFunc},
 	{name: "index", fn: stdlib.IndexFunc},
 	{name: "indexof", factory: indexOfFunc},
@@ -254,6 +260,27 @@ func timestampFunc() function.Function {
 	})
 }

+// homedirFunc constructs a function that returns the current user's home directory.
+func homedirFunc() function.Function {
+	return function.New(&function.Spec{
+		Description: `Returns the current user's home directory.`,
+		Params:      []function.Parameter{},
+		Type:        function.StaticReturnType(cty.String),
+		Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
+			home, err := os.UserHomeDir()
+			if err != nil {
+				if home == "" && runtime.GOOS != "windows" {
+					if u, err := user.Current(); err == nil {
+						return cty.StringVal(u.HomeDir), nil
+					}
+				}
+				return cty.StringVal(filepath.Dir(config.Dir())), nil
+			}
+			return cty.StringVal(home), nil
+		},
+	})
+}
 func Stdlib() map[string]function.Function {
 	funcs := make(map[string]function.Function, len(stdlibFunctions))
 	for _, v := range stdlibFunctions {


@@ -1,6 +1,7 @@
 package hclparser

 import (
+	"path/filepath"
 	"testing"

 	"github.com/stretchr/testify/require"
@@ -197,3 +198,10 @@ func TestSanitize(t *testing.T) {
 		})
 	}
 }
+
+func TestHomedir(t *testing.T) {
+	home, err := homedirFunc().Call(nil)
+	require.NoError(t, err)
+	require.NotEmpty(t, home.AsString())
+	require.True(t, filepath.IsAbs(home.AsString()))
+}


@@ -32,8 +32,12 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
 	var sessions []session.Attachable
 	var filename string

-	st, ok := dockerui.DetectGitContext(url, false)
+	keepGitDir := false
+	st, ok, err := dockerui.DetectGitContext(url, &keepGitDir)
 	if ok {
+		if err != nil {
+			return nil, nil, err
+		}
 		if ssh, err := build.CreateSSH([]*buildflags.SSH{{
 			ID:    "default",
 			Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),


@@ -93,6 +93,7 @@ type Options struct {
 	ProvenanceResponseMode confutil.MetadataProvenanceMode
 	SourcePolicy           *spb.Policy
 	GroupRef               string
+	Annotations            map[exptypes.AnnotationKey]string // Not used during build, annotations are already set in Exports. Just used to check for support with drivers.
 }

 type CallFunc struct {
type CallFunc struct { type CallFunc struct {


@@ -28,6 +28,7 @@ import (
 	"github.com/moby/buildkit/client"
 	"github.com/moby/buildkit/client/llb"
 	"github.com/moby/buildkit/client/ociindex"
+	"github.com/moby/buildkit/exporter/containerimage/exptypes"
 	gateway "github.com/moby/buildkit/frontend/gateway/client"
 	"github.com/moby/buildkit/identity"
 	"github.com/moby/buildkit/session"
@@ -37,6 +38,7 @@ import (
 	"github.com/moby/buildkit/solver/pb"
 	"github.com/moby/buildkit/util/apicaps"
 	"github.com/moby/buildkit/util/entitlements"
+	"github.com/moby/buildkit/util/gitutil"
 	"github.com/opencontainers/go-digest"
 	"github.com/pkg/errors"
 	"github.com/tonistiigi/fsutil"
@@ -116,6 +118,13 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *O
 		so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
 	}

+	if v, ok := opt.BuildArgs["BUILDKIT_SYNTAX"]; ok {
+		p := strings.SplitN(strings.TrimSpace(v), " ", 2)
+		so.Frontend = "gateway.v0"
+		so.FrontendAttrs["source"] = p[0]
+		so.FrontendAttrs["cmdline"] = v
+	}
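The `BUILDKIT_SYNTAX` handling above switches the solve to the `gateway.v0` frontend, using the first whitespace-separated token of the build-arg as the frontend image (`source`) while keeping the full value as `cmdline`. A minimal sketch of just that split:

```go
package main

import (
	"fmt"
	"strings"
)

// splitSyntax mirrors how the diff turns a BUILDKIT_SYNTAX build-arg into
// gateway frontend attributes: the first whitespace-separated token of the
// trimmed value becomes the frontend image ("source"), while the original
// value is preserved verbatim as "cmdline".
func splitSyntax(v string) (source, cmdline string) {
	p := strings.SplitN(strings.TrimSpace(v), " ", 2)
	return p[0], v
}

func main() {
	src, cmd := splitSyntax("docker/dockerfile:1.18 --some-flag")
	fmt.Println(src, "|", cmd)
}
```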
 	if v, ok := opt.BuildArgs["BUILDKIT_MULTI_PLATFORM"]; ok {
 		if v, _ := strconv.ParseBool(v); v {
 			so.FrontendAttrs["multi-platform"] = "true"
@@ -180,6 +189,20 @@ func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *O
 		}
 	}

+	// check if index annotations are supported by docker driver
+	if len(opt.Exports) > 0 && opt.CallFunc == nil && len(opt.Annotations) > 0 && nodeDriver.IsMobyDriver() && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
+		for _, exp := range opt.Exports {
+			if exp.Type == "image" || exp.Type == "docker" {
+				for ak := range opt.Annotations {
+					switch ak.Type {
+					case exptypes.AnnotationIndex, exptypes.AnnotationIndexDescriptor:
+						return nil, nil, errors.New("index annotations not supported for single platform export")
+					}
+				}
+			}
+		}
+	}
 	// fill in image exporter names from tags
 	if len(opt.Tags) > 0 {
 		tags := make([]string, len(opt.Tags))
@@ -382,6 +405,7 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw pro
 		dockerfileName    = inp.DockerfilePath
 		dockerfileSrcName = inp.DockerfilePath
 		toRemove          []string
+		caps              = map[string]struct{}{}
 	)

 	switch {
@@ -447,6 +471,12 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw pro
 			target.FrontendAttrs["dockerfilekey"] = "dockerfile"
 		}
 		target.FrontendAttrs["context"] = inp.ContextPath
+
+		gitRef, err := gitutil.ParseURL(inp.ContextPath)
+		if err == nil && len(gitRef.Query) > 0 {
+			caps["moby.buildkit.frontend.gitquerystring"] = struct{}{}
+		}
 	default:
 		return nil, errors.Errorf("unable to prepare context: path %q not found", inp.ContextPath)
 	}
@@ -494,7 +524,7 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw pro
 	target.FrontendAttrs["filename"] = dockerfileName

 	for k, v := range inp.NamedContexts {
-		target.FrontendAttrs["frontend.caps"] = "moby.buildkit.frontend.contexts+forward"
+		caps["moby.buildkit.frontend.contexts+forward"] = struct{}{}
 		if v.State != nil {
 			target.FrontendAttrs["context:"+k] = "input:" + k
 			if target.FrontendInputs == nil {
@@ -506,6 +536,12 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw pro
 		if IsRemoteURL(v.Path) || strings.HasPrefix(v.Path, "docker-image://") || strings.HasPrefix(v.Path, "target:") {
 			target.FrontendAttrs["context:"+k] = v.Path
+
+			gitRef, err := gitutil.ParseURL(v.Path)
+			if err == nil && len(gitRef.Query) > 0 {
+				if _, ok := caps["moby.buildkit.frontend.gitquerystring"]; !ok {
+					caps["moby.buildkit.frontend.gitquerystring+forward"] = struct{}{}
+				}
+			}
 			continue
 		}
@@ -535,6 +571,7 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw pro
 			target.FrontendAttrs["context:"+k] = "oci-layout://" + storeName + ":" + tag + "@" + dig
 			continue
 		}
+
 		st, err := os.Stat(v.Path)
 		if err != nil {
 			return nil, errors.Wrapf(err, "failed to get build context %v", k)
@@ -558,6 +595,12 @@ func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw pro
 		}
 	}

+	if len(caps) > 0 {
+		keys := slices.Collect(maps.Keys(caps))
+		slices.Sort(keys)
+		target.FrontendAttrs["frontend.caps"] = strings.Join(keys, ",")
+	}
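Because `frontend.caps` is now assembled from a set instead of a single assignment, the keys are sorted before joining so the attribute value is deterministic regardless of Go's randomized map iteration order. A small sketch of that final step (it needs Go 1.23 for `maps.Keys` returning an iterator and `slices.Collect`):

```go
package main

import (
	"fmt"
	"maps"
	"slices"
	"strings"
)

// joinCaps mirrors the diff's final step in loadInputs: collect the
// capability set into a sorted, comma-separated string suitable for the
// "frontend.caps" attribute.
func joinCaps(caps map[string]struct{}) string {
	keys := slices.Collect(maps.Keys(caps)) // map keys in arbitrary order
	slices.Sort(keys)                       // make the output deterministic
	return strings.Join(keys, ",")
}

func main() {
	caps := map[string]struct{}{
		"moby.buildkit.frontend.gitquerystring":   {},
		"moby.buildkit.frontend.contexts+forward": {},
	}
	fmt.Println(joinCaps(caps))
	// → moby.buildkit.frontend.contexts+forward,moby.buildkit.frontend.gitquerystring
}
```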
 	inp.DockerfileMappingSrc = dockerfileSrcName
 	inp.DockerfileMappingDst = dockerfileName
 	return release, nil


@@ -1,10 +1,15 @@
 package build

 import (
+	"cmp"
 	"context"
 	_ "crypto/sha256" // ensure digests can be computed
 	"encoding/json"
 	"io"
+	iofs "io/fs"
+	"path/filepath"
+	"slices"
+	"strings"
 	"sync"

 	"github.com/moby/buildkit/exporter/containerimage/exptypes"
@@ -14,6 +19,7 @@ import (
 	ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
+	"github.com/tonistiigi/fsutil/types"
 )

 // NewResultHandle stores a gateway client, gateway reference, and the error from
@@ -75,6 +81,40 @@ func (r *ResultHandle) NewContainer(ctx context.Context, cfg *InvokeConfig) (gat
 	return r.gwClient.NewContainer(ctx, req)
 }

+func (r *ResultHandle) StatFile(ctx context.Context, fpath string, cfg *InvokeConfig) (*types.Stat, error) {
+	containerCfg, err := r.getContainerConfig(cfg)
+	if err != nil {
+		return nil, err
+	}
+
+	candidateMounts := make([]gateway.Mount, 0, len(containerCfg.Mounts))
+	for _, m := range containerCfg.Mounts {
+		if strings.HasPrefix(fpath, m.Dest) {
+			candidateMounts = append(candidateMounts, m)
+		}
+	}
+	if len(candidateMounts) == 0 {
+		return nil, iofs.ErrNotExist
+	}
+
+	slices.SortFunc(candidateMounts, func(a, b gateway.Mount) int {
+		return cmp.Compare(len(a.Dest), len(b.Dest))
+	})
+	m := candidateMounts[len(candidateMounts)-1]
+
+	relpath, err := filepath.Rel(m.Dest, fpath)
+	if err != nil {
+		return nil, err
+	}
+	if m.Ref == nil {
+		return nil, iofs.ErrNotExist
+	}
+	req := gateway.StatRequest{Path: filepath.ToSlash(relpath)}
+	return m.Ref.StatFile(ctx, req)
+}
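`StatFile` resolves the requested path against the most specific mount: it keeps every mount whose destination is a prefix of the path, sorts the candidates by destination length, and picks the longest one. A sketch of just that selection logic, using plain destination strings instead of `gateway.Mount`:

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
	"strings"
)

// pickMount mirrors the mount selection in StatFile: among mount
// destinations that prefix the requested path, choose the longest
// (most specific) one; report false when nothing matches.
func pickMount(dests []string, fpath string) (string, bool) {
	var candidates []string
	for _, d := range dests {
		if strings.HasPrefix(fpath, d) {
			candidates = append(candidates, d)
		}
	}
	if len(candidates) == 0 {
		return "", false
	}
	slices.SortFunc(candidates, func(a, b string) int {
		return cmp.Compare(len(a), len(b))
	})
	return candidates[len(candidates)-1], true
}

func main() {
	d, ok := pickMount([]string{"/", "/app", "/app/cache"}, "/app/cache/data.txt")
	fmt.Println(d, ok)
	// → /app/cache true
}
```

Note that, like the original, a plain prefix check treats `/app2/x` as being under a `/app` mount; the real code inherits the same edge case.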
 func (r *ResultHandle) getContainerConfig(cfg *InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
 	if r.ref != nil && r.solveErr == nil {
 		logrus.Debugf("creating container from successful build")


@@ -11,7 +11,7 @@ import (
 	"github.com/docker/buildx/driver"
 	"github.com/docker/cli/opts"
-	"github.com/moby/buildkit/util/gitutil"
+	"github.com/moby/buildkit/frontend/dockerfile/dfgitutil"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
 )
@@ -36,7 +36,7 @@ func IsRemoteURL(c string) bool {
 	if isHTTPURL(c) {
 		return true
 	}
-	if _, err := gitutil.ParseGitRef(c); err == nil {
+	if _, ok, _ := dfgitutil.ParseGitRef(c); ok {
 		return true
 	}
 	return false


@@ -122,6 +122,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
 			Name:           driver.BuilderName(n.Name),
 			EndpointAddr:   n.Endpoint,
 			DockerAPI:      dockerapi,
+			DockerContext:  b.opts.dockerCli.CurrentContext(),
 			ContextStore:   b.opts.dockerCli.ContextStore(),
 			BuildkitdFlags: n.BuildkitdFlags,
 			Files:          n.Files,


@@ -492,7 +492,8 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			// Other common flags (noCache, pull and progress) are processed in runBake function.
 			return runBake(cmd.Context(), dockerCli, args, options, cFlags, filesFromEnv)
 		},
-		ValidArgsFunction: completion.BakeTargets(options.files),
+		ValidArgsFunction:     completion.BakeTargets(options.files),
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()


@@ -490,6 +490,7 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions, debugger debuggerOpt
 		ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
 			return nil, cobra.ShellCompDirectiveFilterDirs
 		},
+		DisableFlagsInUseLine: true,
 	}

 	var platformsDefault []string
@@ -691,6 +692,11 @@ func wrapBuildError(err error, bake bool) error {
 			msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
 			return &wrapped{err, msg}
 		}
+		if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.gitquerystring") {
+			msg := "current frontend does not support Git URLs with query string components."
+			msg += " Git URLs with query string are supported since Dockerfile v1.18 and BuildKit v0.24. Use BUILDKIT_SYNTAX build-arg, #syntax directive in Dockerfile or update to latest BuildKit."
+			return &wrapped{err, msg}
+		}
 	}
 	return err
 }
@@ -1003,9 +1009,8 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in *BuildOptions, inSt
 	}
 	opts.Platforms = platforms

-	dockerConfig := dockerCli.ConfigFile()
 	opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(authprovider.DockerAuthProviderConfig{
-		ConfigFile: dockerConfig,
+		ConfigFile: dockerCli.ConfigFile(),
 	}))

 	secrets, err := build.CreateSecrets(in.Secrets)
@@ -1063,13 +1068,13 @@ func RunBuild(ctx context.Context, dockerCli command.Cli, in *BuildOptions, inSt
 		}
 	}

-	annotations, err := buildflags.ParseAnnotations(in.Annotations)
+	opts.Annotations, err = buildflags.ParseAnnotations(in.Annotations)
 	if err != nil {
 		return nil, nil, errors.Wrap(err, "parse annotations")
 	}

 	for _, o := range outputs {
-		for k, v := range annotations {
+		for k, v := range opts.Annotations {
 			o.Attrs[k.String()] = v
 		}
 	}


@@ -98,7 +98,8 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runCreate(cmd.Context(), dockerCli, options, args)
 		},
-		ValidArgsFunction: completion.Disable,
+		ValidArgsFunction:     completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()


@@ -23,6 +23,8 @@ func dapCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "dap",
 		Short: "Start debug adapter protocol compatible debugger",
+		DisableFlagsInUseLine: true,
 	}

 	cobrautil.MarkCommandExperimental(cmd)
@@ -116,6 +118,7 @@ func dapAttachCmd() *cobra.Command {
 			}
 			return nil
 		},
+		DisableFlagsInUseLine: true,
 	}
 	return cmd
 }


@@ -44,6 +44,8 @@ func debugCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "debug",
 		Short: "Start debugger",
+		DisableFlagsInUseLine: true,
 	}

 	cobrautil.MarkCommandExperimental(cmd)


@@ -122,6 +122,7 @@ func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			opts.builder = rootOpts.builder
 			return runDialStdio(dockerCli, opts)
 		},
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()


@@ -4,8 +4,6 @@ import (
 	"context"
 	"fmt"
 	"io"
-	"os"
-	"strings"
 	"text/tabwriter"
 	"time"

@@ -13,20 +11,77 @@ import (
 	"github.com/docker/buildx/util/cobrautil/completion"
 	"github.com/docker/cli/cli"
 	"github.com/docker/cli/cli/command"
+	"github.com/docker/cli/cli/command/formatter"
 	"github.com/docker/cli/opts"
 	"github.com/docker/go-units"
 	"github.com/moby/buildkit/client"
+	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 	"golang.org/x/sync/errgroup"
 )

+const (
+	duIDHeader          = "ID"
+	duParentsHeader     = "PARENTS"
+	duCreatedAtHeader   = "CREATED AT"
+	duMutableHeader     = "MUTABLE"
+	duReclaimHeader     = "RECLAIMABLE"
+	duSharedHeader      = "SHARED"
+	duSizeHeader        = "SIZE"
+	duDescriptionHeader = "DESCRIPTION"
+	duUsageHeader       = "USAGE COUNT"
+	duLastUsedAtHeader  = "LAST ACCESSED"
+	duTypeHeader        = "TYPE"
+
+	duDefaultTableFormat = "table {{.ID}}\t{{.Reclaimable}}\t{{.Size}}\t{{.LastUsedAt}}"
+
+	duDefaultPrettyTemplate = `ID: {{.ID}}
+{{- if .Parents }}
+Parents:
+{{- range .Parents }}
+ - {{.}}
+{{- end }}
+{{- end }}
+Created at: {{.CreatedAt}}
+Mutable: {{.Mutable}}
+Reclaimable: {{.Reclaimable}}
+Shared: {{.Shared}}
+Size: {{.Size}}
+{{- if .Description}}
+Description: {{ .Description }}
+{{- end }}
+Usage count: {{.UsageCount}}
+{{- if .LastUsedAt}}
+Last used: {{ .LastUsedAt }}
+{{- end }}
+{{- if .Type}}
+Type: {{ .Type }}
+{{- end }}
+`
+)
+
 type duOptions struct {
 	builder string
 	filter  opts.FilterOpt
 	verbose bool
+	format  string
 }

 func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) error {
+	if opts.format != "" && opts.verbose {
+		return errors.New("--format and --verbose cannot be used together")
+	} else if opts.format == "" {
+		if opts.verbose {
+			opts.format = duDefaultPrettyTemplate
+		} else {
+			opts.format = duDefaultTableFormat
+		}
+	} else if opts.format == formatter.PrettyFormatKey {
+		opts.format = duDefaultPrettyTemplate
+	} else if opts.format == formatter.TableFormatKey {
+		opts.format = duDefaultTableFormat
+	}
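The flag handling at the top of `runDiskUsage` reduces to a small decision table: `--verbose` is shorthand for `--format=pretty`, the two flags are mutually exclusive, and the `table`/`pretty` keys expand to the built-in templates. A sketch with stand-ins for the `formatter` package keys and abbreviated templates (the real keys come from `github.com/docker/cli/cli/command/formatter`):

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-ins for formatter.PrettyFormatKey / formatter.TableFormatKey and
// the two default templates defined in the diff (pretty is abbreviated).
const (
	prettyFormatKey = "pretty"
	tableFormatKey  = "table"
	tableFormat     = "table {{.ID}}\t{{.Reclaimable}}\t{{.Size}}\t{{.LastUsedAt}}"
	prettyTemplate  = "ID: {{.ID}} ..." // abbreviated stand-in
)

// resolveFormat mirrors the diff's flag handling: --verbose is shorthand
// for --format=pretty, the two cannot be combined, and the "table"/"pretty"
// keys expand to the built-in templates; anything else is a user template.
func resolveFormat(format string, verbose bool) (string, error) {
	switch {
	case format != "" && verbose:
		return "", errors.New("--format and --verbose cannot be used together")
	case format == "" && verbose:
		return prettyTemplate, nil
	case format == "":
		return tableFormat, nil
	case format == prettyFormatKey:
		return prettyTemplate, nil
	case format == tableFormatKey:
		return tableFormat, nil
	}
	return format, nil
}

func main() {
	f, err := resolveFormat("", true)
	fmt.Println(f, err)
}
```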
pi, err := toBuildkitPruneInfo(opts.filter.Value()) pi, err := toBuildkitPruneInfo(opts.filter.Value())
if err != nil { if err != nil {
return err return err
@@ -74,33 +129,53 @@ func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) error {
 		return err
 	}

-	tw := tabwriter.NewWriter(os.Stdout, 1, 8, 1, '\t', 0)
-	first := true
+	fctx := formatter.Context{
+		Output: dockerCli.Out(),
+		Format: formatter.Format(opts.format),
+	}
+
+	var dus []*client.UsageInfo
 	for _, du := range out {
-		if du == nil {
-			continue
-		}
-		if opts.verbose {
-			printVerbose(tw, du)
-		} else {
-			if first {
-				printTableHeader(tw)
-				first = false
-			}
-			for _, di := range du {
-				printTableRow(tw, di)
-			}
-			tw.Flush()
+		if du != nil {
+			dus = append(dus, du...)
 		}
 	}

-	if opts.filter.Value().Len() == 0 {
-		printSummary(tw, out)
-	}
-
-	tw.Flush()
-	return nil
+	render := func(format func(subContext formatter.SubContext) error) error {
+		for _, du := range dus {
+			if err := format(&diskusageContext{
+				format: fctx.Format,
+				du:     du,
+			}); err != nil {
+				return err
+			}
+		}
+		return nil
+	}
+
+	duCtx := diskusageContext{}
+	duCtx.Header = formatter.SubHeaderContext{
+		"ID":          duIDHeader,
+		"Parents":     duParentsHeader,
+		"CreatedAt":   duCreatedAtHeader,
+		"Mutable":     duMutableHeader,
+		"Reclaimable": duReclaimHeader,
+		"Shared":      duSharedHeader,
+		"Size":        duSizeHeader,
+		"Description": duDescriptionHeader,
+		"UsageCount":  duUsageHeader,
+		"LastUsedAt":  duLastUsedAtHeader,
+		"Type":        duTypeHeader,
+	}
+
+	defer func() {
+		if (fctx.Format != duDefaultTableFormat && fctx.Format != duDefaultPrettyTemplate) || fctx.Format.IsJSON() || opts.filter.Value().Len() > 0 {
+			return
+		}
+		printSummary(dockerCli.Out(), out)
+	}()
+	return fctx.Write(&duCtx, render)
 }
@@ -114,69 +189,84 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			options.builder = rootOpts.builder
 			return runDiskUsage(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()
 	flags.Var(&options.filter, "filter", "Provide filter values")
-	flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
+	flags.BoolVar(&options.verbose, "verbose", false, `Shorthand for "--format=pretty"`)
+	flags.StringVar(&options.format, "format", "", "Format the output")

 	return cmd
 }
-func printKV(w io.Writer, k string, v any) {
-	fmt.Fprintf(w, "%s:\t%v\n", k, v)
-}
-
-func printVerbose(tw *tabwriter.Writer, du []*client.UsageInfo) {
-	for _, di := range du {
-		printKV(tw, "ID", di.ID)
-		if len(di.Parents) != 0 {
-			printKV(tw, "Parent", strings.Join(di.Parents, ","))
-		}
-		printKV(tw, "Created at", di.CreatedAt)
-		printKV(tw, "Mutable", di.Mutable)
-		printKV(tw, "Reclaimable", !di.InUse)
-		printKV(tw, "Shared", di.Shared)
-		printKV(tw, "Size", units.HumanSize(float64(di.Size)))
-		if di.Description != "" {
-			printKV(tw, "Description", di.Description)
-		}
-		printKV(tw, "Usage count", di.UsageCount)
-		if di.LastUsedAt != nil {
-			printKV(tw, "Last used", units.HumanDuration(time.Since(*di.LastUsedAt))+" ago")
-		}
-		if di.RecordType != "" {
-			printKV(tw, "Type", di.RecordType)
-		}
-		fmt.Fprintf(tw, "\n")
-	}
-	tw.Flush()
-}
-
-func printTableHeader(tw *tabwriter.Writer) {
-	fmt.Fprintln(tw, "ID\tRECLAIMABLE\tSIZE\tLAST ACCESSED")
-}
-
-func printTableRow(tw *tabwriter.Writer, di *client.UsageInfo) {
-	id := di.ID
-	if di.Mutable {
-		id += "*"
-	}
-	size := units.HumanSize(float64(di.Size))
-	if di.Shared {
-		size += "*"
-	}
-	lastAccessed := ""
-	if di.LastUsedAt != nil {
-		lastAccessed = units.HumanDuration(time.Since(*di.LastUsedAt)) + " ago"
-	}
-	fmt.Fprintf(tw, "%-40s\t%-5v\t%-10s\t%s\n", id, !di.InUse, size, lastAccessed)
-}
-
+type diskusageContext struct {
+	formatter.HeaderContext
+	format formatter.Format
+	du     *client.UsageInfo
+}
+
+func (d *diskusageContext) MarshalJSON() ([]byte, error) {
+	return formatter.MarshalJSON(d)
+}
+
+func (d *diskusageContext) ID() string {
+	id := d.du.ID
+	if d.format.IsTable() && d.du.Mutable {
+		id += "*"
+	}
+	return id
+}
+
+func (d *diskusageContext) Parents() []string {
+	return d.du.Parents
+}
+
+func (d *diskusageContext) CreatedAt() string {
+	return d.du.CreatedAt.String()
+}
+
+func (d *diskusageContext) Mutable() bool {
+	return d.du.Mutable
+}
+
+func (d *diskusageContext) Reclaimable() bool {
+	return !d.du.InUse
+}
+
+func (d *diskusageContext) Shared() bool {
+	return d.du.Shared
+}
+
+func (d *diskusageContext) Size() string {
+	size := units.HumanSize(float64(d.du.Size))
+	if d.format.IsTable() && d.du.Shared {
+		size += "*"
+	}
+	return size
+}
+
+func (d *diskusageContext) Description() string {
+	return d.du.Description
+}
+
+func (d *diskusageContext) UsageCount() int {
+	return d.du.UsageCount
+}
+
+func (d *diskusageContext) LastUsedAt() string {
+	if d.du.LastUsedAt != nil {
+		return units.HumanDuration(time.Since(*d.du.LastUsedAt)) + " ago"
+	}
+	return ""
+}
+
+func (d *diskusageContext) Type() string {
+	return string(d.du.RecordType)
+}
+
-func printSummary(tw *tabwriter.Writer, dus [][]*client.UsageInfo) {
+func printSummary(w io.Writer, dus [][]*client.UsageInfo) {
 	total := int64(0)
 	reclaimable := int64(0)
 	shared := int64(0)
@@ -195,11 +285,11 @@ func printSummary(w io.Writer, dus [][]*client.UsageInfo) {
 		}
 	}

+	tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
 	if shared > 0 {
 		fmt.Fprintf(tw, "Shared:\t%s\n", units.HumanSize(float64(shared)))
 		fmt.Fprintf(tw, "Private:\t%s\n", units.HumanSize(float64(total-shared)))
 	}

 	fmt.Fprintf(tw, "Reclaimable:\t%s\n", units.HumanSize(float64(reclaimable)))
 	fmt.Fprintf(tw, "Total:\t%s\n", units.HumanSize(float64(total)))
 	tw.Flush()

@@ -160,7 +160,8 @@ func exportCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runExport(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -125,7 +125,8 @@ func importCmd(dockerCli command.Cli, _ RootOptions) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runImport(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -656,7 +656,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runInspect(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	cmd.AddCommand(

@@ -129,7 +129,8 @@ func attachmentCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runAttachment(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -96,7 +96,8 @@ func logsCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runLogs(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -103,7 +103,8 @@ func lsCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runLs(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -55,7 +55,8 @@ func openCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runOpen(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	return cmd

@@ -129,7 +129,8 @@ func rmCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runRm(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -16,6 +16,8 @@ func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
 		Short:             "Commands to work on build records",
 		ValidArgsFunction: completion.Disable,
 		RunE:              rootcmd.RunE,
+		DisableFlagsInUseLine: true,
 	}

 	cmd.AddCommand(

@@ -199,7 +199,8 @@ func traceCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runTrace(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -18,7 +18,7 @@ import (
 	"github.com/docker/buildx/localstate"
 	"github.com/docker/cli/cli/command"
 	controlapi "github.com/moby/buildkit/api/services/control"
-	"github.com/moby/buildkit/util/gitutil"
+	"github.com/moby/buildkit/frontend/dockerfile/dfgitutil"
 	"github.com/pkg/errors"
 	"golang.org/x/sync/errgroup"
 )
@@ -26,6 +26,10 @@ import (
 const recordsLimit = 50

 func buildName(fattrs map[string]string, ls *localstate.State) string {
+	if v, ok := fattrs["build-arg:BUILDKIT_BUILD_NAME"]; ok && v != "" {
+		return v
+	}
+
 	var res string

 	var target, contextPath, dockerfilePath, vcsSource string
@@ -328,7 +332,7 @@ func valueFiler(key, value, sep string) matchFunc {
 				recValue = v
 			} else {
 				if context, ok := rec.FrontendAttrs["context"]; ok {
-					if ref, err := gitutil.ParseGitRef(context); err == nil {
+					if ref, _, err := dfgitutil.ParseGitRef(context); err == nil {
 						recValue = ref.Remote
 					}
 				}

@@ -279,7 +279,8 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
 			options.builder = *opts.Builder
 			return runCreate(cmd.Context(), dockerCli, options, args)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -52,7 +52,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
 			options.builder = *rootOpts.Builder
 			return runInspect(cmd.Context(), dockerCli, options, args[0])
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -12,10 +12,11 @@ type RootOptions struct {
 func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:               "imagetools",
 		Short:             "Commands to work on images in registry",
 		ValidArgsFunction: completion.Disable,
 		RunE:              rootcmd.RunE,
+		DisableFlagsInUseLine: true,
 	}

 	cmd.AddCommand(

@@ -182,7 +182,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			}
 			return runInspect(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.BuilderNames(dockerCli),
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -47,8 +47,9 @@ func installCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runInstall(dockerCli, options)
 		},
 		Hidden:            true,
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	// hide builder persistent flag for this command

@@ -107,7 +107,8 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runLs(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -3,6 +3,7 @@ package commands
 import (
 	"context"
 	"fmt"
+	"io"
 	"os"
 	"strings"
 	"text/tabwriter"
@@ -169,12 +170,13 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			options.builder = rootOpts.builder
 			return runPrune(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()
 	flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images")
-	flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`)
+	flags.Var(&options.filter, "filter", `Provide filter values`)
 	flags.Var(&options.reservedSpace, "reserved-space", "Amount of disk space always allowed to keep for cache")
 	flags.Var(&options.minFreeSpace, "min-free-space", "Target amount of free disk space after pruning")
 	flags.Var(&options.maxUsedSpace, "max-used-space", "Maximum amount of disk space allowed to keep for cache")
@@ -241,3 +243,55 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
 		Filter: []string{strings.Join(filters, ",")},
 	}, nil
 }
func printKV(w io.Writer, k string, v any) {
fmt.Fprintf(w, "%s:\t%v\n", k, v)
}
func printVerbose(tw *tabwriter.Writer, du []*client.UsageInfo) {
for _, di := range du {
printKV(tw, "ID", di.ID)
if len(di.Parents) != 0 {
printKV(tw, "Parent", strings.Join(di.Parents, ","))
}
printKV(tw, "Created at", di.CreatedAt)
printKV(tw, "Mutable", di.Mutable)
printKV(tw, "Reclaimable", !di.InUse)
printKV(tw, "Shared", di.Shared)
printKV(tw, "Size", units.HumanSize(float64(di.Size)))
if di.Description != "" {
printKV(tw, "Description", di.Description)
}
printKV(tw, "Usage count", di.UsageCount)
if di.LastUsedAt != nil {
printKV(tw, "Last used", units.HumanDuration(time.Since(*di.LastUsedAt))+" ago")
}
if di.RecordType != "" {
printKV(tw, "Type", di.RecordType)
}
fmt.Fprintf(tw, "\n")
}
tw.Flush()
}
func printTableHeader(tw *tabwriter.Writer) {
fmt.Fprintln(tw, "ID\tRECLAIMABLE\tSIZE\tLAST ACCESSED")
}
func printTableRow(tw *tabwriter.Writer, di *client.UsageInfo) {
id := di.ID
if di.Mutable {
id += "*"
}
size := units.HumanSize(float64(di.Size))
if di.Shared {
size += "*"
}
lastAccessed := ""
if di.LastUsedAt != nil {
lastAccessed = units.HumanDuration(time.Since(*di.LastUsedAt)) + " ago"
}
fmt.Fprintf(tw, "%-40s\t%-5v\t%-10s\t%s\n", id, !di.InUse, size, lastAccessed)
}

@@ -111,7 +111,8 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			}
 			return runRm(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.BuilderNames(dockerCli),
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -71,6 +71,7 @@ func NewRootCmd(name string, isPlugin bool, dockerCli *command.DockerCli) *cobra.Command {
 				Status: fmt.Sprintf("ERROR: unknown command: %q", args[0]),
 			}
 		},
+		DisableFlagsInUseLine: true,
 	}
 	if !isPlugin {
 		// match plugin behavior for standalone mode
@@ -78,8 +79,6 @@ func NewRootCmd(name string, isPlugin bool, dockerCli *command.DockerCli) *cobra.Command {
 	cmd.SilenceUsage = true
 	cmd.SilenceErrors = true
 	cmd.TraverseChildren = true
-	cmd.DisableFlagsInUseLine = true
-	cli.DisableFlagsInUseLine(cmd)

 	if !confutil.IsExperimental() {
 		cmd.SetHelpTemplate(cmd.HelpTemplate() + "\n" + experimentalCommandHint + "\n")
 	}

commands/root_test.go (new file, 33 lines)

@@ -0,0 +1,33 @@
package commands
import (
stderrs "errors"
"testing"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"github.com/stretchr/testify/require"
)
func TestDisableFlagsInUseLineIsSet(t *testing.T) {
cmd, err := command.NewDockerCli()
require.NoError(t, err)
rootCmd := NewRootCmd("buildx", true, cmd)
var errs []error
visitAll(rootCmd, func(c *cobra.Command) {
if !c.DisableFlagsInUseLine {
errs = append(errs, errors.New("DisableFlagsInUseLine is not set for "+c.CommandPath()))
}
})
err = stderrs.Join(errs...)
require.NoError(t, err)
}
func visitAll(root *cobra.Command, fn func(*cobra.Command)) {
for _, cmd := range root.Commands() {
visitAll(cmd, fn)
}
fn(root)
}

@@ -44,7 +44,8 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			}
 			return runStop(cmd.Context(), dockerCli, options)
 		},
 		ValidArgsFunction: completion.BuilderNames(dockerCli),
+		DisableFlagsInUseLine: true,
 	}

 	return cmd

@@ -53,8 +53,9 @@ func uninstallCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runUninstall(dockerCli, options)
 		},
 		Hidden:            true,
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	// hide builder persistent flag for this command

@@ -71,7 +71,8 @@ func useCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
 			}
 			return runUse(dockerCli, options)
 		},
 		ValidArgsFunction: completion.BuilderNames(dockerCli),
+		DisableFlagsInUseLine: true,
 	}

 	flags := cmd.Flags()

@@ -24,7 +24,8 @@ func versionCmd(dockerCli command.Cli) *cobra.Command {
 		RunE: func(cmd *cobra.Command, args []string) error {
 			return runVersion(dockerCli)
 		},
 		ValidArgsFunction: completion.Disable,
+		DisableFlagsInUseLine: true,
 	}

 	// hide builder persistent flag for this command

@@ -38,9 +38,14 @@ type Adapter[C LaunchConfig] struct {
 	threadsMu    sync.RWMutex
 	nextThreadID int

-	breakpointMap *breakpointMap
-	sourceMap     sourceMap
-	idPool        *idPool
+	sharedState
+}
+
+type sharedState struct {
+	breakpointMap *breakpointMap
+	sourceMap     *sourceMap
+	idPool        *idPool
+	sh            *shell
 }

 func New[C LaunchConfig]() *Adapter[C] {
@@ -51,8 +56,12 @@ func New[C LaunchConfig]() *Adapter[C] {
 		evaluateReqCh: make(chan *evaluateRequest),
 		threads:       make(map[int]*thread),
 		nextThreadID:  1,
-		breakpointMap: newBreakpointMap(),
-		idPool:        new(idPool),
+		sharedState: sharedState{
+			breakpointMap: newBreakpointMap(),
+			sourceMap:     new(sourceMap),
+			idPool:        new(idPool),
+			sh:            newShell(),
+		},
 	}
 	d.srv = NewServer(d.dapHandler())
 	return d
@@ -161,26 +170,21 @@ func (d *Adapter[C]) Next(c Context, req *dap.NextRequest, resp *dap.NextResponse) error {
 }

 func (d *Adapter[C]) StepIn(c Context, req *dap.StepInRequest, resp *dap.StepInResponse) error {
-	var (
-		subReq  dap.NextRequest
-		subResp dap.NextResponse
-	)
-	subReq.Arguments.ThreadId = req.Arguments.ThreadId
-	subReq.Arguments.SingleThread = req.Arguments.SingleThread
-	subReq.Arguments.Granularity = req.Arguments.Granularity
-	return d.Next(c, &subReq, &subResp)
+	d.threadsMu.RLock()
+	t := d.threads[req.Arguments.ThreadId]
+	d.threadsMu.RUnlock()
+
+	t.StepIn()
+	return nil
 }

 func (d *Adapter[C]) StepOut(c Context, req *dap.StepOutRequest, resp *dap.StepOutResponse) error {
-	var (
-		subReq  dap.ContinueRequest
-		subResp dap.ContinueResponse
-	)
-	subReq.Arguments.ThreadId = req.Arguments.ThreadId
-	subReq.Arguments.SingleThread = req.Arguments.SingleThread
-	return d.Continue(c, &subReq, &subResp)
+	d.threadsMu.RLock()
+	t := d.threads[req.Arguments.ThreadId]
+	d.threadsMu.RUnlock()
+
+	t.StepOut()
+	return nil
 }

 func (d *Adapter[C]) SetBreakpoints(c Context, req *dap.SetBreakpointsRequest, resp *dap.SetBreakpointsResponse) error {
@@ -238,12 +242,10 @@ func (d *Adapter[C]) newThread(ctx Context, name string) (t *thread) {
 	d.threadsMu.Lock()
 	id := d.nextThreadID
 	t = &thread{
-		id:            id,
-		name:          name,
-		sourceMap:     &d.sourceMap,
-		breakpointMap: d.breakpointMap,
-		idPool:        d.idPool,
-		variables:     newVariableReferences(),
+		id:          id,
+		name:        name,
+		sharedState: d.sharedState,
+		variables:   newVariableReferences(),
 	}
 	d.threads[t.id] = t
 	d.nextThreadID++
@@ -266,20 +268,6 @@ func (d *Adapter[C]) getThread(id int) (t *thread) {
 	return t
 }

-func (d *Adapter[C]) getFirstThread() (t *thread) {
-	d.threadsMu.Lock()
-	defer d.threadsMu.Unlock()
-	for _, thread := range d.threads {
-		if thread.isPaused() {
-			if t == nil || thread.id < t.id {
-				t = thread
-			}
-		}
-	}
-	return t
-}
-
 func (d *Adapter[C]) deleteThread(ctx Context, t *thread) {
 	d.threadsMu.Lock()
 	if t := d.threads[t.id]; t != nil {

@@ -81,14 +81,14 @@ func NewTestAdapter[C LaunchConfig](t *testing.T) (*Adapter[C], Conn, *Client) {
 	})

 	clientConn := logConn(t, "client", NewConn(rd2, wr1))
-	t.Cleanup(func() {
-		clientConn.Close()
-	})
+	t.Cleanup(func() { clientConn.Close() })

 	adapter := New[C]()
 	t.Cleanup(func() { adapter.Stop() })

 	client := NewClient(clientConn)
+	t.Cleanup(func() { client.Close() })

 	return adapter, srvConn, client
 }

dap/debug_shell.go (new file, 308 lines)

@@ -0,0 +1,308 @@
package dap
import (
"context"
"fmt"
"io"
"io/fs"
"net"
"os"
"path/filepath"
"strings"
"sync"
"github.com/docker/buildx/build"
"github.com/docker/buildx/util/ioset"
"github.com/docker/cli/cli-plugins/metadata"
"github.com/google/go-dap"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
type shell struct {
// SocketPath is set on the first time Init is invoked
// and stays that way.
SocketPath string
// Locks access to the session from the debug adapter.
// Only one debug thread can access the shell at a time.
sem *semaphore.Weighted
// Initialized once per shell and reused.
once sync.Once
err error
l net.Listener
eg *errgroup.Group
// For the specific session.
fwd *ioset.Forwarder
connected chan struct{}
mu sync.RWMutex
}
func newShell() *shell {
sh := &shell{
sem: semaphore.NewWeighted(1),
}
sh.resetSession()
return sh
}
func (s *shell) resetSession() {
s.mu.Lock()
defer s.mu.Unlock()
s.fwd = nil
s.connected = make(chan struct{})
}
// Init initializes the shell for connections on the client side.
// Attach will block until the terminal has been initialized.
func (s *shell) Init() error {
return s.listen()
}
func (s *shell) listen() error {
s.once.Do(func() {
var dir string
dir, s.err = os.MkdirTemp("", "buildx-dap-exec")
if s.err != nil {
return
}
defer func() {
if s.err != nil {
os.RemoveAll(dir)
}
}()
s.SocketPath = filepath.Join(dir, "s.sock")
s.l, s.err = net.Listen("unix", s.SocketPath)
if s.err != nil {
return
}
s.eg, _ = errgroup.WithContext(context.Background())
s.eg.Go(s.acceptLoop)
})
return s.err
}
func (s *shell) acceptLoop() error {
for {
if err := s.accept(); err != nil {
if errors.Is(err, net.ErrClosed) {
return nil
}
return err
}
}
}
func (s *shell) accept() error {
conn, err := s.l.Accept()
if err != nil {
return err
}
s.mu.Lock()
defer s.mu.Unlock()
if s.fwd != nil {
writeLine(conn, "Error: Already connected to exec instance.")
conn.Close()
return nil
}
// Set the input of the forwarder to the connection.
s.fwd = ioset.NewForwarder()
s.fwd.SetIn(&ioset.In{
Stdin: io.NopCloser(conn),
Stdout: conn,
Stderr: nopCloser{conn},
})
close(s.connected)
writeLine(conn, "Attached to build process.")
return nil
}
// Attach will attach the given thread to the shell.
// Only one container can attach to a shell at any given time.
// Other attaches will block until the context is canceled or it is
// able to reserve the shell for its own use.
//
// This method is intended to be called by paused threads.
func (s *shell) Attach(ctx context.Context, t *thread) {
rCtx := t.rCtx
if rCtx == nil {
return
}
var f dap.StackFrame
if len(t.stackTrace) > 0 {
f = t.frames[t.stackTrace[0]].StackFrame
}
cfg := &build.InvokeConfig{Tty: true}
if len(cfg.Entrypoint) == 0 && len(cfg.Cmd) == 0 {
cfg.Entrypoint = []string{"/bin/sh"} // launch shell by default
cfg.Cmd = []string{}
cfg.NoCmd = false
}
for {
if err := s.attach(ctx, f, rCtx, cfg); err != nil {
return
}
}
}
func (s *shell) wait(ctx context.Context) error {
s.mu.RLock()
connected := s.connected
s.mu.RUnlock()
select {
case <-connected:
return nil
case <-ctx.Done():
return context.Cause(ctx)
}
}
func (s *shell) attach(ctx context.Context, f dap.StackFrame, rCtx *build.ResultHandle, cfg *build.InvokeConfig) (retErr error) {
if err := s.wait(ctx); err != nil {
return err
}
in, out := ioset.Pipe()
defer in.Close()
defer out.Close()
s.mu.RLock()
fwd := s.fwd
s.mu.RUnlock()
fwd.SetOut(&out)
defer func() {
if retErr != nil {
fwd.SetOut(nil)
}
}()
// Check if the entrypoint is executable. If it isn't, don't bother
// trying to invoke.
if reason, ok := s.canInvoke(ctx, rCtx, cfg); !ok {
writeLineF(in.Stdout, "Build container is not executable. (reason: %s)", reason)
<-ctx.Done()
return context.Cause(ctx)
}
if err := s.sem.Acquire(ctx, 1); err != nil {
return err
}
defer s.sem.Release(1)
ctr, err := build.NewContainer(ctx, rCtx, cfg)
if err != nil {
return err
}
defer ctr.Cancel()
writeLineF(in.Stdout, "Running %s in build container from line %d.",
strings.Join(append(cfg.Entrypoint, cfg.Cmd...), " "),
f.Line,
)
writeLine(in.Stdout, "Changes to the container will be reset after the next step is executed.")
err = ctr.Exec(ctx, cfg, in.Stdin, in.Stdout, in.Stderr)
// Send newline to properly terminate the output.
writeLine(in.Stdout, "")
if err != nil {
return err
}
fwd.Close()
s.resetSession()
return nil
}
func (s *shell) canInvoke(ctx context.Context, rCtx *build.ResultHandle, cfg *build.InvokeConfig) (reason string, ok bool) {
var cmd string
if len(cfg.Entrypoint) > 0 {
cmd = cfg.Entrypoint[0]
} else if len(cfg.Cmd) > 0 {
cmd = cfg.Cmd[0]
}
if cmd == "" {
return "no command specified", false
}
st, err := rCtx.StatFile(ctx, cmd, cfg)
if err != nil {
return fmt.Sprintf("stat error: %s", err), false
}
mode := fs.FileMode(st.Mode)
if !mode.IsRegular() {
return fmt.Sprintf("%s: not a file", cmd), false
}
if mode&0111 == 0 {
return fmt.Sprintf("%s: not an executable", cmd), false
}
return "", true
}
// SendRunInTerminalRequest will send the request to the client to attach to
// the socket path that was created by Init. This is intended to be run
// from the adapter and interact directly with the client.
func (s *shell) SendRunInTerminalRequest(ctx Context) error {
// TODO: this should work in standalone mode too.
docker := os.Getenv(metadata.ReexecEnvvar)
req := &dap.RunInTerminalRequest{
Request: dap.Request{
Command: "runInTerminal",
},
Arguments: dap.RunInTerminalRequestArguments{
Kind: "integrated",
Args: []string{docker, "buildx", "dap", "attach", s.SocketPath},
Env: map[string]any{
"BUILDX_EXPERIMENTAL": "1",
},
},
}
resp := ctx.Request(req)
if !resp.GetResponse().Success {
return errors.New(resp.GetResponse().Message)
}
return nil
}
type nopCloser struct {
io.Writer
}
func (nopCloser) Close() error {
return nil
}
func writeLine(w io.Writer, msg string) {
if os.PathSeparator == '\\' {
fmt.Fprint(w, msg+"\r\n")
} else {
fmt.Fprintln(w, msg)
}
}
func writeLineF(w io.Writer, format string, a ...any) {
if os.PathSeparator == '\\' {
fmt.Fprintf(w, format+"\r\n", a...)
} else {
fmt.Fprintf(w, format+"\n", a...)
}
}

View File

@@ -29,43 +29,69 @@ func (d *Adapter[C]) Evaluate(ctx Context, req *dap.EvaluateRequest, resp *dap.E
         return nil
     }
 
-    var t *thread
-    if req.Arguments.FrameId > 0 {
-        if t = d.getThreadByFrameID(req.Arguments.FrameId); t == nil {
-            return errors.Errorf("no thread with frame id %d", req.Arguments.FrameId)
-        }
-    } else {
-        if t = d.getFirstThread(); t == nil {
-            return errors.New("no paused thread")
-        }
-    }
-
-    cmd := d.replCommands(ctx, t, resp)
+    var retErr error
+    cmd := d.replCommands(ctx, resp, &retErr)
     cmd.SetArgs(args)
     cmd.SetErr(d.Out())
     if err := cmd.Execute(); err != nil {
-        fmt.Fprintf(d.Out(), "ERROR: %+v\n", err)
+        // This error should only happen if there was something command
+        // related that malfunctioned as it will also print usage.
+        // Normal errors should set retErr from replCommands.
+        return err
     }
-    return nil
+    return retErr
 }
 
-func (d *Adapter[C]) replCommands(ctx Context, t *thread, resp *dap.EvaluateResponse) *cobra.Command {
-    rootCmd := &cobra.Command{}
-
-    execCmd := &cobra.Command{
-        Use: "exec",
-        RunE: func(cmd *cobra.Command, args []string) error {
-            if !d.supportsExec {
-                return errors.New("cannot exec without runInTerminal client capability")
-            }
-            return t.Exec(ctx, args, resp)
-        },
+func (d *Adapter[C]) replCommands(ctx Context, resp *dap.EvaluateResponse, retErr *error) *cobra.Command {
+    rootCmd := &cobra.Command{
+        SilenceErrors: true,
     }
+
+    execCmd, _ := replCmd(ctx, "exec", resp, retErr, d.execCmd)
     rootCmd.AddCommand(execCmd)
     return rootCmd
 }
 
-func (t *thread) Exec(ctx Context, args []string, eresp *dap.EvaluateResponse) (retErr error) {
+type execOptions struct{}
+
+func (d *Adapter[C]) execCmd(ctx Context, _ []string, _ execOptions) (string, error) {
+    if !d.supportsExec {
+        return "", errors.New("cannot exec without runInTerminal client capability")
+    }
+
+    // Initialize the shell if it hasn't been done before. This will allow any
+    // containers that are attempting to attach to actually attach.
+    if err := d.sh.Init(); err != nil {
+        return "", err
+    }
+
+    // Send the request to attach to the terminal.
+    if err := d.sh.SendRunInTerminalRequest(ctx); err != nil {
+        return "", err
+    }
+    return fmt.Sprintf("Started process attached to %s.", d.sh.SocketPath), nil
+}
+
+func replCmd[Flags any, RetVal any](ctx Context, name string, resp *dap.EvaluateResponse, retErr *error, fn func(ctx Context, args []string, flags Flags) (RetVal, error)) (*cobra.Command, *Flags) {
+    flags := new(Flags)
+    return &cobra.Command{
+        Use: name,
+        Run: func(cmd *cobra.Command, args []string) {
+            v, err := fn(ctx, args, *flags)
+            if err != nil {
+                *retErr = err
+                return
+            }
+            resp.Body.Result = fmt.Sprint(v)
+        },
+    }, flags
+}
+
+func (t *thread) Exec(ctx Context, args []string) (message string, retErr error) {
+    if t.rCtx == nil {
+        return "", errors.New("no container context for exec")
+    }
+
     cfg := &build.InvokeConfig{Tty: true}
     if len(cfg.Entrypoint) == 0 && len(cfg.Cmd) == 0 {
         cfg.Entrypoint = []string{"/bin/sh"} // launch shell by default
@@ -75,7 +101,7 @@ func (t *thread) Exec(ctx Context, args []string, eresp *dap.EvaluateResponse) (
     ctr, err := build.NewContainer(ctx, t.rCtx, cfg)
     if err != nil {
-        return err
+        return "", err
     }
     defer func() {
         if retErr != nil {
@@ -85,7 +111,7 @@ func (t *thread) Exec(ctx Context, args []string, eresp *dap.EvaluateResponse) (
     dir, err := os.MkdirTemp("", "buildx-dap-exec")
     if err != nil {
-        return err
+        return "", err
     }
     defer func() {
         if retErr != nil {
@@ -96,7 +122,7 @@ func (t *thread) Exec(ctx Context, args []string, eresp *dap.EvaluateResponse) (
     socketPath := filepath.Join(dir, "s.sock")
     l, err := net.Listen("unix", socketPath)
     if err != nil {
-        return err
+        return "", err
    }
 
     go func() {
@@ -121,11 +147,11 @@ func (t *thread) Exec(ctx Context, args []string, eresp *dap.EvaluateResponse) (
     resp := ctx.Request(req)
     if !resp.GetResponse().Success {
-        return errors.New(resp.GetResponse().Message)
+        return "", errors.New(resp.GetResponse().Message)
     }
 
-    eresp.Body.Result = fmt.Sprintf("Started process attached to %s.", socketPath)
-    return nil
+    message = fmt.Sprintf("Started process attached to %s.", socketPath)
+    return message, nil
 }
 
 func (t *thread) runExec(l net.Listener, ctr *build.Container, cfg *build.InvokeConfig) {
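
The `replCmd` generic above adapts a typed handler into a cobra command, reporting success through the DAP response body and failure through a captured `*error`. A minimal dependency-free sketch of that same pattern (all names here are mine, not buildx's):

```go
package main

import "fmt"

// makeCmd wraps a typed handler the way replCmd wraps one into a cobra
// command: success is written to *result, failure to *retErr.
func makeCmd[Flags any, Ret any](
	fn func(args []string, flags Flags) (Ret, error),
	result *string, retErr *error,
) func(args []string) {
	flags := new(Flags)
	return func(args []string) {
		v, err := fn(args, *flags)
		if err != nil {
			*retErr = err
			return
		}
		*result = fmt.Sprint(v)
	}
}

type execOptions struct{}

func main() {
	var (
		result string
		retErr error
	)
	run := makeCmd(func(args []string, _ execOptions) (string, error) {
		return "Started process attached to /tmp/s.sock.", nil
	}, &result, &retErr)
	run(nil)
	fmt.Println(result, retErr)
}
```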

View File

@@ -2,8 +2,8 @@ package dap
 import (
     "context"
+    "path"
     "path/filepath"
-    "slices"
     "sync"
 
     "github.com/docker/buildx/build"
@@ -23,10 +23,8 @@ type thread struct {
     name string
 
     // Persistent state from the adapter.
-    idPool        *idPool
-    sourceMap     *sourceMap
-    breakpointMap *breakpointMap
-    variables     *variableReferences
+    sharedState
+    variables *variableReferences
 
     // Inputs to the evaluate call.
     c gateway.Client
@@ -40,27 +38,21 @@ type thread struct {
     head digest.Digest
     bps  map[digest.Digest]int
+    frames         map[int32]*frame
+    framesByDigest map[digest.Digest]*frame
 
     // Runtime state for the evaluate call.
-    regions         []*region
-    regionsByDigest map[digest.Digest]int
+    entrypoint *step
 
     // Controls pause.
     paused chan stepType
     mu     sync.Mutex
 
     // Attributes set when a thread is paused.
+    cancel context.CancelCauseFunc // invoked when the thread is resumed
     rCtx   *build.ResultHandle
     curPos digest.Digest
 
     stackTrace []int32
-    frames     map[int32]*frame
-}
-
-type region struct {
-    // dependsOn means this thread depends on the result of another thread.
-    dependsOn map[int]struct{}
-
-    // digests is a set of digests associated with this thread.
-    digests []digest.Digest
 }
 type stepType int
 
@@ -68,51 +60,210 @@ type stepType int
 const (
     stepContinue stepType = iota
     stepNext
+    stepIn
+    stepOut
 )
 
-func (t *thread) Evaluate(ctx Context, c gateway.Client, ref gateway.Reference, meta map[string][]byte, inputs build.Inputs, cfg common.Config) error {
-    if err := t.init(ctx, c, ref, meta, inputs); err != nil {
+func (t *thread) Evaluate(ctx Context, c gateway.Client, headRef gateway.Reference, meta map[string][]byte, inputs build.Inputs, cfg common.Config) error {
+    if err := t.init(ctx, c, headRef, meta, inputs); err != nil {
         return err
     }
     defer t.reset()
 
-    step := stepContinue
+    action := stepContinue
     if cfg.StopOnEntry {
-        step = stepNext
+        action = stepNext
     }
 
-    for {
-        if step == stepContinue {
-            t.setBreakpoints(ctx)
-        }
-
-        ref, pos, err := t.seekNext(ctx, step)
-
-        event := t.needsDebug(pos, step, err)
-        if event.Reason == "" {
-            return err
-        }
-
-        select {
-        case step = <-t.pause(ctx, ref, err, event):
-            if err != nil {
-                return err
-            }
-        case <-ctx.Done():
-            return context.Cause(ctx)
-        }
-    }
-}
+    var (
+        ref  gateway.Reference
+        next = t.entrypoint
+        err  error
+    )
+    for next != nil {
+        event := t.needsDebug(next, action, err)
+        if event.Reason != "" {
+            select {
+            case action = <-t.pause(ctx, ref, err, next, event):
+                // do nothing here
+            case <-ctx.Done():
+                return context.Cause(ctx)
+            }
+        }
+
+        if err != nil {
+            return err
+        }
+
+        if action == stepContinue {
+            t.setBreakpoints(ctx)
+        }
+        ref, next, err = t.seekNext(ctx, next, action)
+    }
+    return nil
+}
 
 func (t *thread) init(ctx Context, c gateway.Client, ref gateway.Reference, meta map[string][]byte, inputs build.Inputs) error {
     t.c = c
     t.ref = ref
     t.meta = meta
-    t.sourcePath = inputs.ContextPath
+
+    // Combine the dockerfile directory with the context path to find the
+    // real base path. The frontend will report the base path as the filename.
+    dir := path.Dir(inputs.DockerfilePath)
+    if !path.IsAbs(dir) {
+        dir = path.Join(inputs.ContextPath, dir)
+    }
+    t.sourcePath = dir
 
     if err := t.getLLBState(ctx); err != nil {
         return err
     }
-    return t.createRegions()
+    return t.createProgram()
 }
type step struct {
// dgst holds the digest that should be resolved by this step.
// If this is empty, no digest should be resolved.
dgst digest.Digest
// in holds the next target when step in is used.
in *step
// out holds the next target when step out is used.
out *step
// next holds the next target when next is used.
next *step
// frame will hold the stack frame associated with this step.
frame *frame
}
func (t *thread) createProgram() error {
t.framesByDigest = make(map[digest.Digest]*frame)
t.frames = make(map[int32]*frame)
// Create the entrypoint by using the last node.
// We will build on top of that.
head := &step{
dgst: t.head,
frame: t.getStackFrame(t.head),
}
t.entrypoint = t.createBranch(head)
return nil
}
func (t *thread) createBranch(last *step) (first *step) {
first = last
for first.dgst != "" {
prev := &step{
// set to first temporarily until we determine
// if there are other inputs.
in: first,
// always first
next: first,
// exit point always matches the one set on first
out: first.out,
// always set to the same as next which is always first
frame: t.getStackFrame(first.dgst),
}
op := t.ops[first.dgst]
if len(op.Inputs) > 0 {
parent := t.determineParent(op)
for i := len(op.Inputs) - 1; i >= 0; i-- {
if i == parent {
// Skip the direct parent.
continue
}
inp := op.Inputs[i]
// Create a pseudo-step that acts as an exit point for this
// branch. This step exists so this branch has a place to go
// after it has finished that will advance to the next
// instruction.
exit := &step{
in: prev.in,
next: prev.next,
out: prev.out,
frame: prev.frame,
}
head := &step{
dgst: digest.Digest(inp.Digest),
in: exit,
next: exit,
out: exit,
frame: t.getStackFrame(digest.Digest(inp.Digest)),
}
prev.in = t.createBranch(head)
}
// Set the digest of the parent input on the first step associated
// with this step if it exists.
if parent >= 0 {
prev.dgst = digest.Digest(op.Inputs[parent].Digest)
}
}
// New first is the step we just created.
first = prev
}
return first
}
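
The step list built by `createBranch` above is essentially a linked program counter: `next` advances within the current branch, `in` descends into a dependency branch, and `out` jumps to the branch's exit point. A toy standalone sketch of walking such a list (the type here is a simplified stand-in, not the buildx `step`):

```go
package main

import "fmt"

// step mirrors the shape used above: "next" advances within the current
// branch, "in" descends into a dependency, "out" exits the branch.
type step struct {
	name          string
	in, out, next *step
}

// walkNext follows only the "next" pointers, which corresponds to the
// debugger's step-over behavior.
func walkNext(s *step) []string {
	var names []string
	for s != nil {
		names = append(names, s.name)
		s = s.next
	}
	return names
}

func main() {
	end := &step{name: "RUN build"}
	mid := &step{name: "COPY . .", next: end}
	start := &step{name: "FROM golang", next: mid}
	fmt.Println(walkNext(start)) // [FROM golang COPY . . RUN build]
}
```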
func (t *thread) getStackFrame(dgst digest.Digest) *frame {
if f := t.framesByDigest[dgst]; f != nil {
return f
}
f := &frame{
op: t.ops[dgst],
}
f.Id = int(t.idPool.Get())
if meta, ok := t.def.Metadata[dgst]; ok {
f.setNameFromMeta(meta)
}
if loc, ok := t.def.Source.Locations[string(dgst)]; ok {
f.fillLocation(t.def, loc, t.sourcePath)
}
t.frames[int32(f.Id)] = f
return f
}
func (t *thread) determineParent(op *pb.Op) int {
// Another section should have already checked this but
// double check here just in case we forget somewhere else.
// The rest of this method assumes there's at least one parent
// at index zero.
n := len(op.Inputs)
if n == 0 {
return -1
}
switch op := op.Op.(type) {
case *pb.Op_Exec:
for _, m := range op.Exec.Mounts {
if m.Dest == "/" {
return int(m.Input)
}
}
return -1
case *pb.Op_File:
// Use the first input where the index is from one of the inputs.
for _, action := range op.File.Actions {
if input := int(action.Input); input >= 0 && input < n {
return input
}
}
// Default to having no parent.
return -1
default:
// Default to index zero.
return 0
}
}

 func (t *thread) reset() {
@@ -123,23 +274,25 @@ func (t *thread) reset() {
     t.ops = nil
 }
 
-func (t *thread) needsDebug(target digest.Digest, step stepType, err error) (e dap.StoppedEventBody) {
+func (t *thread) needsDebug(cur *step, step stepType, err error) (e dap.StoppedEventBody) {
     if err != nil {
         e.Reason = "exception"
         e.Description = "Encountered an error during result evaluation"
-    } else if step == stepNext && target != "" {
-        e.Reason = "step"
-    } else if step == stepContinue {
-        if id, ok := t.bps[target]; ok {
-            e.Reason = "breakpoint"
-            e.Description = "Paused on breakpoint"
-            e.HitBreakpointIds = []int{id}
+    } else if cur != nil {
+        if step != stepContinue {
+            e.Reason = "step"
+        } else if next := cur.in; next != nil {
+            if id, ok := t.bps[next.dgst]; ok {
+                e.Reason = "breakpoint"
+                e.Description = "Paused on breakpoint"
+                e.HitBreakpointIds = []int{id}
+            }
         }
     }
     return
 }
 
-func (t *thread) pause(c Context, ref gateway.Reference, err error, event dap.StoppedEventBody) <-chan stepType {
+func (t *thread) pause(c Context, ref gateway.Reference, err error, pos *step, event dap.StoppedEventBody) <-chan stepType {
     t.mu.Lock()
     defer t.mu.Unlock()
@@ -148,7 +301,6 @@ func (t *thread) pause(c Context, ref gateway.Reference, err error, event dap.St
     }
 
     t.paused = make(chan stepType, 1)
-    t.rCtx = build.NewResultHandle(c, t.c, ref, t.meta, err)
     if err != nil {
         var solveErr *errdefs.SolveError
         if errors.As(err, &solveErr) {
@@ -157,7 +309,14 @@ func (t *thread) pause(c Context, ref gateway.Reference, err error, event dap.St
             }
         }
     }
 
-    t.collectStackTrace()
+    ctx, cancel := context.WithCancelCause(c)
+    t.collectStackTrace(ctx, pos, ref)
+    t.cancel = cancel
+
+    if ref != nil || err != nil {
+        t.prepareResultHandle(c, ref, err)
+    }
 
     event.ThreadId = t.id
     c.C() <- &dap.StoppedEvent{
@@ -167,6 +326,27 @@ func (t *thread) pause(c Context, ref gateway.Reference, err error, event dap.St
     return t.paused
 }
func (t *thread) prepareResultHandle(c Context, ref gateway.Reference, err error) {
// Create a context for cancellations and make the cancel function
// block on the wait group.
var wg sync.WaitGroup
ctx, cancel := context.WithCancelCause(c)
t.cancel = func(cause error) {
defer wg.Wait()
cancel(cause)
}
t.rCtx = build.NewResultHandle(ctx, t.c, ref, t.meta, err)
// Start the attach. Use the context we created and perform it in
// a goroutine. We aren't necessarily assuming this will actually work.
wg.Add(1)
go func() {
defer wg.Done()
t.sh.Attach(ctx, t)
}()
}
 func (t *thread) Continue() {
     t.resume(stepContinue)
 }
@@ -175,6 +355,14 @@ func (t *thread) Next() {
     t.resume(stepNext)
 }
 
+func (t *thread) StepIn() {
+    t.resume(stepIn)
+}
+
+func (t *thread) StepOut() {
+    t.resume(stepOut)
+}
+
 func (t *thread) resume(step stepType) {
     t.mu.Lock()
     defer t.mu.Unlock()
@@ -189,13 +377,6 @@ func (t *thread) resume(step stepType) {
     t.paused = nil
 }
 
-func (t *thread) isPaused() bool {
-    t.mu.Lock()
-    defer t.mu.Unlock()
-    return t.paused != nil
-}
-
 func (t *thread) StackTrace() []dap.StackFrame {
     t.mu.Lock()
     defer t.mu.Unlock()
@@ -261,233 +442,92 @@ func (t *thread) setBreakpoints(ctx Context) {
     t.bps = t.breakpointMap.Intersect(ctx, t.def.Source, t.sourcePath)
 }
-func (t *thread) findBacklinks() map[digest.Digest]map[digest.Digest]struct{} {
-    backlinks := make(map[digest.Digest]map[digest.Digest]struct{})
-    for dgst := range t.ops {
-        backlinks[dgst] = make(map[digest.Digest]struct{})
-    }
-
-    for dgst, op := range t.ops {
-        for _, inp := range op.Inputs {
-            if digest.Digest(inp.Digest) == t.head {
-                continue
-            }
-            backlinks[digest.Digest(inp.Digest)][dgst] = struct{}{}
-        }
-    }
-    return backlinks
-}
-
-func (t *thread) createRegions() error {
-    // Find the links going from inputs to their outputs.
-    // This isn't represented in the LLB graph but we need it to ensure
-    // an op only has one child and whether we are allowed to visit a node.
-    backlinks := t.findBacklinks()
-
-    // Create distinct regions whenever we have any branch (inputs or outputs).
-    t.regions = []*region{}
-    t.regionsByDigest = map[digest.Digest]int{}
-
-    determineRegion := func(dgst digest.Digest, children map[digest.Digest]struct{}) {
-        if len(children) == 1 {
-            var cDgst digest.Digest
-            for d := range children {
-                cDgst = d
-            }
-
-            childOp := t.ops[cDgst]
-            if len(childOp.Inputs) == 1 {
-                // We have one child and our child has one input so we can be merged
-                // into the same region as our child.
-                region := t.regionsByDigest[cDgst]
-                t.regions[region].digests = append(t.regions[region].digests, dgst)
-                t.regionsByDigest[dgst] = region
-                return
-            }
-        }
-
-        // We will require a new region for this digest because
-        // we weren't able to merge it in within the existing regions.
-        next := len(t.regions)
-        t.regions = append(t.regions, &region{
-            digests:   []digest.Digest{dgst},
-            dependsOn: make(map[int]struct{}),
-        })
-        t.regionsByDigest[dgst] = next
-
-        // Mark each child as depending on this new region.
-        for child := range children {
-            region := t.regionsByDigest[child]
-            t.regions[region].dependsOn[next] = struct{}{}
-        }
-    }
-
-    canVisit := func(dgst digest.Digest) bool {
-        for dgst := range backlinks[dgst] {
-            if _, ok := t.regionsByDigest[dgst]; !ok {
-                // One of our outputs has not been categorized.
-                return false
-            }
-        }
-        return true
-    }
-
-    unvisited := []digest.Digest{t.head}
-    for len(unvisited) > 0 {
-        dgst := pop(&unvisited)
-        op := t.ops[dgst]
-
-        children := backlinks[dgst]
-        determineRegion(dgst, children)
-
-        // Determine which inputs we can now visit.
-        for _, inp := range op.Inputs {
-            indgst := digest.Digest(inp.Digest)
-            if canVisit(indgst) {
-                unvisited = append(unvisited, indgst)
-            }
-        }
-    }
-
-    // Reverse each of the digests so dependencies are first.
-    // It is currently in reverse topological order and it needs to be in
-    // topological order.
-    for _, r := range t.regions {
-        slices.Reverse(r.digests)
-    }
-
-    t.propagateRegionDependencies()
-    return nil
-}
-
-// propagateRegionDependencies will propagate the dependsOn attribute between
-// different regions to make dependency lookups easier. If A depends on B
-// and B depends on C, then A depends on C. But the algorithm before this will only
-// record direct dependencies.
-func (t *thread) propagateRegionDependencies() {
-    for _, r := range t.regions {
-        for {
-            n := len(r.dependsOn)
-            for i := range r.dependsOn {
-                for j := range t.regions[i].dependsOn {
-                    r.dependsOn[j] = struct{}{}
-                }
-            }
-
-            if n == len(r.dependsOn) {
-                break
-            }
-        }
-    }
-}
-
-func (t *thread) seekNext(ctx Context, step stepType) (gateway.Reference, digest.Digest, error) {
-    // If we're at the end, return no digest to signal that
-    // we should conclude debugging.
-    if t.curPos == t.head {
-        return nil, "", nil
-    }
-
-    target := t.head
-    switch step {
-    case stepNext:
-        target = t.nextDigest(nil)
-    case stepContinue:
-        target = t.continueDigest()
-    }
-
-    if target == "" {
-        return nil, "", nil
-    }
-    return t.seek(ctx, target)
-}
+func (t *thread) seekNext(ctx Context, from *step, action stepType) (gateway.Reference, *step, error) {
+    // If we're at the end, return no digest to signal that
+    // we should conclude debugging.
+    var target *step
+    switch action {
+    case stepNext:
+        target = from.next
+    case stepIn:
+        target = from.in
+    case stepOut:
+        target = from.out
+    case stepContinue:
+        target = t.continueDigest(from)
+    }
+    return t.seek(ctx, target)
+}
-func (t *thread) seek(ctx Context, target digest.Digest) (gateway.Reference, digest.Digest, error) {
-    ref, err := t.solve(ctx, target)
-    if err != nil {
-        return ref, "", err
-    }
-
-    if err = ref.Evaluate(ctx); err != nil {
-        var solveErr *errdefs.SolveError
-        if errors.As(err, &solveErr) {
-            if dt, err := solveErr.Op.MarshalVT(); err == nil {
-                t.curPos = digest.FromBytes(dt)
-            }
-        } else {
-            t.curPos = ""
-        }
-    } else {
-        t.curPos = target
-    }
-    return ref, t.curPos, err
-}
+func (t *thread) seek(ctx Context, target *step) (ref gateway.Reference, result *step, err error) {
+    if target != nil {
+        if target.dgst != "" {
+            ref, err = t.solve(ctx, target.dgst)
+            if err != nil {
+                return ref, nil, err
+            }
+        }
+        result = target
+    } else {
+        ref = t.ref
+    }
+
+    if ref != nil {
+        if err = ref.Evaluate(ctx); err != nil {
+            // If this is not a solve error, do not return the
+            // reference and target step.
+            var solveErr *errdefs.SolveError
+            if errors.As(err, &solveErr) {
+                if dt, err := solveErr.Op.MarshalVT(); err == nil {
+                    // Find the error digest.
+                    errDgst := digest.FromBytes(dt)
+
+                    // Iterate from the first step to find the one
+                    // we failed on.
+                    result = t.entrypoint
+                    for result != nil {
+                        next := result.in
+                        if next != nil && next.dgst == errDgst {
+                            break
+                        }
+                        result = next
+                    }
+                }
+            } else {
+                return nil, nil, err
+            }
+        }
+    }
+    return ref, result, err
+}
-func (t *thread) nextDigest(fn func(digest.Digest) bool) digest.Digest {
-    isValid := func(dgst digest.Digest) bool {
-        // Skip this digest because it has no locations in the source file.
-        if loc, ok := t.def.Source.Locations[string(dgst)]; !ok || len(loc.Locations) == 0 {
-            return false
-        }
-        // If a custom function has been set for validation, use it.
-        return fn == nil || fn(dgst)
-    }
-
-    // If we have no position, automatically select the first step.
-    if t.curPos == "" {
-        r := t.regions[len(t.regions)-1]
-        if isValid(r.digests[0]) {
-            return r.digests[0]
-        }
-        // We cannot use the first position. Treat the first position as our
-        // current position so we can iterate.
-        t.curPos = r.digests[0]
-    }
-
-    // Look up the region associated with our current position.
-    // If we can't find it, just pretend we're using step continue.
-    region, ok := t.regionsByDigest[t.curPos]
-    if !ok {
-        return t.head
-    }
-
-    r := t.regions[region]
-    i := slices.Index(r.digests, t.curPos) + 1
-    for {
-        if i >= len(r.digests) {
-            if region <= 0 {
-                // We're at the end of our execution. Should have been caught by
-                // t.head == t.curPos.
-                return ""
-            }
-            region--
-            r = t.regions[region]
-            i = 0
-            continue
-        }
-
-        next := r.digests[i]
-        if !isValid(next) {
-            i++
-            continue
-        }
-        return next
-    }
-}
-
-func (t *thread) continueDigest() digest.Digest {
-    if len(t.bps) == 0 {
-        return t.head
-    }
-
-    isValid := func(dgst digest.Digest) bool {
-        _, ok := t.bps[dgst]
-        return ok
-    }
-    return t.nextDigest(isValid)
-}
+func (t *thread) continueDigest(from *step) *step {
+    if len(t.bps) == 0 {
+        return nil
+    }
+
+    isBreakpoint := func(dgst digest.Digest) bool {
+        if dgst == "" {
+            return false
+        }
+        _, ok := t.bps[dgst]
+        return ok
+    }
+
+    next := func(s *step) *step {
+        cur := s.in
+        for cur != nil {
+            next := cur.in
+            if next != nil && isBreakpoint(next.dgst) {
+                return cur
+            }
+            cur = next
+        }
+        return nil
+    }
+    return next(from)
+}
 func (t *thread) solve(ctx context.Context, target digest.Digest) (gateway.Reference, error) {
@@ -520,38 +560,26 @@ func (t *thread) releaseState() {
         t.rCtx.Done()
         t.rCtx = nil
     }
-    t.stackTrace = nil
-    t.frames = nil
-}
 
-func (t *thread) collectStackTrace() {
-    region := t.regionsByDigest[t.curPos]
-    r := t.regions[region]
-
-    digests := r.digests
-    if index := slices.Index(digests, t.curPos); index >= 0 {
-        digests = digests[:index+1]
-    }
-
-    t.frames = make(map[int32]*frame)
-    for i := len(digests) - 1; i >= 0; i-- {
-        dgst := digests[i]
-
-        frame := &frame{}
-        frame.Id = int(t.idPool.Get())
-
-        if meta, ok := t.def.Metadata[dgst]; ok {
-            frame.setNameFromMeta(meta)
-        }
-        if loc, ok := t.def.Source.Locations[string(dgst)]; ok {
-            frame.fillLocation(t.def, loc, t.sourcePath)
-        }
-        if op := t.ops[dgst]; op != nil {
-            frame.fillVarsFromOp(op, t.variables)
-        }
-        t.stackTrace = append(t.stackTrace, int32(frame.Id))
-        t.frames[int32(frame.Id)] = frame
-    }
-}
+    for _, f := range t.frames {
+        f.ResetVars()
+    }
+
+    if t.cancel != nil {
+        t.cancel(context.Canceled)
+        t.cancel = nil
+    }
+
+    t.stackTrace = t.stackTrace[:0]
+    t.variables.Reset()
+}
+
+func (t *thread) collectStackTrace(ctx context.Context, pos *step, ref gateway.Reference) {
+    for pos != nil {
+        frame := pos.frame
+        frame.ExportVars(ctx, ref, t.variables)
+
+        t.stackTrace = append(t.stackTrace, int32(frame.Id))
+        pos, ref = pos.out, nil
+    }
+}
@@ -566,9 +594,3 @@ func (t *thread) hasFrame(id int) bool {
     _, ok := t.frames[int32(id)]
     return ok
 }
-
-func pop[S ~[]E, E any](s *S) E {
-    e := (*s)[len(*s)-1]
-    *s = (*s)[:len(*s)-1]
-    return e
-}

View File

@@ -1,20 +1,28 @@
 package dap
 
 import (
+    "context"
     "fmt"
+    "io/fs"
+    "path"
     "path/filepath"
     "strconv"
     "strings"
     "sync"
     "sync/atomic"
+    "time"
+    "unicode/utf8"
 
     "github.com/google/go-dap"
     "github.com/moby/buildkit/client/llb"
+    gateway "github.com/moby/buildkit/frontend/gateway/client"
     "github.com/moby/buildkit/solver/pb"
+    "github.com/tonistiigi/fsutil/types"
 )
 
 type frame struct {
     dap.StackFrame
+    op     *pb.Op
     scopes []dap.Scope
 }
@@ -37,6 +45,7 @@ func (f *frame) fillLocation(def *llb.Definition, loc *pb.Locations, ws string)
             info := def.Source.Infos[l.SourceIndex]
             f.Source = &dap.Source{
+                Name: path.Base(info.Filename),
                 Path: filepath.Join(ws, info.Filename),
             }
             return
@@ -44,27 +53,36 @@ func (f *frame) fillLocation(def *llb.Definition, loc *pb.Locations, ws string)
         }
     }
-func (f *frame) fillVarsFromOp(op *pb.Op, refs *variableReferences) {
-    f.scopes = []dap.Scope{
-        {
-            Name:             "Arguments",
-            PresentationHint: "arguments",
-            VariablesReference: refs.New(func() []dap.Variable {
-                var vars []dap.Variable
-                if op.Platform != nil {
-                    vars = append(vars, platformVars(op.Platform, refs))
-                }
-
-                switch op := op.Op.(type) {
-                case *pb.Op_Exec:
-                    vars = append(vars, execOpVars(op.Exec, refs))
-                }
-                return vars
-            }),
-        },
-    }
-}
+func (f *frame) ExportVars(ctx context.Context, ref gateway.Reference, refs *variableReferences) {
+    f.fillVarsFromOp(f.op, refs)
+    if ref != nil {
+        f.fillVarsFromResult(ctx, ref, refs)
+    }
+}
func (f *frame) ResetVars() {
f.scopes = nil
}
func (f *frame) fillVarsFromOp(op *pb.Op, refs *variableReferences) {
f.scopes = append(f.scopes, dap.Scope{
Name: "Arguments",
PresentationHint: "arguments",
VariablesReference: refs.New(func() []dap.Variable {
var vars []dap.Variable
if op.Platform != nil {
vars = append(vars, platformVars(op.Platform, refs))
}
switch op := op.Op.(type) {
case *pb.Op_Exec:
vars = append(vars, execOpVars(op.Exec, refs))
}
return vars
}),
})
}
 func platformVars(platform *pb.Platform, refs *variableReferences) dap.Variable {
     return dap.Variable{
         Name: "platform",
@@ -154,7 +172,152 @@ func execOpVars(exec *pb.ExecOp, refs *variableReferences) dap.Variable {
     }
 }
func (f *frame) fillVarsFromResult(ctx context.Context, ref gateway.Reference, refs *variableReferences) {
f.scopes = append(f.scopes, dap.Scope{
Name: "File Explorer",
PresentationHint: "locals",
VariablesReference: refs.New(func() []dap.Variable {
return fsVars(ctx, ref, "/", refs)
}),
Expensive: true,
})
}
func fsVars(ctx context.Context, ref gateway.Reference, path string, vars *variableReferences) []dap.Variable {
files, err := ref.ReadDir(ctx, gateway.ReadDirRequest{
Path: path,
})
if err != nil {
return []dap.Variable{
{
Name: "error",
Value: err.Error(),
},
}
}
paths := make([]dap.Variable, len(files))
for i, file := range files {
stat := statf(file)
fv := dap.Variable{
Name: file.Path,
}
fullpath := filepath.Join(path, file.Path)
if file.IsDir() {
fv.Name += "/"
fv.VariablesReference = vars.New(func() []dap.Variable {
dvar := dap.Variable{
Name: ".",
Value: statf(file),
VariablesReference: vars.New(func() []dap.Variable {
return statVars(file)
}),
}
return append([]dap.Variable{dvar}, fsVars(ctx, ref, fullpath, vars)...)
})
fv.Value = ""
} else {
fv.Value = stat
fv.VariablesReference = vars.New(func() (dvars []dap.Variable) {
if fs.FileMode(file.Mode).IsRegular() {
// Regular file so display a small blurb of the file.
dvars = append(dvars, fileVars(ctx, ref, fullpath)...)
}
return append(dvars, statVars(file)...)
})
}
paths[i] = fv
}
return paths
}
func statf(st *types.Stat) string {
mode := fs.FileMode(st.Mode)
modTime := time.Unix(0, st.ModTime).UTC()
return fmt.Sprintf("%s %d:%d %s", mode, st.Uid, st.Gid, modTime.Format("Jan 2 15:04:05 2006"))
}
func fileVars(ctx context.Context, ref gateway.Reference, fullpath string) []dap.Variable {
b, err := ref.ReadFile(ctx, gateway.ReadRequest{
Filename: fullpath,
Range: &gateway.FileRange{Length: 512},
})
var (
data string
dataErr error
)
if err != nil {
data = err.Error()
} else if isBinaryData(b) {
data = "binary data"
} else {
if len(b) == 512 {
// Get the remainder of the file.
remaining, err := ref.ReadFile(ctx, gateway.ReadRequest{
Filename: fullpath,
Range: &gateway.FileRange{Offset: 512},
})
if err != nil {
dataErr = err
} else {
b = append(b, remaining...)
}
}
data = string(b)
}
dvars := []dap.Variable{
{
Name: "data",
Value: data,
},
}
if dataErr != nil {
dvars = append(dvars, dap.Variable{
Name: "dataError",
Value: dataErr.Error(),
})
}
return dvars
}
func statVars(st *types.Stat) (vars []dap.Variable) {
if st.Linkname != "" {
vars = append(vars, dap.Variable{
Name: "linkname",
Value: st.Linkname,
})
}
mode := fs.FileMode(st.Mode)
modTime := time.Unix(0, st.ModTime).UTC()
vars = append(vars, []dap.Variable{
{
Name: "mode",
Value: mode.String(),
},
{
Name: "uid",
Value: strconv.FormatUint(uint64(st.Uid), 10),
},
{
Name: "gid",
Value: strconv.FormatUint(uint64(st.Gid), 10),
},
{
Name: "mtime",
Value: modTime.Format("Jan 2 15:04:05 2006"),
},
}...)
return vars
}
 func (f *frame) Scopes() []dap.Scope {
+    if f.scopes == nil {
+        return []dap.Scope{}
+    }
     return f.scopes
 }
@@ -205,6 +368,34 @@ func (v *variableReferences) Reset() {
     v.nextID.Store(0)
 }
// isBinaryData uses heuristics to determine if the file
// is binary. Algorithm taken from this blog post:
// https://eli.thegreenplace.net/2011/10/19/perls-guess-if-file-is-text-or-binary-implemented-in-python/
func isBinaryData(b []byte) bool {
odd := 0
for i := 0; i < len(b); i++ {
c := b[i]
if c == 0 {
return true
}
isHighBit := c&128 > 0
if !isHighBit {
if c < 32 && c != '\n' && c != '\t' {
odd++
}
} else {
r, sz := utf8.DecodeRune(b[i:])
if r != utf8.RuneError && sz > 1 {
i += sz - 1
continue
}
odd++
}
}
return float64(odd)/float64(len(b)) > .3
}
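
The `isBinaryData` heuristic above can be exercised on its own: a NUL byte means binary immediately, and otherwise a file counts as binary when more than 30% of its bytes are control characters or invalid UTF-8. A self-contained reimplementation for illustration (decoding at `b[i:]` so multi-byte runes are checked at the current index):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// isBinaryData reimplements the heuristic above: NUL means binary,
// otherwise "binary" means >30% odd bytes (control chars or bad UTF-8).
func isBinaryData(b []byte) bool {
	odd := 0
	for i := 0; i < len(b); i++ {
		c := b[i]
		if c == 0 {
			return true
		}
		if c&128 == 0 {
			if c < 32 && c != '\n' && c != '\t' {
				odd++
			}
			continue
		}
		if r, sz := utf8.DecodeRune(b[i:]); r != utf8.RuneError && sz > 1 {
			i += sz - 1
			continue
		}
		odd++
	}
	return float64(odd)/float64(len(b)) > .3
}

func main() {
	fmt.Println(isBinaryData([]byte("hello, world\n")))              // false
	fmt.Println(isBinaryData([]byte{0x7f, 0x45, 0x4c, 0x46, 0x00})) // true: contains a NUL byte
}
```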
 func brief(s string) string {
     if len(s) >= 64 {
         return s[:60] + " ..."

View File

@@ -43,6 +43,7 @@ title: Bake standard library functions
 | `greaterthan` | Returns true if and only if the second number is greater than the first. |
 | `greaterthanorequalto` | Returns true if and only if the second number is greater than or equal to the first. |
 | `hasindex` | Returns true if the given collection can be indexed with the given key without producing an error, or false otherwise. |
+| `homedir` | Returns the current user's home directory. |
 | `indent` | Adds a given number of spaces after each newline character in the given string. |
 | `index` | Returns the element with the given key from the given collection, or raises an error if there is no such element. |
 | `indexof` | Finds the element index for a given value in a list. |

View File

@ -11,72 +11,38 @@ Many [popular editors](https://microsoft.github.io/debug-adapter-protocol/implem
- Pause on exception.
- Set breakpoints on instructions.
- Step next and continue.
- Open terminal in an intermediate container image.
- File explorer.

## Limitations

- The debugger cannot differentiate between identical `FROM` directives.
- Invalid `args` in launch request may not produce an error in the UI.
- Does not support arbitrary pausing.
- Output is always the plain text printer.
- File explorer does not work when pausing on an exception.

## Future Improvements

- Support for Bake.
- Backwards stepping.
- Better UI for errors with invalid arguments.

## We would like feedback on

- Step/pause locations.
- Variable inspections.
- Additional information that would be helpful while debugging.
- Annoyances or difficulties with the current implementation.

### Step/pause Locations

Execution is paused **before** the step has been executed. Due to the way Dockerfiles are written, this sometimes creates some unclear visuals regarding where the pause happened.

For the last command in a stage, step **next** will highlight the same instruction twice. One of these is before the execution and the second is after. For every other command, they are only highlighted before the command is executed. It is not currently possible to set a breakpoint at the end of a stage. You must set the breakpoint on the last step and then use step **next**.

When a command has multiple parents, step **into** will step into one of the parents. Step **out** will then return from that stage. This will continue until there are no additional parents. There is currently no way to tell the difference between which parents have executed and which ones have not.

### Variable Inspections


@@ -368,6 +368,7 @@ You can override the following fields:
* `args`
* `cache-from`
* `cache-to`
* `call`
* `context`
* `dockerfile`
* `entitlements`


@@ -75,13 +75,15 @@ The following [launch request arguments](https://microsoft.github.io/debug-adapt
Command line arguments may be passed to the debug adapter the same way they would be passed to the normal build command and they will set the value.
Launch request arguments that are set will override command line arguments if they are present.

A debug extension should include an `args` and `builder` entry in the launch configuration. These will modify the arguments passed to the binary for the tool invocation.
`builder` will add `--builder <arg>` directly after the executable and `args` will append to the end of the tool invocation.

For example, a launch configuration in Visual Studio Code with the following:

```json
{
  "args": ["--build-arg", "FOO=AAA"],
  "builder": ["mybuilder"]
}
```

This should cause the debug adapter to be invoked as `docker buildx --builder mybuilder dap build --build-arg FOO=AAA`.


@@ -13,8 +13,9 @@ Disk usage

|:------------------------|:---------|:--------|:-----------------------------------------|
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| [`--filter`](#filter) | `filter` | | Provide filter values |
| [`--format`](#format) | `string` | | Format the output |
| [`--verbose`](#verbose) | `bool` | | Shorthand for `--format=pretty` |

<!---MARKER_GEN_END-->
@@ -50,7 +51,7 @@ If `RECLAIMABLE` is false, the `docker buildx du prune` command won't delete
the record, even if you use `--all`. That's because the record is actively in
use by some component of the builder.

The asterisks (\*) in the default output format indicate the following:

- An asterisk next to an ID (`zu7m6evdpebh5h8kfkpw9dlf2*`) indicates that the record
is mutable. The size of the record may change, or another build can take ownership of
@@ -61,33 +62,156 @@ The asterisks (\*) in the default output format indicate the following:
If you prune such a record then you will lose build cache but only metadata
will be deleted as the image still needs the actual storage layers.
### <a name="filter"></a> Provide filter values (--filter)
Same as [`buildx prune --filter`](buildx_prune.md#filter).
### <a name="format"></a> Format the output (--format)
The formatting options (`--format`) pretty-prints usage information output
using a Go template.
Valid placeholders for the Go template are:
* `.ID`
* `.Parents`
* `.CreatedAt`
* `.Mutable`
* `.Reclaimable`
* `.Shared`
* `.Size`
* `.Description`
* `.UsageCount`
* `.LastUsedAt`
* `.Type`
When using the `--format` option, the `du` command will either output the data
exactly as the template declares or, when using the `table` directive, include
column headers as well.
The `pretty` format is useful for inspecting the disk usage records in more
detail. It shows the mutable and shared states more clearly, as well as
additional information about the corresponding layer:
```console
$ docker buildx du --format=pretty
...
ID: 6wqu0v6hjdwvhh8yjozrepaof
Parents:
- bqx15bcewecz4wcg14b7iodvp
Created at: 2025-06-12 15:44:02.715795569 +0000 UTC
Mutable: false
Reclaimable: true
Shared: true
Size: 1.653GB
Description: [build-base 4/4] COPY . .
Usage count: 1
Last used: 2 months ago
Type: regular

Shared: 35.57GB
Private: 97.94GB
Reclaimable: 131.5GB
Total: 133.5GB
```
The following example uses a template without headers and outputs the
`ID` and `Size` entries separated by a colon (`:`):
```console
$ docker buildx du --format "{{.ID}}: {{.Size}}"
6wqu0v6hjdwvhh8yjozrepaof: 1.653GB
4m8061kctvjyh9qleus8rgpgx: 1.723GB
fcm9mlz2641u8r5eicjqdhy1l: 1.841GB
z2qu1swvo3afzd9mhihi3l5k0: 1.873GB
nmi6asc00aa3ja6xnt6o7wbrr: 2.027GB
0qlam41jxqsq6i27yqllgxed3: 2.495GB
3w9qhzzskq5jc262snfu90bfz: 2.617GB
```
The following example uses a `table` template and outputs the `ID` and
`Description`:
```console
$ docker buildx du --format "table {{.ID}} {{.Description}}"
ID DESCRIPTION
03bbhchaib8cygqs68um6hfnl [binaries-linux 2/5] LINK COPY --link --from=binfmt-filter /out/ /
2h8un0tyg57oj64xvbas6mzea [cni-plugins-export 2/4] LINK COPY --link --from=cni-plugins /opt/cni/bin/loopback /buildkit-cni-loopback
evckox33t07ob9dmollhn4h4j [cni-plugins-export 3/4] LINK COPY --link --from=cni-plugins /opt/cni/bin/host-local /buildkit-cni-host-local
jlxzwcw6xaomxj8irerow9bhb [binaries-linux 4/5] LINK COPY --link --from=buildctl /usr/bin/buildctl /
ov2oetgebkhpsw39rv1sbh5w1 [buildkit-linux 1/1] LINK COPY --link --from=binaries / /usr/bin/
ruoczhyq25n5v9ld7n231zalx [binaries-linux 3/5] LINK COPY --link --from=cni-plugins-export-squashed / /
ax7cov6kizxi9ufvcwsef4occ* local source for context
```
JSON output is also supported and will print as newline-delimited JSON:
```console
$ docker buildx du --format=json
{"CreatedAt":"2025-07-29T12:36:01Z","Description":"pulled from docker.io/library/rust:1.85.1-bookworm@sha256:e51d0265072d2d9d5d320f6a44dde6b9ef13653b035098febd68cce8fa7c0bc4","ID":"ic1gfidvev5nciupzz53alel4","LastUsedAt":"2025-07-29T12:36:01Z","Mutable":false,"Parents":["hmpdhm4sjrfpmae4xm2y3m0ra"],"Reclaimable":true,"Shared":false,"Size":"829889526","Type":"regular","UsageCount":1}
{"CreatedAt":"2025-08-05T09:24:09Z","Description":"pulled from docker.io/library/node:22@sha256:3218f0d1b9e4b63def322e9ae362d581fbeac1ef21b51fc502ef91386667ce92","ID":"jsw7fx09l5zsda3bri1z4mwk5","LastUsedAt":"2025-08-05T09:24:09Z","Mutable":false,"Parents":["098jsj5ebbv1w47ikqigeuurs"],"Reclaimable":true,"Shared":true,"Size":"829898832","Type":"regular","UsageCount":1}
```
You can use `jq` to pretty-print the JSON output:
```console
$ docker buildx du --format=json | jq .
{
  "CreatedAt": "2025-07-29T12:36:01Z",
  "Description": "pulled from docker.io/library/rust:1.85.1-bookworm@sha256:e51d0265072d2d9d5d320f6a44dde6b9ef13653b035098febd68cce8fa7c0bc4",
  "ID": "ic1gfidvev5nciupzz53alel4",
  "LastUsedAt": "2025-07-29T12:36:01Z",
  "Mutable": false,
  "Parents": [
    "hmpdhm4sjrfpmae4xm2y3m0ra"
  ],
  "Reclaimable": true,
  "Shared": false,
  "Size": "829889526",
  "Type": "regular",
  "UsageCount": 1
}
{
  "CreatedAt": "2025-08-05T09:24:09Z",
  "Description": "pulled from docker.io/library/node:22@sha256:3218f0d1b9e4b63def322e9ae362d581fbeac1ef21b51fc502ef91386667ce92",
  "ID": "jsw7fx09l5zsda3bri1z4mwk5",
  "LastUsedAt": "2025-08-05T09:24:09Z",
  "Mutable": false,
  "Parents": [
    "098jsj5ebbv1w47ikqigeuurs"
  ],
  "Reclaimable": true,
  "Shared": true,
  "Size": "829898832",
  "Type": "regular",
  "UsageCount": 1
}
```
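The `--format` placeholders use Go's standard `text/template` syntax. As a rough illustration only (not buildx's actual implementation), evaluating such a template against a record with the fields listed above looks like this; `Record` and `renderDu` are hypothetical names for this sketch:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Record mirrors a small subset of the fields exposed to --format templates.
type Record struct {
	ID   string
	Size string
}

// renderDu applies a Go template (same syntax as --format) to each record,
// one line per record.
func renderDu(format string, recs []Record) (string, error) {
	tmpl, err := template.New("du").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	for _, r := range recs {
		if err := tmpl.Execute(&buf, r); err != nil {
			return "", err
		}
		buf.WriteByte('\n')
	}
	return buf.String(), nil
}

func main() {
	recs := []Record{
		{ID: "6wqu0v6hjdwvhh8yjozrepaof", Size: "1.653GB"},
		{ID: "4m8061kctvjyh9qleus8rgpgx", Size: "1.723GB"},
	}
	out, _ := renderDu("{{.ID}}: {{.Size}}", recs)
	fmt.Print(out)
}
```

The `table` directive and header generation are handled by the CLI on top of this basic template evaluation.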
### <a name="verbose"></a> Use verbose output (--verbose)

Shorthand for [`--format=pretty`](#format):

```console
$ docker buildx du --verbose
...
ID: 6wqu0v6hjdwvhh8yjozrepaof
Parents:
- bqx15bcewecz4wcg14b7iodvp
Created at: 2025-06-12 15:44:02.715795569 +0000 UTC
Mutable: false
Reclaimable: true
Shared: true
Size: 1.653GB
Description: [build-base 4/4] COPY . .
Usage count: 1
Last used: 2 months ago
Type: regular

Shared: 35.57GB
Private: 97.94GB
Reclaimable: 131.5GB
Total: 133.5GB
```
### <a name="builder"></a> Override the configured builder instance (--builder)
@@ -95,7 +219,7 @@ Total: 4.453GB
Use the `--builder` flag to inspect the disk usage of a particular builder.

```console
$ docker buildx du --builder mybuilder
ID                          RECLAIMABLE SIZE    LAST ACCESSED
g41agepgdczekxg2mtw0dujsv*  true        1.312GB 47 hours ago
e6ycrsa0bn9akigqgzu0sc6kr   true        318MB   47 hours ago


@@ -9,17 +9,17 @@ Remove build cache

### Options

| Name | Type | Default | Description |
|:--------------------------------------|:---------|:--------|:-------------------------------------------------------|
| [`-a`](#all), [`--all`](#all) | `bool` | | Include internal/frontend images |
| [`--builder`](#builder) | `string` | | Override the configured builder instance |
| `-D`, `--debug` | `bool` | | Enable debug logging |
| [`--filter`](#filter) | `filter` | | Provide filter values |
| `-f`, `--force` | `bool` | | Do not prompt for confirmation |
| [`--max-used-space`](#max-used-space) | `bytes` | `0` | Maximum amount of disk space allowed to keep for cache |
| [`--min-free-space`](#min-free-space) | `bytes` | `0` | Target amount of free disk space after pruning |
| [`--reserved-space`](#reserved-space) | `bytes` | `0` | Amount of disk space always allowed to keep for cache |
| `--verbose` | `bool` | | Provide a more verbose output |

<!---MARKER_GEN_END-->
@@ -28,24 +28,89 @@ Remove build cache

Clears the build cache of the selected builder.

## Examples

### <a name="all"></a> Include internal/frontend images (--all)

The `--all` flag allows clearing internal helper images and frontend images
set using the `#syntax=` directive or the `BUILDKIT_SYNTAX` build argument.
### <a name="filter"></a> Provide filter values (--filter)
You can finely control which cache records to delete using the `--filter` flag.
The filter format is in the form of `<key><op><value>`, known as selectors. All
selectors must match the target object for the filter to be true. We define the
operators `=` for equality, `!=` for not equal and `~=` for a regular
expression.
Valid filter keys are:

- `until` key to keep records that have been used in the last duration time.
  Value is a duration string, e.g. `24h` or `2h30m`, with allowable units of
  `(h)ours`, `(m)inutes` and `(s)econds`.
- `id` key to target a specific image ID.
- `parents` key to target records that are parents of the specified image ID.
  Multiple parent IDs are separated by a semicolon (`;`).
- `description` key to target records whose description contains the specified
  substring.
- `inuse` key to target records that are actively in use and therefore not
  reclaimable.
- `mutable` key to target records that are mutable.
- `immutable` key to target records that are immutable.
- `shared` key to target records that are shared with other resources,
  typically images.
- `private` key to target records that are not shared.
- `type` key to target records by type. Valid types are:
  - `internal`
  - `frontend`
  - `source.local`
  - `source.git.checkout`
  - `exec.cachemount`
  - `regular`
Examples:
```console
docker buildx prune --filter "until=24h"
docker buildx prune --filter "description~=golang"
docker buildx prune --filter "parents=dpetmoi6n0yqanxjqrbnofz9n;kgoj0q6g57i35gdyrv546alz7"
docker buildx prune --filter "type=source.local"
docker buildx prune --filter "type!=exec.cachemount"
```
> [!NOTE]
> Multiple `--filter` flags are ANDed together.
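To make the selector semantics concrete, here is a minimal, hypothetical sketch of how a single `<key><op><value>` selector could be evaluated against a record's attributes. The real filter handling in buildx/BuildKit differs in detail; `matchSelector` is an illustrative name, not an actual API:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matchSelector evaluates one selector of the form <key><op><value> against a
// record's attributes, supporting the operators described above:
// "=" (equality), "!=" (inequality), and "~=" (regular expression).
func matchSelector(selector string, attrs map[string]string) (bool, error) {
	// Check two-character operators before "=" so "type!=x" is not
	// misparsed as key "type!" with operator "=".
	for _, op := range []string{"!=", "~=", "="} {
		if i := strings.Index(selector, op); i > 0 {
			key, want := selector[:i], selector[i+len(op):]
			got := attrs[key]
			switch op {
			case "=":
				return got == want, nil
			case "!=":
				return got != want, nil
			case "~=":
				return regexp.MatchString(want, got)
			}
		}
	}
	return false, fmt.Errorf("invalid selector %q", selector)
}

func main() {
	rec := map[string]string{"type": "source.local", "description": "local source for context"}
	ok, _ := matchSelector("type=source.local", rec)
	fmt.Println(ok) // true
}
```

Multiple `--filter` flags then correspond to AND-ing the results of several such selector evaluations.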
### <a name="max-used-space"></a> Maximum amount of disk space allowed to keep for cache (--max-used-space)
The `--max-used-space` flag allows setting a maximum amount of disk space
that the build cache can use. If the cache is using more disk space than this
value, the least recently used cache records are deleted until the total
used space is less than or equal to the specified value.
The value is specified in bytes. You can use a human-readable memory string,
e.g. `128mb`, `2gb`, etc. Units are case-insensitive.
### <a name="min-free-space"></a> Target amount of free disk space after pruning (--min-free-space)
The `--min-free-space` flag allows setting a target amount of free disk space
that should be available after pruning. If the available disk space is less
than this value, the least recently used cache records are deleted until
the available free space is greater than or equal to the specified value.
The value is specified in bytes. You can use a human-readable memory string,
e.g. `128mb`, `2gb`, etc. Units are case-insensitive.
### <a name="reserved-space"></a> Amount of disk space always allowed to keep for cache (--reserved-space)
The `--reserved-space` flag allows setting an amount of disk space that
should always be kept for the build cache. If the available disk space is less
than this value, the least recently used cache records are deleted until
the available free space is greater than or equal to the specified value.
The value is specified in bytes. You can use a human-readable memory string,
e.g. `128mb`, `2gb`, etc. Units are case-insensitive.
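All three space flags accept such human-readable byte values. As an illustration of how these strings map to byte counts (buildx itself relies on `github.com/docker/go-units` for this; the simplified `parseBytes` below is only a sketch using decimal units):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBytes converts a case-insensitive memory string such as "128mb" or
// "2gb" into a byte count, using decimal (SI) multipliers.
func parseBytes(s string) (int64, error) {
	s = strings.ToLower(strings.TrimSpace(s))
	units := []struct {
		suffix string
		mul    int64
	}{
		{"gb", 1e9}, {"mb", 1e6}, {"kb", 1e3}, {"b", 1},
	}
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			n, err := strconv.ParseFloat(strings.TrimSuffix(s, u.suffix), 64)
			if err != nil {
				return 0, err
			}
			return int64(n * float64(u.mul)), nil
		}
	}
	// A bare number is interpreted as bytes.
	return strconv.ParseInt(s, 10, 64)
}

func main() {
	for _, in := range []string{"128mb", "2GB", "512"} {
		n, _ := parseBytes(in)
		fmt.Printf("%s -> %d bytes\n", in, n)
	}
}
```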
### <a name="builder"></a> Override the configured builder instance (--builder) ### <a name="builder"></a> Override the configured builder instance (--builder)
Same as [`buildx --builder`](buildx.md#builder). Same as [`buildx --builder`](buildx.md#builder).


@@ -18,6 +18,7 @@ import (
	"github.com/docker/buildx/util/confutil"
	"github.com/docker/buildx/util/imagetools"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/context/docker"
	"github.com/docker/cli/opts"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/image"
@@ -125,15 +126,38 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
	hc := &container.HostConfig{
		Privileged:    true,
		RestartPolicy: d.restartPolicy,
		Init:          &useInit,
	}
	mounts := []mount.Mount{
		{
			Type:   mount.TypeVolume,
			Source: d.Name + volumeStateSuffix,
			Target: confutil.DefaultBuildKitStateDir,
		},
	}
	// Mount WSL libraries if running in WSL environment and Docker context
	// is a local socket as requesting GPU on container builder creation
	// is not enough when generating the CDI specification for GPU devices.
	// https://github.com/docker/buildx/pull/3320
	if os.Getenv("WSL_DISTRO_NAME") != "" {
		if cm, err := d.ContextStore.GetMetadata(d.DockerContext); err == nil {
			if epm, err := docker.EndpointFromContext(cm); err == nil && isSocket(epm.Host) {
				wslLibPath := "/usr/lib/wsl"
				if st, err := os.Stat(wslLibPath); err == nil && st.IsDir() {
					mounts = append(mounts, mount.Mount{
						Type:     mount.TypeBind,
						Source:   wslLibPath,
						Target:   wslLibPath,
						ReadOnly: true,
					})
				}
			}
		}
	}
	hc.Mounts = mounts
	if d.netMode != "" {
		hc.NetworkMode = container.NetworkMode(d.netMode)
	}
@@ -531,3 +555,12 @@ func getBuildkitFlags(initConfig driver.InitConfig) []string {
	}
	return flags
}

func isSocket(addr string) bool {
	switch proto, _, _ := strings.Cut(addr, "://"); proto {
	case "unix", "npipe", "fd":
		return true
	default:
		return false
	}
}


@@ -3,13 +3,14 @@ package context
import (
	"net/url"
	"os"
	"os/user"
	"path/filepath"
	"runtime"
	"strings"

	"github.com/docker/cli/cli/command"
	"github.com/docker/cli/cli/context"
	"github.com/docker/cli/cli/context/store"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)
@@ -99,7 +100,7 @@ func (c *Endpoint) KubernetesConfig() clientcmd.ClientConfig {
func (c *EndpointMeta) ResolveDefault() (any, *store.EndpointTLSData, error) {
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		kubeconfig = filepath.Join(getHomeDir(), ".kube/config")
	}
	kubeEP, err := FromKubeConfig(kubeconfig, "", "")
	if err != nil {
@@ -156,7 +157,7 @@ func NewKubernetesConfig(configPath string) clientcmd.ClientConfig {
		if config := os.Getenv("KUBECONFIG"); config != "" {
			kubeConfig = config
		} else {
			kubeConfig = filepath.Join(getHomeDir(), ".kube/config")
		}
	}
	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
@@ -181,3 +182,28 @@ func ConfigFromEndpoint(endpointName string, s store.Reader) (clientcmd.ClientCo
	}
	return ConfigFromContext(endpointName, s)
}
// getHomeDir returns the home directory of the current user with the help of
// environment variables depending on the target operating system.
// Returned path should be used with "path/filepath" to form new paths.
//
// On non-Windows platforms, it falls back to nss lookups, if the home
// directory cannot be obtained from environment-variables.
//
// If linking statically with cgo enabled against glibc, ensure the
// osusergo build tag is used.
//
// If needing to do nss lookups, do not disable cgo or set osusergo.
//
// It's a local fork of [pkg/homedir].
//
// [pkg/homedir]: https://github.com/moby/moby/blob/v28.3.2/pkg/homedir/homedir.go#L9-L28
func getHomeDir() string {
	home, _ := os.UserHomeDir()
	if home == "" && runtime.GOOS != "windows" {
		if u, err := user.Current(); err == nil {
			return u.HomeDir
		}
	}
	return home
}


@@ -176,38 +176,36 @@ func (f *factory) processDriverOpts(deploymentName string, namespace string, cfg
	defaultLoad := false
	timeout := defaultTimeout

	deploymentOpt.Qemu.Image = bkimage.QemuImage

	loadbalance := LoadbalanceSticky
	var err error

	for k, v := range cfg.DriverOpts {
		switch {
		case k == "image":
			if v != "" {
				deploymentOpt.Image = v
			}
		case k == "namespace":
			namespace = v
		case k == "replicas":
			deploymentOpt.Replicas, err = strconv.Atoi(v)
			if err != nil {
				return nil, "", "", false, 0, err
			}
		case k == "requests.cpu":
			deploymentOpt.RequestsCPU = v
		case k == "requests.memory":
			deploymentOpt.RequestsMemory = v
		case k == "requests.ephemeral-storage":
			deploymentOpt.RequestsEphemeralStorage = v
		case k == "limits.cpu":
			deploymentOpt.LimitsCPU = v
		case k == "limits.memory":
			deploymentOpt.LimitsMemory = v
		case k == "limits.ephemeral-storage":
			deploymentOpt.LimitsEphemeralStorage = v
		case k == "rootless":
			deploymentOpt.Rootless, err = strconv.ParseBool(v)
			if err != nil {
				return nil, "", "", false, 0, err
@@ -215,26 +213,26 @@ func (f *factory) processDriverOpts(deploymentName string, namespace string, cfg
			if _, isImage := cfg.DriverOpts["image"]; !isImage {
				deploymentOpt.Image = bkimage.DefaultRootlessImage
			}
		case k == "schedulername":
			deploymentOpt.SchedulerName = v
		case k == "serviceaccount":
			deploymentOpt.ServiceAccountName = v
		case k == "nodeselector":
			deploymentOpt.NodeSelector, err = splitMultiValues(v, ",", "=")
			if err != nil {
				return nil, "", "", false, 0, errors.Wrap(err, "cannot parse node selector")
			}
		case k == "annotations":
			deploymentOpt.CustomAnnotations, err = splitMultiValues(v, ",", "=")
			if err != nil {
				return nil, "", "", false, 0, errors.Wrap(err, "cannot parse annotations")
			}
		case k == "labels":
			deploymentOpt.CustomLabels, err = splitMultiValues(v, ",", "=")
			if err != nil {
				return nil, "", "", false, 0, errors.Wrap(err, "cannot parse labels")
			}
		case k == "tolerations":
			ts := strings.Split(v, ";")
			deploymentOpt.Tolerations = []corev1.Toleration{}
			for i := range ts {
@@ -269,42 +267,46 @@ func (f *factory) processDriverOpts(deploymentName string, namespace string, cfg
				deploymentOpt.Tolerations = append(deploymentOpt.Tolerations, t)
			}
		case k == "loadbalance":
			switch v {
			case LoadbalanceSticky, LoadbalanceRandom:
				loadbalance = v
			default:
				return nil, "", "", false, 0, errors.Errorf("invalid loadbalance %q", v)
			}
		case k == "qemu.install":
			deploymentOpt.Qemu.Install, err = strconv.ParseBool(v)
			if err != nil {
				return nil, "", "", false, 0, err
			}
		case k == "qemu.image":
			if v != "" {
				deploymentOpt.Qemu.Image = v
			}
		case k == "buildkit-root-volume-memory":
			if v != "" {
				deploymentOpt.BuildKitRootVolumeMemory = v
			}
		case k == "default-load":
			defaultLoad, err = strconv.ParseBool(v)
			if err != nil {
				return nil, "", "", false, 0, err
			}
		case k == "timeout":
			timeout, err = time.ParseDuration(v)
			if err != nil {
				return nil, "", "", false, 0, errors.Wrap(err, "cannot parse timeout")
			}
		case strings.HasPrefix(k, "env."):
			envName := strings.TrimPrefix(k, "env.")
			if envName == "" {
				return nil, "", "", false, 0, errors.Errorf("invalid env option %q, expecting env.FOO=bar", k)
			}
			deploymentOpt.Env = append(deploymentOpt.Env, corev1.EnvVar{Name: envName, Value: v})
		default:
			return nil, "", "", false, 0, errors.Errorf("invalid driver option %s for driver %s", k, DriverName)
		}
	}

	return deploymentOpt, loadbalance, namespace, defaultLoad, timeout, nil
}


@@ -45,6 +45,7 @@ type DeploymentOpt struct {
	LimitsMemory           string
	LimitsEphemeralStorage string
	Platforms              []ocispecs.Platform
	Env                    []corev1.EnvVar // injected into main buildkitd container
}

const (
@@ -270,6 +271,10 @@ func NewDeployment(opt *DeploymentOpt) (d *appsv1.Deployment, c []*corev1.Config
		})
	}

	if len(opt.Env) > 0 {
		d.Spec.Template.Spec.Containers[0].Env = append(d.Spec.Template.Spec.Containers[0].Env, opt.Env...)
	}

	return
}


@@ -30,6 +30,7 @@ type InitConfig struct {
	Name           string
	EndpointAddr   string
	DockerAPI      dockerclient.APIClient
+	DockerContext  string
	ContextStore   store.Reader
	BuildkitdFlags []string
	Files          map[string][]byte

go.mod

@@ -6,9 +6,9 @@ require (
	github.com/Masterminds/semver/v3 v3.4.0
	github.com/Microsoft/go-winio v0.6.2
	github.com/aws/aws-sdk-go-v2/config v1.27.27
-	github.com/compose-spec/compose-go/v2 v2.7.2-0.20250703132301-891fce532a51 // main
+	github.com/compose-spec/compose-go/v2 v2.8.1
	github.com/containerd/console v1.0.5
-	github.com/containerd/containerd/v2 v2.1.3
+	github.com/containerd/containerd/v2 v2.1.4
	github.com/containerd/continuity v0.4.5
	github.com/containerd/errdefs v1.0.0
	github.com/containerd/log v0.1.0
@@ -16,9 +16,9 @@ require (
	github.com/creack/pty v1.1.24
	github.com/davecgh/go-spew v1.1.1
	github.com/distribution/reference v0.6.0
-	github.com/docker/cli v28.3.2+incompatible
+	github.com/docker/cli v28.4.0+incompatible
	github.com/docker/cli-docs-tool v0.10.0
-	github.com/docker/docker v28.3.2+incompatible
+	github.com/docker/docker v28.4.0+incompatible
	github.com/docker/go-units v0.5.0
	github.com/gofrs/flock v0.12.1
	github.com/google/go-dap v0.12.0
@@ -29,7 +29,7 @@ require (
	github.com/hashicorp/hcl/v2 v2.23.0
	github.com/in-toto/in-toto-golang v0.9.0
	github.com/mitchellh/hashstructure/v2 v2.0.2
-	github.com/moby/buildkit v0.23.0-rc1.0.20250618182037-9b91d20367db // master
+	github.com/moby/buildkit v0.24.0
	github.com/moby/go-archive v0.1.0
	github.com/moby/sys/atomicwriter v0.1.0
	github.com/moby/sys/mountinfo v0.7.2
@@ -43,8 +43,8 @@ require (
	github.com/serialx/hashring v0.0.0-20200727003509-22c0c7ab6b1b
	github.com/sirupsen/logrus v1.9.3
	github.com/spf13/cobra v1.9.1
-	github.com/spf13/pflag v1.0.6
-	github.com/stretchr/testify v1.10.0
+	github.com/spf13/pflag v1.0.7
+	github.com/stretchr/testify v1.11.0
	github.com/tonistiigi/fsutil v0.0.0-20250605211040-586307ad452f
	github.com/tonistiigi/go-csvvalue v0.0.0-20240814133006-030d3b2625d0
	github.com/tonistiigi/jaeger-ui-rest v0.0.0-20250408171107-3dd17559e117
@@ -55,8 +55,9 @@ require (
	go.opentelemetry.io/otel/metric v1.35.0
	go.opentelemetry.io/otel/sdk v1.35.0
	go.opentelemetry.io/otel/trace v1.35.0
+	go.yaml.in/yaml/v3 v3.0.4
	golang.org/x/mod v0.24.0
-	golang.org/x/sync v0.14.0
+	golang.org/x/sync v0.16.0
	golang.org/x/sys v0.33.0
	golang.org/x/term v0.31.0
	golang.org/x/text v0.24.0
@@ -64,7 +65,6 @@ require (
	google.golang.org/grpc v1.72.2
	google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1
	google.golang.org/protobuf v1.36.6
-	gopkg.in/yaml.v3 v3.0.1
	k8s.io/api v0.32.3
	k8s.io/apimachinery v0.32.3
	k8s.io/client-go v0.32.3
@@ -92,7 +92,7 @@ require (
	github.com/containerd/errdefs/pkg v0.3.0 // indirect
	github.com/containerd/ttrpc v1.2.7 // indirect
	github.com/containerd/typeurl/v2 v2.2.3 // indirect
-	github.com/cpuguy83/go-md2man/v2 v2.0.6 // indirect
+	github.com/cpuguy83/go-md2man/v2 v2.0.7 // indirect
	github.com/docker/distribution v2.8.3+incompatible // indirect
	github.com/docker/docker-credential-helpers v0.9.3 // indirect
	github.com/docker/go-connections v0.5.0 // indirect
@@ -166,6 +166,7 @@ require (
	google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a // indirect
	gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
	gopkg.in/inf.v0 v0.9.1 // indirect
+	gopkg.in/yaml.v3 v3.0.1 // indirect
	k8s.io/klog/v2 v2.130.1 // indirect
	k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect
	k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
@@ -182,3 +183,6 @@ exclude (
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
)
+
+// restore junctions to have os.ModeSymlink flag set on Windows: https://github.com/docker/buildx/issues/3221
+godebug winsymlink=0

go.sum

@@ -62,16 +62,16 @@ github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XL
github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004/go.mod h1:yMWuSON2oQp+43nFtAV/uvKQIFpSPerB57DCt9t8sSA=
github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb h1:EDmT6Q9Zs+SbUoc7Ik9EfrFqcylYqgPZ9ANSbTAntnE=
github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb/go.mod h1:ZjrT6AXHbDs86ZSdt/osfBi5qfexBrKUdONk989Wnk4=
-github.com/compose-spec/compose-go/v2 v2.7.2-0.20250703132301-891fce532a51 h1:AjI75N9METifYMZK7eNt8XIgY9Sryv+1w3XDA7X2vZQ=
-github.com/compose-spec/compose-go/v2 v2.7.2-0.20250703132301-891fce532a51/go.mod h1:Zow/3eYNOnl2T4qLGZEizf8d/ht1qfy09G7WGOSzGOY=
+github.com/compose-spec/compose-go/v2 v2.8.1 h1:27O4dzyhiS/UEUKp1zHOHCBWD1WbxGsYGMNNaSejTk4=
+github.com/compose-spec/compose-go/v2 v2.8.1/go.mod h1:veko/VB7URrg/tKz3vmIAQDaz+CGiXH8vZsW79NmAww=
github.com/containerd/cgroups/v3 v3.0.5 h1:44na7Ud+VwyE7LIoJ8JTNQOa549a8543BmzaJHo6Bzo=
github.com/containerd/cgroups/v3 v3.0.5/go.mod h1:SA5DLYnXO8pTGYiAHXz94qvLQTKfVM5GEVisn4jpins=
github.com/containerd/console v1.0.5 h1:R0ymNeydRqH2DmakFNdmjR2k0t7UPuiOV/N/27/qqsc=
github.com/containerd/console v1.0.5/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=
github.com/containerd/containerd/api v1.9.0 h1:HZ/licowTRazus+wt9fM6r/9BQO7S0vD5lMcWspGIg0=
github.com/containerd/containerd/api v1.9.0/go.mod h1:GhghKFmTR3hNtyznBoQ0EMWr9ju5AqHjcZPsSpTKutI=
-github.com/containerd/containerd/v2 v2.1.3 h1:eMD2SLcIQPdMlnlNF6fatlrlRLAeDaiGPGwmRKLZKNs=
-github.com/containerd/containerd/v2 v2.1.3/go.mod h1:8C5QV9djwsYDNhxfTCFjWtTBZrqjditQ4/ghHSYjnHM=
+github.com/containerd/containerd/v2 v2.1.4 h1:/hXWjiSFd6ftrBOBGfAZ6T30LJcx1dBjdKEeI8xucKQ=
+github.com/containerd/containerd/v2 v2.1.4/go.mod h1:8C5QV9djwsYDNhxfTCFjWtTBZrqjditQ4/ghHSYjnHM=
github.com/containerd/continuity v0.4.5 h1:ZRoN1sXq9u7V6QoHMcVWGhOwDFqZ4B9i5H6un1Wh0x4=
github.com/containerd/continuity v0.4.5/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
@@ -95,8 +95,9 @@ github.com/containerd/ttrpc v1.2.7 h1:qIrroQvuOL9HQ1X6KHe2ohc7p+HP/0VE6XPU7elJRq
github.com/containerd/ttrpc v1.2.7/go.mod h1:YCXHsb32f+Sq5/72xHubdiJRQY9inL4a4ZQrAbN1q9o=
github.com/containerd/typeurl/v2 v2.2.3 h1:yNA/94zxWdvYACdYO8zofhrTVuQY73fFU1y++dYSw40=
github.com/containerd/typeurl/v2 v2.2.3/go.mod h1:95ljDnPfD3bAbDJRugOiShd/DlAAsxGtUBhJxIn7SCk=
-github.com/cpuguy83/go-md2man/v2 v2.0.6 h1:XJtiaUW6dEEqVuZiMTn1ldk455QWwEIsMIJlo5vtkx0=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
+github.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo=
+github.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
@@ -108,15 +109,15 @@ github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5Qvfr
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI=
github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
-github.com/docker/cli v28.3.2+incompatible h1:mOt9fcLE7zaACbxW1GeS65RI67wIJrTnqS3hP2huFsY=
-github.com/docker/cli v28.3.2+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
+github.com/docker/cli v28.4.0+incompatible h1:RBcf3Kjw2pMtwui5V0DIMdyeab8glEw5QY0UUU4C9kY=
+github.com/docker/cli v28.4.0+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli-docs-tool v0.10.0 h1:bOD6mKynPQgojQi3s2jgcUWGp/Ebqy1SeCr9VfKQLLU=
github.com/docker/cli-docs-tool v0.10.0/go.mod h1:5EM5zPnT2E7yCLERZmrDA234Vwn09fzRHP4aX1qwp1U=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
-github.com/docker/docker v28.3.2+incompatible h1:wn66NJ6pWB1vBZIilP8G3qQPqHy5XymfYn5vsqeA5oA=
-github.com/docker/docker v28.3.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v28.4.0+incompatible h1:KVC7bz5zJY/4AZe/78BIvCnPsLaC9T/zh72xnlrTTOk=
+github.com/docker/docker v28.4.0+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.9.3 h1:gAm/VtF9wgqJMoxzT3Gj5p4AqIjCBS4wrsOh9yRqcz8=
github.com/docker/docker-credential-helpers v0.9.3/go.mod h1:x+4Gbw9aGmChi3qTLZj8Dfn0TD20M/fuWy0E5+WDeCo=
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c h1:lzqkGL9b3znc+ZUgi7FlLnqjQhcXxkNM/quxIjBVMD0=
@@ -254,8 +255,8 @@ github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZX
github.com/mitchellh/hashstructure/v2 v2.0.2 h1:vGKWl0YJqUNxE8d+h8f6NJLcCJrgbhC4NcD46KavDd4=
github.com/mitchellh/hashstructure/v2 v2.0.2/go.mod h1:MG3aRVU/N29oo/V/IhBX8GR/zz4kQkprJgF2EVszyDE=
github.com/mitchellh/mapstructure v0.0.0-20150613213606-2caf8efc9366/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
-github.com/moby/buildkit v0.23.0-rc1.0.20250618182037-9b91d20367db h1:ZzrDuG9G1A/RwJvuogNplxCEKsIUQh1CqEnqbOGFgKE=
-github.com/moby/buildkit v0.23.0-rc1.0.20250618182037-9b91d20367db/go.mod h1:v5jMDvQgUyidk3wu3NvVAAd5JJo83nfet9Gf/o0+EAQ=
+github.com/moby/buildkit v0.24.0 h1:qYfTl7W1SIJzWDIDCcPT8FboHIZCYfi++wvySi3eyFE=
+github.com/moby/buildkit v0.24.0/go.mod h1:4qovICAdR2H4C7+EGMRva5zgHW1gyhT4/flHI7F5F9k=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/go-archive v0.1.0 h1:Kk/5rdW/g+H8NHdJW2gsXyZ7UnzvJNOy6VKJqueWdcQ=
@@ -361,8 +362,9 @@ github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/jwalterweatherman v0.0.0-20141219030609-3d60171a6431/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v1.0.0/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
-github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/pflag v1.0.7 h1:vN6T9TfwStFPFM5XzjsvmzZkLuaLX+HS+0SeFLRgU6M=
+github.com/spf13/pflag v1.0.7/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v0.0.0-20150530192845-be5ff3e4840c/go.mod h1:A8kyI5cUJhb8N+3pkfONlcEcZbueH6nhAm0Fq7SrnBM=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -377,8 +379,8 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.11.0 h1:ib4sjIrwZKxE5u/Japgo/7SJV3PvgjGiRNAvTVGqQl8=
+github.com/stretchr/testify v1.11.0/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/theupdateframework/notary v0.7.0 h1:QyagRZ7wlSpjT5N2qQAh/pN+DVqgekv4DzbAiAiEL3c=
github.com/theupdateframework/notary v0.7.0/go.mod h1:c9DRxcmhHmVLDay4/2fUYdISnHqbFDGRSlXPO0AhYWw=
github.com/tonistiigi/dchapes-mode v0.0.0-20250318174251-73d941a28323 h1:r0p7fK56l8WPequOaR3i9LBqfPtEdXIQbUTzT55iqT4=
@@ -441,6 +443,8 @@ go.opentelemetry.io/proto/otlp v1.5.0 h1:xJvq7gMzB31/d406fB8U5CBdyQGw4P399D1aQWU
go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
+go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
+go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
@@ -466,8 +470,8 @@ golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
-golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
+golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=


@@ -72,9 +72,9 @@ var bakeTests = []func(t *testing.T, sb integration.Sandbox){
	testBakeMetadataWarningsDedup,
	testBakeMultiExporters,
	testBakeLoadPush,
-	testListTargets,
-	testListVariables,
-	testListTypedVariables,
+	testBakeListTargets,
+	testBakeListVariables,
+	testBakeListTypedVariables,
	testBakeCallCheck,
	testBakeCallCheckFlag,
	testBakeCallMetadata,
@@ -1691,7 +1691,7 @@ target "default" {
	// TODO: test metadata file when supported by multi exporters https://github.com/docker/buildx/issues/2181
}

-func testListTargets(t *testing.T, sb integration.Sandbox) {
+func testBakeListTargets(t *testing.T, sb integration.Sandbox) {
	bakefile := []byte(`
target "foo" {
	description = "This builds foo"
@@ -1714,7 +1714,7 @@ target "abc" {
	require.Equal(t, "TARGET\tDESCRIPTION\nabc\t\nfoo\tThis builds foo", strings.TrimSpace(out))
}

-func testListVariables(t *testing.T, sb integration.Sandbox) {
+func testBakeListVariables(t *testing.T, sb integration.Sandbox) {
	bakefile := []byte(`
variable "foo" {
	default = "bar"
@@ -1743,7 +1743,7 @@ target "default" {
	require.Equal(t, "VARIABLE\tTYPE\tVALUE\tDESCRIPTION\nabc\t\t\t<null>\t\ndef\t\t\t\t\nfoo\t\t\tbar\tThis is foo", strings.TrimSpace(out))
}

-func testListTypedVariables(t *testing.T, sb integration.Sandbox) {
+func testBakeListTypedVariables(t *testing.T, sb integration.Sandbox) {
	bakefile := []byte(`
variable "abc" {
	type = string


@@ -76,8 +76,9 @@ var buildTests = []func(t *testing.T, sb integration.Sandbox){
	testBuildSecret,
	testBuildDefaultLoad,
	testBuildCall,
-	testCheckCallOutput,
+	testBuildCheckCallOutput,
	testBuildExtraHosts,
+	testBuildIndexAnnotationsLoadDocker,
}

func testBuild(t *testing.T, sb integration.Sandbox) {
@@ -114,28 +115,155 @@ COPY --from=base /etc/bar /bar
}

func testBuildRemote(t *testing.T, sb integration.Sandbox) {
-	dockerfile := []byte(`
+	t.Run("default branch", func(t *testing.T) {
+		dockerfile := []byte(`
FROM busybox:latest
COPY foo /foo
`)
-	dir := tmpdir(
-		t,
-		fstest.CreateFile("Dockerfile", dockerfile, 0600),
-		fstest.CreateFile("foo", []byte("foo"), 0600),
-	)
-	dirDest := t.TempDir()
-
-	git, err := gitutil.New(gitutil.WithWorkingDir(dir))
-	require.NoError(t, err)
-
-	gittestutil.GitInit(git, t)
-	gittestutil.GitAdd(git, t, "Dockerfile", "foo")
-	gittestutil.GitCommit(git, t, "initial commit")
-	addr := gittestutil.GitServeHTTP(git, t)
-
-	out, err := buildCmd(sb, withDir(dir), withArgs("--output=type=local,dest="+dirDest, addr))
-	require.NoError(t, err, out)
-	require.FileExists(t, filepath.Join(dirDest, "foo"))
+		dir := tmpdir(
+			t,
+			fstest.CreateFile("Dockerfile", dockerfile, 0600),
+			fstest.CreateFile("foo", []byte("foo"), 0600),
+		)
+		dirDest := t.TempDir()
+
+		git, err := gitutil.New(gitutil.WithWorkingDir(dir))
+		require.NoError(t, err)
+
+		gittestutil.GitInit(git, t)
+		gittestutil.GitAdd(git, t, "Dockerfile", "foo")
+		gittestutil.GitCommit(git, t, "initial commit")
+		addr := gittestutil.GitServeHTTP(git, t)
+
+		out, err := buildCmd(sb, withDir(dir), withArgs("--output=type=local,dest="+dirDest, addr))
+		require.NoError(t, err, out)
+		require.FileExists(t, filepath.Join(dirDest, "foo"))
+	})
+
+	t.Run("tag ref with url fragment", func(t *testing.T) {
+		dockerfile := []byte(`
+FROM busybox:latest
+COPY foo /foo
+`)
+		dir := tmpdir(
+			t,
+			fstest.CreateFile("Dockerfile", dockerfile, 0600),
+			fstest.CreateFile("foo", []byte("foo"), 0600),
+		)
+		dirDest := t.TempDir()
+
+		git, err := gitutil.New(gitutil.WithWorkingDir(dir))
+		require.NoError(t, err)
+
+		gittestutil.GitInit(git, t)
+		gittestutil.GitAdd(git, t, "Dockerfile", "foo")
+		gittestutil.GitCommit(git, t, "initial commit")
+		gittestutil.GitTag(git, t, "v0.1.0")
+		addr := gittestutil.GitServeHTTP(git, t)
+		addr = addr + "#v0.1.0" // tag
+
+		out, err := buildCmd(sb, withDir(dir), withArgs("--output=type=local,dest="+dirDest, addr))
+		require.NoError(t, err, out)
+		require.FileExists(t, filepath.Join(dirDest, "foo"))
+	})
+
+	t.Run("tag ref with query string", func(t *testing.T) {
+		dockerfile := []byte(`
+FROM busybox:latest
+COPY foo /foo
+`)
+		dir := tmpdir(
+			t,
+			fstest.CreateFile("Dockerfile", dockerfile, 0600),
+			fstest.CreateFile("foo", []byte("foo"), 0600),
+		)
+		dirDest := t.TempDir()
+
+		git, err := gitutil.New(gitutil.WithWorkingDir(dir))
+		require.NoError(t, err)
+
+		gittestutil.GitInit(git, t)
+		gittestutil.GitAdd(git, t, "Dockerfile", "foo")
+		gittestutil.GitCommit(git, t, "initial commit")
+		gittestutil.GitTag(git, t, "v0.1.0")
+		addr := gittestutil.GitServeHTTP(git, t)
+		addr = addr + "?tag=v0.1.0" // tag
+
+		out, err := buildCmd(sb, withDir(dir), withArgs("--output=type=local,dest="+dirDest, addr))
+		if matchesBuildKitVersion(t, sb, ">= 0.24.0-0") {
+			require.NoError(t, err, out)
+			require.FileExists(t, filepath.Join(dirDest, "foo"))
+		} else {
+			require.Error(t, err)
+			require.Contains(t, out, "current frontend does not support Git URLs with query string components")
+		}
+	})
+
+	t.Run("tag ref with query string frontend 1.17", func(t *testing.T) {
+		dockerfile := []byte(`
+# syntax=docker/dockerfile:1.17
+FROM busybox:latest
+COPY foo /foo
+`)
+		dir := tmpdir(
+			t,
+			fstest.CreateFile("Dockerfile", dockerfile, 0600),
+			fstest.CreateFile("foo", []byte("foo"), 0600),
+		)
+		dirDest := t.TempDir()
+
+		git, err := gitutil.New(gitutil.WithWorkingDir(dir))
+		require.NoError(t, err)
+
+		gittestutil.GitInit(git, t)
+		gittestutil.GitAdd(git, t, "Dockerfile", "foo")
+		gittestutil.GitCommit(git, t, "initial commit")
+		gittestutil.GitTag(git, t, "v0.1.0")
+		addr := gittestutil.GitServeHTTP(git, t)
+		addr = addr + "?tag=v0.1.0" // tag
+
+		out, err := buildCmd(sb, withDir(dir), withArgs("--output=type=local,dest="+dirDest, addr))
+		if matchesBuildKitVersion(t, sb, ">= 0.24.0-0") {
+			require.NoError(t, err, out)
+			require.FileExists(t, filepath.Join(dirDest, "foo"))
+		} else {
+			require.Error(t, err)
+			require.Contains(t, out, "current frontend does not support Git URLs with query string components")
+		}
+	})
+
+	t.Run("tag ref with query string frontend 1.18.0", func(t *testing.T) {
+		dockerfile := []byte(`
+# syntax=docker/dockerfile-upstream:1.18.0
+FROM busybox:latest
+COPY foo /foo
+`)
+		dir := tmpdir(
+			t,
+			fstest.CreateFile("Dockerfile", dockerfile, 0600),
+			fstest.CreateFile("foo", []byte("foo"), 0600),
+		)
+		dirDest := t.TempDir()
+
+		git, err := gitutil.New(gitutil.WithWorkingDir(dir))
+		require.NoError(t, err)
+
+		gittestutil.GitInit(git, t)
+		gittestutil.GitAdd(git, t, "Dockerfile", "foo")
+		gittestutil.GitCommit(git, t, "initial commit")
+		gittestutil.GitTag(git, t, "v0.1.0")
+		addr := gittestutil.GitServeHTTP(git, t)
+		addr = addr + "?tag=v0.1.0" // tag
+
+		out, err := buildCmd(sb, withDir(dir), withArgs("--output=type=local,dest="+dirDest, addr))
+		if matchesBuildKitVersion(t, sb, ">= 0.24.0-0") {
+			require.NoError(t, err, out)
+			require.FileExists(t, filepath.Join(dirDest, "foo"))
+		} else {
+			require.Error(t, err)
+			require.Contains(t, out, "current frontend does not support Git URLs with query string components")
+		}
+	})
}
func testBuildLocalState(t *testing.T, sb integration.Sandbox) { func testBuildLocalState(t *testing.T, sb integration.Sandbox) {
@@ -1241,7 +1369,7 @@ COPy --from=base \
	})
}

-func testCheckCallOutput(t *testing.T, sb integration.Sandbox) {
+func testBuildCheckCallOutput(t *testing.T, sb integration.Sandbox) {
	t.Run("check for warning count msg in check without warnings", func(t *testing.T) {
		dockerfile := []byte(`
FROM busybox AS base
@@ -1341,6 +1469,17 @@ RUN cat /etc/hosts | grep myhostmulti | grep 162.242.195.82
	require.NoError(t, err, string(out))
}

+func testBuildIndexAnnotationsLoadDocker(t *testing.T, sb integration.Sandbox) {
+	if sb.DockerAddress() == "" {
+		t.Skip("only testing with docker available")
+	}
+	skipNoCompatBuildKit(t, sb, ">= 0.11.0-0", "annotations")
+
+	dir := createTestProject(t)
+	out, err := buildCmd(sb, withArgs("--annotation", "index:foo=bar", "--provenance", "false", "--output", "type=docker", dir))
+	require.Error(t, err, out)
+	require.Contains(t, out, "index annotations not supported for single platform export")
+}
+
func createTestProject(t *testing.T) string {
	dockerfile := []byte(`
FROM busybox:latest AS base

tests/compose.go (new file)

@ -0,0 +1,172 @@
package tests
import (
"fmt"
"os"
"testing"
"github.com/containerd/continuity/fs/fstest"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/util/contentutil"
"github.com/moby/buildkit/util/testutil"
"github.com/moby/buildkit/util/testutil/integration"
"github.com/pkg/errors"
"github.com/stretchr/testify/require"
)
var composeTests = []func(t *testing.T, sb integration.Sandbox){
testComposeBuildLocalStore,
testComposeBuildRegistry,
testComposeBuildMultiPlatform,
testComposeBuildCheck,
}
func testComposeBuildLocalStore(t *testing.T, sb integration.Sandbox) {
	if !isDockerWorker(sb) && !isDockerContainerWorker(sb) {
		t.Skip("only testing with docker and docker-container worker")
	}

	target := "buildx:local-" + identity.NewID()
	dir := composeTestProject(target, t)
	t.Cleanup(func() {
		cmd := dockerCmd(sb, withArgs("image", "rm", target))
		cmd.Stderr = os.Stderr
		require.NoError(t, cmd.Run())
	})

	cmd := composeCmd(sb, withDir(dir), withArgs("build"))
	out, err := cmd.CombinedOutput()
	require.NoError(t, err, string(out))

	cmd = dockerCmd(sb, withArgs("image", "inspect", target))
	cmd.Stderr = os.Stderr
	require.NoError(t, cmd.Run())
}
func testComposeBuildRegistry(t *testing.T, sb integration.Sandbox) {
	registry, err := sb.NewRegistry()
	if errors.Is(err, integration.ErrRequirements) {
		t.Skip(err.Error())
	}
	require.NoError(t, err)

	target := registry + "/buildx/registry:latest"
	dir := composeTestProject(target, t)

	cmd := composeCmd(sb, withDir(dir), withArgs("build", "--push"))
	out, err := cmd.CombinedOutput()
	require.NoError(t, err, string(out))

	desc, provider, err := contentutil.ProviderFromRef(target)
	require.NoError(t, err)
	_, err = testutil.ReadImages(sb.Context(), provider, desc)
	require.NoError(t, err)
}
func testComposeBuildMultiPlatform(t *testing.T, sb integration.Sandbox) {
	registry, err := sb.NewRegistry()
	if errors.Is(err, integration.ErrRequirements) {
		t.Skip(err.Error())
	}
	require.NoError(t, err)

	target := registry + "/buildx/registry:latest"

	dockerfile := []byte(`
FROM busybox:latest
COPY foo /etc/foo
`)
	composefile := fmt.Appendf([]byte{}, `
services:
  bar:
    build:
      context: .
      platforms:
        - linux/amd64
        - linux/arm64
    image: %s
`, target)
	dir := tmpdir(
		t,
		fstest.CreateFile("compose.yml", composefile, 0600),
		fstest.CreateFile("Dockerfile", dockerfile, 0600),
		fstest.CreateFile("foo", []byte("foo"), 0600),
	)

	cmd := composeCmd(sb, withDir(dir), withArgs("build", "--push"))
	out, err := cmd.CombinedOutput()
	if !isMobyWorker(sb) {
		require.NoError(t, err, string(out))

		desc, provider, err := contentutil.ProviderFromRef(target)
		require.NoError(t, err)
		imgs, err := testutil.ReadImages(sb.Context(), provider, desc)
		require.NoError(t, err)

		img := imgs.Find("linux/amd64")
		require.NotNil(t, img)
		img = imgs.Find("linux/arm64")
		require.NotNil(t, img)
	} else {
		require.Error(t, err, string(out))
		require.Contains(t, string(out), "Multi-platform build is not supported")
	}
}
func testComposeBuildCheck(t *testing.T, sb integration.Sandbox) {
	dockerfile := []byte(`
frOM busybox as base
cOpy Dockerfile .
from scratch
COPy --from=base \
  /Dockerfile \
  /
`)
	composefile := []byte(`
services:
  bar:
    build:
      context: .
`)
	dir := tmpdir(
		t,
		fstest.CreateFile("compose.yml", composefile, 0600),
		fstest.CreateFile("Dockerfile", dockerfile, 0600),
	)

	cmd := composeCmd(sb, withDir(dir), withArgs("build", "--check"))
	out, err := cmd.CombinedOutput()
	require.Error(t, err, string(out))
	require.Contains(t, string(out), "Check complete, 3 warnings have been found!")
}
func composeTestProject(imageName string, t *testing.T) string {
	dockerfile := []byte(`
FROM busybox:latest AS base
COPY foo /etc/foo
RUN cp /etc/foo /etc/bar

FROM scratch
COPY --from=base /etc/bar /bar
`)
	composefile := fmt.Appendf([]byte{}, `
services:
  bar:
    build:
      context: .
    image: %s
`, imageName)
	return tmpdir(
		t,
		fstest.CreateFile("compose.yml", composefile, 0600),
		fstest.CreateFile("Dockerfile", dockerfile, 0600),
		fstest.CreateFile("foo", []byte("foo"), 0600),
	)
}

tests/diskusage.go (new file, 51 lines)
@@ -0,0 +1,51 @@
package tests

import (
	"testing"

	"github.com/moby/buildkit/util/testutil/integration"
	"github.com/stretchr/testify/require"
)

var diskusageTests = []func(t *testing.T, sb integration.Sandbox){
	testDiskusage,
	testDiskusageVerbose,
	testDiskusageVerboseFormatError,
	testDiskusageFormatJSON,
	testDiskusageFormatGoTemplate,
}
func testDiskusage(t *testing.T, sb integration.Sandbox) {
	buildTestProject(t, sb)

	cmd := buildxCmd(sb, withArgs("du"))
	out, err := cmd.Output()
	require.NoError(t, err, string(out))
}

func testDiskusageVerbose(t *testing.T, sb integration.Sandbox) {
	buildTestProject(t, sb)

	cmd := buildxCmd(sb, withArgs("du", "--verbose"))
	out, err := cmd.Output()
	require.NoError(t, err, string(out))
}

func testDiskusageVerboseFormatError(t *testing.T, sb integration.Sandbox) {
	buildTestProject(t, sb)

	cmd := buildxCmd(sb, withArgs("du", "--verbose", "--format=json"))
	out, err := cmd.Output()
	require.Error(t, err, string(out))
}

func testDiskusageFormatJSON(t *testing.T, sb integration.Sandbox) {
	buildTestProject(t, sb)

	cmd := buildxCmd(sb, withArgs("du", "--format=json"))
	out, err := cmd.Output()
	require.NoError(t, err, string(out))
}

func testDiskusageFormatGoTemplate(t *testing.T, sb integration.Sandbox) {
	buildTestProject(t, sb)

	cmd := buildxCmd(sb, withArgs("du", "--format={{.ID}}: {{.Size}}"))
	out, err := cmd.Output()
	require.NoError(t, err, string(out))
}


@@ -20,6 +20,7 @@ var historyTests = []func(t *testing.T, sb integration.Sandbox){
	testHistoryLs,
	testHistoryRm,
	testHistoryLsStoppedBuilder,
	testHistoryBuildNameOverride,
}

func testHistoryExport(t *testing.T, sb integration.Sandbox) {
@@ -136,6 +137,45 @@ func testHistoryLsStoppedBuilder(t *testing.T, sb integration.Sandbox) {
	require.NoError(t, err, string(bout))
}
func testHistoryBuildNameOverride(t *testing.T, sb integration.Sandbox) {
	dir := createTestProject(t)
	out, err := buildCmd(sb, withArgs("--build-arg=BUILDKIT_BUILD_NAME=foobar", "--metadata-file", filepath.Join(dir, "md.json"), dir))
	require.NoError(t, err, string(out))

	dt, err := os.ReadFile(filepath.Join(dir, "md.json"))
	require.NoError(t, err)

	type mdT struct {
		BuildRef string `json:"buildx.build.ref"`
	}
	var md mdT
	err = json.Unmarshal(dt, &md)
	require.NoError(t, err)

	refParts := strings.Split(md.BuildRef, "/")
	require.Len(t, refParts, 3)

	cmd := buildxCmd(sb, withArgs("history", "ls", "--filter=ref="+refParts[2], "--format=json"))
	bout, err := cmd.Output()
	require.NoError(t, err, string(bout))

	type recT struct {
		Ref            string     `json:"ref"`
		Name           string     `json:"name"`
		Status         string     `json:"status"`
		CreatedAt      *time.Time `json:"created_at"`
		CompletedAt    *time.Time `json:"completed_at"`
		TotalSteps     int32      `json:"total_steps"`
		CompletedSteps int32      `json:"completed_steps"`
		CachedSteps    int32      `json:"cached_steps"`
	}
	var rec recT
	err = json.Unmarshal(bout, &rec)
	require.NoError(t, err)
	require.Equal(t, md.BuildRef, rec.Ref)
	require.Equal(t, "foobar", rec.Name)
}
type buildRef struct {
	Builder string
	Node    string


@@ -75,6 +75,30 @@ func buildxCmd(sb integration.Sandbox, opts ...cmdOpt) *exec.Cmd {
	return cmd
}
func composeCmd(sb integration.Sandbox, opts ...cmdOpt) *exec.Cmd {
	cmd := exec.Command("compose")
	cmd.Env = os.Environ()
	for _, opt := range opts {
		opt(cmd)
	}
	if builder := sb.Address(); builder != "" {
		cmd.Env = append(cmd.Env,
			"BUILDX_CONFIG="+buildxConfig(sb),
			"BUILDX_BUILDER="+builder,
		)
	}
	if context := sb.DockerAddress(); context != "" {
		cmd.Env = append(cmd.Env, "DOCKER_CONTEXT="+context)
	}
	if v := os.Getenv("GO_TEST_COVERPROFILE"); v != "" {
		coverDir := filepath.Join(filepath.Dir(v), "helpers")
		cmd.Env = append(cmd.Env, "GOCOVERDIR="+coverDir)
	}
	cmd.Env = append(cmd.Env, "COMPOSE_BAKE=true")
	return cmd
}
func dockerCmd(sb integration.Sandbox, opts ...cmdOpt) *exec.Cmd {
	cmd := exec.Command("docker")
	cmd.Env = os.Environ()


@@ -32,6 +32,8 @@ func TestIntegration(t *testing.T) {
	tests = append(tests, createTests...)
	tests = append(tests, rmTests...)
	tests = append(tests, dialstdioTests...)
	tests = append(tests, composeTests...)
	tests = append(tests, diskusageTests...)
	testIntegration(t, tests...)
}
@@ -47,6 +49,7 @@ func testIntegration(t *testing.T, funcs ...func(t *testing.T, sb integration.Sandbox)) {
		}
	}
	mirroredImages["moby/buildkit:buildx-stable-1"] = buildkitImage
	mirroredImages["docker/dockerfile-upstream:1.18.0"] = "docker.io/docker/dockerfile-upstream:1.18.0"
	mirrors := integration.WithMirroredImages(mirroredImages)
	tests := integration.TestFuncs(funcs...)


@@ -25,7 +25,7 @@ import (
	"strings"
	"github.com/sirupsen/logrus"
-	"gopkg.in/yaml.v3"
+	"go.yaml.in/yaml/v3"
	"github.com/compose-spec/compose-go/v2/consts"
	"github.com/compose-spec/compose-go/v2/dotenv"


@@ -56,7 +56,7 @@ func GetEnvFromFile(currentEnv map[string]string, filenames []string) (map[string]string, error) {
		return envMap, err
	}
-	env, err := ParseWithLookup(bytes.NewReader(b), func(k string) (string, bool) {
+	err = parseWithLookup(bytes.NewReader(b), envMap, func(k string) (string, bool) {
		v, ok := currentEnv[k]
		if ok {
			return v, true
@@ -67,9 +67,6 @@ func GetEnvFromFile(currentEnv map[string]string, filenames []string) (map[string]string, error) {
		if err != nil {
			return envMap, fmt.Errorf("failed to read %s: %w", dotEnvFile, err)
		}
-		for k, v := range env {
-			envMap[k] = v
-		}
	}
	return envMap, nil


@@ -43,7 +43,7 @@ import (
	"github.com/compose-spec/compose-go/v2/validation"
	"github.com/go-viper/mapstructure/v2"
	"github.com/sirupsen/logrus"
-	"gopkg.in/yaml.v3"
+	"go.yaml.in/yaml/v3"
)

// Options supported by Load


@@ -22,7 +22,7 @@ import (
	"strings"
	"github.com/compose-spec/compose-go/v2/tree"
-	"gopkg.in/yaml.v3"
+	"go.yaml.in/yaml/v3"
)

type ResetProcessor struct {


@@ -123,6 +123,8 @@
"no_cache": {"type": ["boolean", "string"], "description": "Do not use cache when building the image."},
"additional_contexts": {"$ref": "#/definitions/list_or_dict", "description": "Additional build contexts to use, specified as a map of name to context path or URL."},
"network": {"type": "string", "description": "Network mode to use for the build. Options include 'default', 'none', 'host', or a network name."},
"provenance": {"type": ["string","boolean"], "description": "Add a provenance attestation"},
"sbom": {"type": ["string","boolean"], "description": "Add a SBOM attestation"},
"pull": {"type": ["boolean", "string"], "description": "Always attempt to pull a newer version of the image."},
"target": {"type": "string", "description": "Build stage to target in a multi-stage Dockerfile."},
"shm_size": {"type": ["integer", "string"], "description": "Size of /dev/shm for the build container. A string value can use suffix like '2g' for 2 gigabytes."},
@@ -206,7 +208,8 @@
},
"container_name": {
	"type": "string",
-	"description": "Specify a custom container name, rather than a generated default name."
+	"description": "Specify a custom container name, rather than a generated default name.",
+	"pattern": "[a-zA-Z0-9][a-zA-Z0-9_.-]+"
},
"cpu_count": {
	"oneOf": [


@@ -57,10 +57,9 @@ func recurseExtract(value interface{}, pattern *regexp.Regexp) map[string]Variable {
	case []interface{}:
		for _, elem := range value {
-			if values, is := extractVariable(elem, pattern); is {
-				for _, v := range values {
-					m[v.Name] = v
-				}
+			submap := recurseExtract(elem, pattern)
+			for key, value := range submap {
+				m[key] = value
			}
		}
	}


@@ -17,16 +17,21 @@
package transform

import (
	"fmt"

	"github.com/compose-spec/compose-go/v2/tree"
)

-type transformFunc func(data any, p tree.Path, ignoreParseError bool) (any, error)
+// Func is a function that can transform data at a specific path
+type Func func(data any, p tree.Path, ignoreParseError bool) (any, error)

-var transformers = map[tree.Path]transformFunc{}
+var transformers = map[tree.Path]Func{}

func init() {
	transformers["services.*"] = transformService
	transformers["services.*.build.secrets.*"] = transformFileMount
	transformers["services.*.build.provenance"] = transformStringOrX
	transformers["services.*.build.sbom"] = transformStringOrX
	transformers["services.*.build.additional_contexts"] = transformKeyValue
	transformers["services.*.depends_on"] = transformDependsOn
	transformers["services.*.env_file"] = transformEnvFile
@@ -121,3 +126,12 @@ func transformMapping(v map[string]any, p tree.Path, ignoreParseError bool) (map[string]any, error) {
	}
	return v, nil
}

func transformStringOrX(data any, _ tree.Path, _ bool) (any, error) {
	switch v := data.(type) {
	case string:
		return v, nil
	default:
		return fmt.Sprint(v), nil
	}
}
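The new `provenance` and `sbom` fields are allowed as either string or boolean in the schema, and `transformStringOrX` collapses both shapes to a string before further processing. A standalone sketch of that normalization (the `tree.Path` and error plumbing from the diff are elided for brevity):

```go
package main

import "fmt"

// stringOrX mirrors transformStringOrX from the diff: strings pass through
// untouched, while any other YAML scalar (bool, int, ...) is stringified
// with fmt.Sprint.
func stringOrX(data any) any {
	switch v := data.(type) {
	case string:
		return v
	default:
		return fmt.Sprint(v)
	}
}

func main() {
	fmt.Println(stringOrX(true))       // booleans become "true"/"false"
	fmt.Println(stringOrX("mode=max")) // strings are unchanged
}
```

This is why `sbom: true` and `sbom: "true"` in a compose file end up identical by the time the value reaches the build backend.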


@@ -20,14 +20,20 @@ import (
	"github.com/compose-spec/compose-go/v2/tree"
)

-var defaultValues = map[tree.Path]transformFunc{}
+// DefaultValues contains the default value transformers for compose fields
+var DefaultValues = map[tree.Path]Func{}

func init() {
-	defaultValues["services.*.build"] = defaultBuildContext
-	defaultValues["services.*.secrets.*"] = defaultSecretMount
-	defaultValues["services.*.ports.*"] = portDefaults
-	defaultValues["services.*.deploy.resources.reservations.devices.*"] = deviceRequestDefaults
-	defaultValues["services.*.gpus.*"] = deviceRequestDefaults
+	DefaultValues["services.*.build"] = defaultBuildContext
+	DefaultValues["services.*.secrets.*"] = defaultSecretMount
+	DefaultValues["services.*.ports.*"] = portDefaults
+	DefaultValues["services.*.deploy.resources.reservations.devices.*"] = deviceRequestDefaults
+	DefaultValues["services.*.gpus.*"] = deviceRequestDefaults
+}
+
+// RegisterDefaultValue registers a custom transformer for the given path pattern
+func RegisterDefaultValue(path string, transformer Func) {
+	DefaultValues[tree.Path(path)] = transformer
}

// SetDefaultValues transforms a compose model to set default values to missing attributes
@@ -40,7 +46,7 @@ func SetDefaultValues(yaml map[string]any) (map[string]any, error) {
	}
}

func setDefaults(data any, p tree.Path) (any, error) {
-	for pattern, transformer := range defaultValues {
+	for pattern, transformer := range DefaultValues {
		if p.Matches(pattern) {
			t, err := transformer(data, p, false)
			if err != nil {
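The change above exports the default-value registry and adds `RegisterDefaultValue`, so callers can hook their own transformer for a path pattern. A toy model of that registry, with a simplified `Func` signature and a hypothetical `matches` stand-in for `tree.Path.Matches` (both are illustrative, not the library API):

```go
package main

import (
	"fmt"
	"strings"
)

// Func is a simplified transformer signature (error handling elided).
type Func func(data any, path string) any

// DefaultValues maps path patterns like "services.*.build" to transformers.
var DefaultValues = map[string]Func{}

// RegisterDefaultValue registers a transformer for a path pattern,
// analogous to the exported helper in the diff.
func RegisterDefaultValue(path string, fn Func) { DefaultValues[path] = fn }

// matches is a toy stand-in for tree.Path.Matches: "*" matches any
// single dot-separated segment.
func matches(path, pattern string) bool {
	ps, qs := strings.Split(path, "."), strings.Split(pattern, ".")
	if len(ps) != len(qs) {
		return false
	}
	for i := range ps {
		if qs[i] != "*" && qs[i] != ps[i] {
			return false
		}
	}
	return true
}

// setDefaults applies every registered transformer whose pattern matches.
func setDefaults(data any, path string) any {
	for pattern, fn := range DefaultValues {
		if matches(path, pattern) {
			data = fn(data, path)
		}
	}
	return data
}

func main() {
	RegisterDefaultValue("services.*.build", func(data any, _ string) any {
		if data == nil {
			return map[string]any{"context": "."} // default build context
		}
		return data
	})
	fmt.Println(setDefaults(nil, "services.web.build"))
}
```

The design point of the diff is exactly this: by exporting the map and adding a registration function, downstream tools (like buildx) can inject defaults without forking the loader.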


@@ -34,7 +34,7 @@ import "github.com/mattn/go-shellwords"
// preserved so that it can override any base value (e.g. container entrypoint).
//
// The different semantics between YAML and JSON are due to limitations with
-// JSON marshaling + `omitempty` in the Go stdlib, while gopkg.in/yaml.v3 gives
+// JSON marshaling + `omitempty` in the Go stdlib, while go.yaml.in/yaml/v3 gives
// us more flexibility via the yaml.IsZeroer interface.
//
// In the future, it might make sense to make fields of this type be
@@ -58,7 +58,7 @@ func (s ShellCommand) IsZero() bool {
// accurately if the `omitempty` struct tag is omitted/forgotten.
//
// A similar MarshalJSON() implementation is not needed because the Go stdlib
-// already serializes nil slices to `null`, whereas gopkg.in/yaml.v3 by default
+// already serializes nil slices to `null`, whereas go.yaml.in/yaml/v3 by default
// serializes nil slices to `[]`.
func (s ShellCommand) MarshalYAML() (interface{}, error) {
	if s == nil {


@@ -875,6 +875,8 @@ func deriveDeepCopy_6(dst, src *BuildConfig) {
	} else {
		dst.Args = nil
	}
	dst.Provenance = src.Provenance
	dst.SBOM = src.SBOM
	if src.SSH == nil {
		dst.SSH = nil
	} else {


@@ -95,7 +95,7 @@ func (m *MappingWithEquals) DecodeMapstructure(value interface{}) error {
		mapping := make(MappingWithEquals, len(v))
		for _, s := range v {
			k, e, ok := strings.Cut(fmt.Sprint(s), "=")
-			if unicode.IsSpace(rune(k[len(k)-1])) {
+			if k != "" && unicode.IsSpace(rune(k[len(k)-1])) {
				return fmt.Errorf("environment variable %s is declared with a trailing space", k)
			}
			if !ok {


@@ -32,8 +32,8 @@ import (
	"github.com/compose-spec/compose-go/v2/utils"
	"github.com/distribution/reference"
	godigest "github.com/opencontainers/go-digest"
+	"go.yaml.in/yaml/v3"
	"golang.org/x/sync/errgroup"
-	"gopkg.in/yaml.v3"
)

// Project is the result of loading a set of compose files
@@ -118,6 +118,16 @@ func (p *Project) ConfigNames() []string {
	return names
}

// ModelNames return names for all models in this Compose config
func (p *Project) ModelNames() []string {
	var names []string
	for k := range p.Models {
		names = append(names, k)
	}
	sort.Strings(names)
	return names
}

func (p *Project) ServicesWithBuild() []string {
	servicesBuild := p.Services.Filter(func(s ServiceConfig) bool {
		return s.Build != nil && s.Build.Context != ""
@@ -139,6 +149,11 @@ func (p *Project) ServicesWithDependsOn() []string {
	return slices.Collect(maps.Keys(servicesDependsOn))
}

func (p *Project) ServicesWithModels() []string {
	servicesModels := p.Services.Filter(func(s ServiceConfig) bool { return len(s.Models) > 0 })
	return slices.Collect(maps.Keys(servicesModels))
}

func (p *Project) ServicesWithCapabilities() ([]string, []string, []string) {
	capabilities := []string{}
	gpu := []string{}


@@ -309,6 +309,8 @@ type BuildConfig struct {
	DockerfileInline string            `yaml:"dockerfile_inline,omitempty" json:"dockerfile_inline,omitempty"`
	Entitlements     []string          `yaml:"entitlements,omitempty" json:"entitlements,omitempty"`
	Args             MappingWithEquals `yaml:"args,omitempty" json:"args,omitempty"`
	Provenance       string            `yaml:"provenance,omitempty" json:"provenance,omitempty"`
	SBOM             string            `yaml:"sbom,omitempty" json:"sbom,omitempty"`
	SSH              SSHConfig         `yaml:"ssh,omitempty" json:"ssh,omitempty"`
	Labels           Labels            `yaml:"labels,omitempty" json:"labels,omitempty"`
	CacheFrom        StringList        `yaml:"cache_from,omitempty" json:"cache_from,omitempty"`


@@ -474,7 +474,18 @@ func (r dockerFetcher) open(ctx context.Context, req *request, mediatype string,
		return nil, err
	}

-	body := resp.Body
+	body := &fnOnClose{
+		BeforeClose: func() {
+			r.Release(1)
+		},
+		ReadCloser: resp.Body,
+	}
+	defer func() {
+		if retErr != nil {
+			body.Close()
+		}
+	}()

	encoding := strings.FieldsFunc(resp.Header.Get("Content-Encoding"), func(r rune) bool {
		return r == ' ' || r == '\t' || r == ','
	})
@@ -505,29 +516,33 @@ func (r dockerFetcher) open(ctx context.Context, req *request, mediatype string,
	for i := range numChunks {
		readers[i], writers[i] = newPipeWriter(bufPool)
	}

	// keep reference of the initial body value to ensure it is closed
	ibody := body
	go func() {
		for i := range numChunks {
			select {
			case queue <- i:
			case <-done:
				if i == 0 {
					ibody.Close()
				}
				return // avoid leaking a goroutine if we exit early.
			}
		}
		close(queue)
	}()

-	r.Release(1)
	for range parallelism {
		go func() {
			for i := range queue { // first in first out
				copy := func() error {
					if err := r.Acquire(ctx, 1); err != nil {
						return err
					}
					defer r.Release(1)
					var body io.ReadCloser
					if i == 0 {
-						body = resp.Body
+						body = ibody
					} else {
-						if err := r.Acquire(ctx, 1); err != nil {
-							return err
-						}
-						defer r.Release(1)
						reqClone := req.clone()
						reqClone.setOffset(offset + i*chunkSize)
						nresp, err := reqClone.doWithRetries(ctx, lastHost, withErrorCheck)
@@ -564,32 +579,27 @@ func (r dockerFetcher) open(ctx context.Context, req *request, mediatype string,
		},
		ReadCloser: io.NopCloser(io.MultiReader(readers...)),
	}
-	} else {
-		body = &fnOnClose{
-			BeforeClose: func() {
-				r.Release(1)
-			},
-			ReadCloser: body,
-		}
	}

	for i := len(encoding) - 1; i >= 0; i-- {
		algorithm := strings.ToLower(encoding[i])
		switch algorithm {
		case "zstd":
-			r, err := zstd.NewReader(body,
+			r, err := zstd.NewReader(body.ReadCloser,
				zstd.WithDecoderLowmem(false),
			)
			if err != nil {
				return nil, err
			}
-			body = r.IOReadCloser()
+			body.ReadCloser = r.IOReadCloser()
		case "gzip":
-			body, err = gzip.NewReader(body)
+			r, err := gzip.NewReader(body.ReadCloser)
			if err != nil {
				return nil, err
			}
+			body.ReadCloser = r
		case "deflate":
-			body = flate.NewReader(body)
+			body.ReadCloser = flate.NewReader(body.ReadCloser)
		case "identity", "":
			// no content-encoding applied, use raw body
		default:


@@ -24,7 +24,7 @@ var (
	Package = "github.com/containerd/containerd/v2"

	// Version holds the complete version number. Filled in at linking time.
-	Version = "2.1.3+unknown"
+	Version = "2.1.4+unknown"

	// Revision is filled with the VCS (e.g. git) revision being used to build
	// the program at linking time.


@@ -1,3 +1,4 @@
+// Package md2man aims in converting markdown into roff (man pages).
package md2man

import (


@@ -47,13 +47,13 @@ const (
	tableStart        = "\n.TS\nallbox;\n"
	tableEnd          = ".TE\n"
	tableCellStart    = "T{\n"
-	tableCellEnd      = "\nT}\n"
+	tableCellEnd      = "\nT}"
	tablePreprocessor = `'\" t`
)

// NewRoffRenderer creates a new blackfriday Renderer for generating roff documents
// from markdown
-func NewRoffRenderer() *roffRenderer { // nolint: golint
+func NewRoffRenderer() *roffRenderer {
	return &roffRenderer{}
}
@@ -316,9 +316,8 @@ func (r *roffRenderer) handleTableCell(w io.Writer, node *blackfriday.Node, entering bool) {
	} else if nodeLiteralSize(node) > 30 {
		end = tableCellEnd
	}
-	if node.Next == nil && end != tableCellEnd {
-		// Last cell: need to carriage return if we are at the end of the
-		// header row and content isn't wrapped in a "tablecell"
+	if node.Next == nil {
+		// Last cell: need to carriage return if we are at the end of the header row.
		end += crTag
	}
	out(w, end)
@@ -356,7 +355,7 @@ func countColumns(node *blackfriday.Node) int {
}

func out(w io.Writer, output string) {
-	io.WriteString(w, output) // nolint: errcheck
+	io.WriteString(w, output) //nolint:errcheck
}

func escapeSpecialChars(w io.Writer, text []byte) {
@@ -395,7 +394,7 @@ func escapeSpecialCharsLine(w io.Writer, text []byte) {
		i++
	}
	if i > org {
-		w.Write(text[org:i]) // nolint: errcheck
+		w.Write(text[org:i]) //nolint:errcheck
	}

	// escape a character
@@ -403,7 +402,7 @@ func escapeSpecialCharsLine(w io.Writer, text []byte) {
		break
	}
-	w.Write([]byte{'\\', text[i]}) // nolint: errcheck
+	w.Write([]byte{'\\', text[i]}) //nolint:errcheck
}
}


@@ -175,11 +175,24 @@ func newPluginCommand(dockerCli *command.DockerCli, plugin *cobra.Command, meta metadata.Metadata) *cli.TopLevelCommand {
		newMetadataSubcommand(plugin, meta),
	)

-	cli.DisableFlagsInUseLine(cmd)
+	visitAll(cmd,
+		// prevent adding "[flags]" to the end of the usage line.
+		func(c *cobra.Command) { c.DisableFlagsInUseLine = true },
+	)
	return cli.NewTopLevelCommand(cmd, dockerCli, opts, cmd.Flags())
}

// visitAll traverses all commands from the root.
func visitAll(root *cobra.Command, fns ...func(*cobra.Command)) {
	for _, cmd := range root.Commands() {
		visitAll(cmd, fns...)
	}
	for _, fn := range fns {
		fn(root)
	}
}

func newMetadataSubcommand(plugin *cobra.Command, meta metadata.Metadata) *cobra.Command {
	if meta.ShortDescription == "" {
		meta.ShortDescription = plugin.Short


@@ -168,34 +168,30 @@ func (tcmd *TopLevelCommand) Initialize(ops ...command.CLIOption) error {
}

// VisitAll will traverse all commands from the root.
-// This is different from the VisitAll of cobra.Command where only parents
-// are checked.
+//
+// Deprecated: this utility was only used internally and will be removed in the next release.
func VisitAll(root *cobra.Command, fn func(*cobra.Command)) {
	visitAll(root, fn)
}

func visitAll(root *cobra.Command, fn func(*cobra.Command)) {
	for _, cmd := range root.Commands() {
-		VisitAll(cmd, fn)
+		visitAll(cmd, fn)
	}
	fn(root)
}

// DisableFlagsInUseLine sets the DisableFlagsInUseLine flag on all
// commands within the tree rooted at cmd.
+//
+// Deprecated: this utility was only used internally and will be removed in the next release.
func DisableFlagsInUseLine(cmd *cobra.Command) {
-	VisitAll(cmd, func(ccmd *cobra.Command) {
+	visitAll(cmd, func(ccmd *cobra.Command) {
		// do not add a `[flags]` to the end of the usage line.
		ccmd.DisableFlagsInUseLine = true
	})
}

-// HasCompletionArg returns true if a cobra completion arg request is found.
-func HasCompletionArg(args []string) bool {
-	for _, arg := range args {
-		if arg == cobra.ShellCompRequestCmd || arg == cobra.ShellCompNoDescRequestCmd {
-			return true
-		}
-	}
-	return false
-}

var helpCommand = &cobra.Command{
	Use:   "help [command]",
	Short: "Help about the command",


@@ -282,6 +282,17 @@ func (cli *DockerCli) Initialize(opts *cliflags.ClientOptions, ops ...CLIOption) error {
	}

	filterResourceAttributesEnvvar()

	// early return if GODEBUG is already set or the docker context is
	// the default context, i.e. is a virtual context where we won't override
	// any GODEBUG values.
	if v := os.Getenv("GODEBUG"); cli.currentContext == DefaultContextName || v != "" {
		return nil
	}

	meta, err := cli.contextStore.GetMetadata(cli.currentContext)
	if err == nil {
		setGoDebug(meta)
	}

	return nil
}
@@ -475,6 +486,57 @@ func (cli *DockerCli) getDockerEndPoint() (ep docker.Endpoint, err error) {
	return resolveDockerEndpoint(cli.contextStore, cn)
}

// setGoDebug is an escape hatch that sets the GODEBUG environment
// variable value using docker context metadata.
//
//	{
//	  "Name": "my-context",
//	  "Metadata": { "GODEBUG": "x509negativeserial=1" }
//	}
//
// WARNING: Setting x509negativeserial=1 allows Go's x509 library to accept
// X.509 certificates with negative serial numbers.
// This behavior is deprecated and non-compliant with current security
// standards (RFC 5280). Accepting negative serial numbers can introduce
// serious security vulnerabilities, including the risk of certificate
// collision or bypass attacks.
// This option should only be used for legacy compatibility and never in
// production environments.
// Use at your own risk.
func setGoDebug(meta store.Metadata) {
	fieldName := "GODEBUG"
	godebugEnv := os.Getenv(fieldName)
	// early return if GODEBUG is already set. We don't want to override what
	// the user already sets.
	if godebugEnv != "" {
		return
	}

	var cfg any
	var ok bool
	switch m := meta.Metadata.(type) {
	case DockerContext:
		cfg, ok = m.AdditionalFields[fieldName]
		if !ok {
			return
		}
	case map[string]any:
		cfg, ok = m[fieldName]
		if !ok {
			return
		}
	default:
		return
	}

	v, ok := cfg.(string)
	if !ok {
		return
	}

	// set the GODEBUG environment variable with whatever was in the context
	_ = os.Setenv(fieldName, v)
}
func (cli *DockerCli) initialize() error { func (cli *DockerCli) initialize() error {
cli.init.Do(func() { cli.init.Do(func() {
cli.dockerEndpoint, cli.initErr = cli.getDockerEndPoint() cli.dockerEndpoint, cli.initErr = cli.getDockerEndPoint()


@@ -7,7 +7,6 @@ import (
 	"time"
 	"github.com/docker/docker/api/types/build"
-	"github.com/docker/docker/pkg/stringid"
 	"github.com/docker/go-units"
 )
@@ -115,7 +114,7 @@ func (c *buildCacheContext) MarshalJSON() ([]byte, error) {
 func (c *buildCacheContext) ID() string {
 	id := c.v.ID
 	if c.trunc {
-		id = stringid.TruncateID(c.v.ID)
+		id = TruncateID(c.v.ID)
 	}
 	if c.v.InUse {
 		return id + "*"
@@ -131,7 +130,7 @@ func (c *buildCacheContext) Parent() string {
 		parent = c.v.Parent //nolint:staticcheck // Ignore SA1019: Field was deprecated in API v1.42, but kept for backward compatibility
 	}
 	if c.trunc {
-		return stringid.TruncateID(parent)
+		return TruncateID(parent)
 	}
 	return parent
 }


@@ -14,7 +14,6 @@ import (
 	"github.com/containerd/platforms"
 	"github.com/distribution/reference"
 	"github.com/docker/docker/api/types/container"
-	"github.com/docker/docker/pkg/stringid"
 	"github.com/docker/go-units"
 	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
 )
@@ -135,7 +134,7 @@ func (c *ContainerContext) MarshalJSON() ([]byte, error) {
 // option being set, the full or truncated ID is returned.
 func (c *ContainerContext) ID() string {
 	if c.trunc {
-		return stringid.TruncateID(c.c.ID)
+		return TruncateID(c.c.ID)
 	}
 	return c.c.ID
 }
@@ -172,7 +171,7 @@ func (c *ContainerContext) Image() string {
 		return "<no image>"
 	}
 	if c.trunc {
-		if trunc := stringid.TruncateID(c.c.ImageID); trunc == stringid.TruncateID(c.c.Image) {
+		if trunc := TruncateID(c.c.ImageID); trunc == TruncateID(c.c.Image) {
 			return trunc
 		}
 		// truncate digest if no-trunc option was not selected


@@ -11,7 +11,7 @@ import (
 	"github.com/docker/docker/api/types/container"
 	"github.com/docker/docker/api/types/image"
 	"github.com/docker/docker/api/types/volume"
-	units "github.com/docker/go-units"
+	"github.com/docker/go-units"
 )
 
 const (

Some files were not shown because too many files have changed in this diff.