Compare commits

...

1325 Commits

Author SHA1 Message Date
Tõnis Tiigi ae4e7ee6a4
Merge pull request #3370 from thaJeztah/bump_engine
vendor: github.com/docker/docker, docker/cli v28.4.0
2025-09-05 14:23:53 -07:00
CrazyMax 70487beecb
Merge pull request #3405 from thaJeztah/dockerfile_bump_docker
Dockerfile: update to docker v28.4.0
2025-09-05 10:03:38 +02:00
CrazyMax 86ddc5de4e
Merge pull request #3406 from docker/dependabot/github_actions/actions/labeler-6
build(deps): bump actions/labeler from 5 to 6
2025-09-05 10:03:17 +02:00
CrazyMax 7bcaf399b9
Merge pull request #3407 from docker/dependabot/github_actions/actions/setup-go-6
build(deps): bump actions/setup-go from 5 to 6
2025-09-05 10:02:58 +02:00
dependabot[bot] dc10c680f3
build(deps): bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-04 18:03:46 +00:00
dependabot[bot] 9c9fb2a12a
build(deps): bump actions/labeler from 5 to 6
Bumps [actions/labeler](https://github.com/actions/labeler) from 5 to 6.
- [Release notes](https://github.com/actions/labeler/releases)
- [Commits](https://github.com/actions/labeler/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/labeler
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-04 18:03:42 +00:00
Sebastiaan van Stijn b4d5ec9bc2
Dockerfile: update to docker v28.4.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-09-04 00:24:06 +02:00
Sebastiaan van Stijn a923dbc1d9
vendor: github.com/docker/docker, docker/cli v28.4.0
full diffs:

- https://github.com/docker/docker/compare/v28.3.3...v28.4.0
- https://github.com/docker/cli/compare/v28.3.3...v28.4.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-09-04 00:11:44 +02:00
Tõnis Tiigi bafc4e207e
Merge pull request #3402 from jsternberg/dap-test-close-client
dap: ensure test client is closed on cleanup
2025-09-03 11:23:51 -07:00
Tõnis Tiigi 2109c9d80d
Merge pull request #3399 from crazy-max/test-gitquerystring
test: git query string
2025-09-03 09:35:04 -07:00
Jonathan A. Sternberg 8841b2dfc8
dap: ensure test client is closed on cleanup
The dap test wasn't waiting for the client's goroutines to complete
before exiting, causing a race condition where it could log to the dead
test logger. This became apparent when a `--count` greater than one was
used, since that made the test run long enough to trigger the behavior.
It would have also triggered if we had added more tests.

Add the client close to the cleanup so the test waits for the goroutine
to finish before exiting, as it was supposed to do all along.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-09-03 10:51:01 -05:00
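A minimal sketch of the pattern behind this fix, with a stand-in client type rather than the real dap test client: `t.Cleanup` runs `Close`, and `Close` blocks until the background goroutine exits, so nothing can log to a dead test logger.

```go
package example

import (
	"sync"
	"testing"
)

// testClient is a stand-in for the real dap test client: it runs one
// background goroutine that logs through the test logger.
type testClient struct {
	wg   sync.WaitGroup
	done chan struct{}
}

func newTestClient(t *testing.T) *testClient {
	c := &testClient{done: make(chan struct{})}
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		<-c.done
		t.Log("client goroutine exiting") // racy if the test has already ended
	}()
	return c
}

// Close blocks until the background goroutine has finished.
func (c *testClient) Close() error {
	close(c.done)
	c.wg.Wait()
	return nil
}

func TestClientClosedOnCleanup(t *testing.T) {
	c := newTestClient(t)
	// Cleanup runs before the test logger is torn down, so the
	// goroutine's final log line is always safe.
	t.Cleanup(func() {
		if err := c.Close(); err != nil {
			t.Errorf("close client: %v", err)
		}
	})
}
```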
CrazyMax 643322cbc3
test: git query string
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-09-03 17:49:38 +02:00
Akihiro Suda 056780314b
Merge pull request #3401 from crazy-max/buildkit-0.24.0
vendor: update buildkit to v0.24.0
2025-09-03 23:33:10 +09:00
CrazyMax d136d2ba53
vendor: update buildkit to v0.24.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-09-03 16:14:35 +02:00
Akihiro Suda e4f23adf3f
Merge pull request #3398 from tonistiigi/gitquerystring-cap-detect
git querystring frontend capability detection
2025-09-03 16:50:10 +09:00
Tonis Tiigi 5e6951c571
git querystring frontend capability detection
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-09-02 10:46:37 -07:00
Tõnis Tiigi d873cae872
Merge pull request #3392 from crazy-max/docs-du-filter
docs: list available filters for du and prune commands
2025-08-29 15:05:35 -07:00
Tõnis Tiigi 4df89d89fc
Merge pull request #3397 from tonistiigi/update-buildkit-v0.24.0-rc2
vendor: update buildkit to v0.24.0-rc2
2025-08-29 15:04:27 -07:00
Tonis Tiigi 1f39ad2001
vendor: update buildkit to v0.24.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-08-29 14:49:16 -07:00
Tõnis Tiigi ce3592e4ab
Merge pull request #3390 from crazy-max/docs-du-fixes
docs: fixes for du command
2025-08-29 10:37:22 -07:00
Tõnis Tiigi 67218bef58
Merge pull request #3394 from thaJeztah/check_DisableFlagsInUseLine
commands: verify that DisableFlagsInUseLine is set for all commands
2025-08-28 12:53:21 -07:00
Sebastiaan van Stijn 07b99ae7bf
commands: verify that DisableFlagsInUseLine is set for all commands
This replaces the DisableFlagsInUseLine call from the CLI with a test
that verifies the option is set for all commands and subcommands, so
that it doesn't have to be modified at runtime.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-08-28 17:55:50 +02:00
CrazyMax ebe66a8e2e
docs: list available filters for du and prune commands
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-28 15:32:10 +02:00
CrazyMax ce07ae04cd
docs: fixes for du command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-28 11:25:53 +02:00
Tõnis Tiigi bb41e835b6
Merge pull request #3375 from jsternberg/update-dap-docs
docs: update dap docs to reflect updates to the debugger
2025-08-27 15:03:15 -07:00
Tõnis Tiigi 31a3fbf107
Merge pull request #3387 from tonistiigi/update-buildkit-v0.24.0-rc1
vendor: update buildkit to v0.24.0-rc1
2025-08-27 15:01:12 -07:00
Tonis Tiigi 440dc2a212
temp skip DAP test that panics in errgroup goroutine
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-08-27 14:38:37 -07:00
Tõnis Tiigi d94a6cf92a
Merge pull request #3377 from crazy-max/du-json
cmd: multiple formats output support for du command
2025-08-27 13:45:23 -07:00
Tonis Tiigi ec3b99180b
vendor: update buildkit to v0.24.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-08-27 13:39:14 -07:00
CrazyMax f0646eeab5
tests: diskusage command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-27 18:07:14 +02:00
CrazyMax b6baad406b
cmd: multiple formats output support for du command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-27 18:07:14 +02:00
Tõnis Tiigi df7c46b02d
Merge pull request #3384 from crazy-max/export-annotations-check
build: fail early if trying to export index annotations with moby exporter
2025-08-27 08:44:51 -07:00
Tõnis Tiigi 026e55b376
Merge pull request #3386 from crazy-max/winsymlink0
restore junctions to have os.ModeSymlink flag set on Windows
2025-08-27 08:38:43 -07:00
CrazyMax 300a136d4c
restore junctions to have os.ModeSymlink flag set on Windows
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-27 15:06:48 +02:00
CrazyMax a8f546eea5
build: fail early if trying to export index annotations with moby exporter
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-27 11:17:31 +02:00
Tõnis Tiigi 177b980958
Merge pull request #3385 from marxarelli/feature/support-buildkit-syntax-arg
Add BUILDKIT_SYNTAX option handling
2025-08-27 03:19:45 +03:00
Dan Duvall fc3ecb60fb Preserve raw BUILDKIT_SYNTAX as cmdline option
Set gateway `source` to the first part of `BUILDKIT_SYNTAX` and
`cmdline` to the entire raw value to preserve additional options.

Signed-off-by: Dan Duvall <dduvall@wikimedia.org>
2025-08-26 13:56:07 -07:00
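A minimal sketch of the split described in this commit (not the actual buildx code; the trailing option in the demo is hypothetical): the first whitespace-separated field becomes the gateway `source`, while `cmdline` keeps the entire raw value.

```go
package main

import (
	"fmt"
	"strings"
)

// splitSyntax mirrors the behavior described above: "source" is the
// first field of BUILDKIT_SYNTAX, and "cmdline" preserves the raw
// value so additional options survive.
func splitSyntax(raw string) (source, cmdline string) {
	if fields := strings.Fields(raw); len(fields) > 0 {
		source = fields[0]
	}
	return source, raw
}

func main() {
	src, cmd := splitSyntax("docker/dockerfile:1 --hypothetical-opt")
	fmt.Println(src) // docker/dockerfile:1
	fmt.Println(cmd) // docker/dockerfile:1 --hypothetical-opt
}
```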
Will Nonnemaker b99e799f00 Add BUILDKIT_SYNTAX option handling
This fix allows building with a remote builder where the dockerfile.v0
frontend is disabled (`frontend.dockerfile.v0 enabled = false`) in the
buildkitd config file.

Note that this change only allows BUILDKIT_SYNTAX to be used with a
custom frontend image; using the #syntax directive in this case will
still fail.

Resolves: docker#3077

Signed-off-by: Will Nonnemaker <wnonnemaker@gmail.com>
2025-08-26 13:51:22 -07:00
CrazyMax 15da6042cc
Merge pull request #3379 from crazy-max/update-pflag
vendor: github.com/spf13/pflag v1.0.7
2025-08-26 17:29:30 +02:00
CrazyMax bfeb19abc8
vendor: github.com/spf13/pflag v1.0.7
full diff: https://github.com/spf13/pflag/compare/v1.0.6...v1.0.7

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-26 13:37:43 +02:00
CrazyMax d7cd677480
Merge pull request #3378 from crazy-max/update-testify
vendor: github.com/stretchr/testify v1.11.0
2025-08-26 13:34:59 +02:00
CrazyMax 149b2a231b
vendor: github.com/stretchr/testify v1.11.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-25 15:19:36 +02:00
Tõnis Tiigi a6e198a341
Merge pull request #3301 from crazy-max/ci-matrix-subaction
ci(validate): use matrix subaction
2025-08-21 10:49:29 +03:00
CrazyMax 159a68cbb8
ci(validate): use matrix subaction
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-21 08:19:26 +02:00
Jonathan A. Sternberg a7c54da345
docs: update dap docs to reflect updates to the debugger
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-08-19 10:19:09 -05:00
Akihiro Suda 2d65b12a65
Merge pull request #3373 from tonistiigi/kubernetes-env
kubernetes: add env driver opt to kubernetes
2025-08-19 00:38:25 +09:00
Tõnis Tiigi bac71def78
Merge pull request #3366 from jsternberg/dap-detect-parent
dap: improve determination of the proper parent for certain ops
2025-08-18 18:34:57 +03:00
Tonis Tiigi 9f721e3190
kubernetes: add env driver opt to kubernetes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-08-18 18:11:53 +03:00
Jonathan A. Sternberg 5c97696d64
dap: improve determination of the proper parent for certain ops
Improves the determination of the proper parent for exec and file ops.
With file ops, it will only consider inputs and ignore secondary inputs.
This prevents the following case:

```
FROM busybox AS build1
RUN echo foo > /hello

FROM scratch
COPY --from=build1 /hello .
```

Previously, `build1` would be considered the parent of the copy
instruction. Now, copy properly does not have a parent.

If there are multiple file ops and the operations disagree on the
canonical "parent", we give up on trying to find a canonical parent and
assume there is none.

For exec operations, whichever input is associated with the root mount
is considered the primary parent.

For all other operations, the first parent is considered the primary
parent if it exists.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-08-18 09:27:02 -05:00
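The give-up-on-disagreement rule reduces to a small helper; a sketch that assumes parents are compared as digest strings (the real types are buildx-internal):

```go
package main

import "fmt"

// canonicalParent returns a parent only if every candidate agrees;
// otherwise we give up and assume there is none, as described above.
func canonicalParent(candidates []string) (string, bool) {
	var parent string
	for _, c := range candidates {
		if parent != "" && c != parent {
			return "", false // file ops disagree: no canonical parent
		}
		parent = c
	}
	return parent, parent != ""
}

func main() {
	fmt.Println(canonicalParent([]string{"sha256:aaa", "sha256:aaa"})) // sha256:aaa true
	fmt.Println(canonicalParent([]string{"sha256:aaa", "sha256:bbb"})) // "" false (disagreement)
}
```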
Tõnis Tiigi 8033908d09
Merge pull request #3371 from jsternberg/dap-nested-dockerfile-path
dap: look for base name of dockerfile name instead of path from context
2025-08-16 08:26:40 +03:00
Jonathan A. Sternberg b3c389690c
dap: look for base name of dockerfile name instead of path from context
When the builder loads a dockerfile, it does it by using the base name
of the dockerfile path and only loads the innermost directory. This
means that the source name we're looking for is the base name and not
the full relative path.

Update the set breakpoints functionality so it takes this into account.
Fixes scenarios where DAP is used with a dockerfile nested in the
context.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-08-15 14:41:00 -05:00
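The base-name rule in miniature, using only the standard library (the nested path is an arbitrary example):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// A dockerfile nested inside the build context...
	fromLaunchConfig := "services/api/Dockerfile"
	// ...but the builder reports only the innermost name, so breakpoint
	// matching has to compare base names rather than relative paths.
	fmt.Println(filepath.Base(fromLaunchConfig)) // Dockerfile
}
```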
Tõnis Tiigi 10605b8c35
Merge pull request #3320 from crazy-max/mount-wsl-lib
driver: mount wsl lib folder for docker-container driver
2025-08-13 15:42:38 +03:00
Tõnis Tiigi 4f9f47deec
Merge pull request #3341 from jsternberg/dap-persistent-exec
dap: make exec shell persistent across the build
2025-08-13 15:34:54 +03:00
CrazyMax da81bc15b3
Merge pull request #3364 from docker/dependabot/github_actions/actions/checkout-5
build(deps): bump actions/checkout from 4 to 5
2025-08-13 09:21:26 +02:00
dependabot[bot] a3fa6a7b15
build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 22:51:34 +00:00
Jonathan A. Sternberg dbda218489
dap: make exec shell persistent across the build
Invoking the shell will cause it to persist across the entire build and
to re-execute whenever the builder pauses at another location again.

This still requires using `exec` to launch the shell. Launching by frame
id is also removed since it no longer applies to this version.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-08-11 12:40:09 -05:00
Tõnis Tiigi c6d1e397a8
Merge pull request #3352 from crazy-max/compose-pull-nocache
bake: pull and no-cache support for compose
2025-08-11 11:55:06 +03:00
CrazyMax 7c434131a3
Merge pull request #3361 from crazy-max/compose-sanitize-ncontexts
compose: sanitize value of named contexts for target type
2025-08-08 16:07:37 +02:00
CrazyMax c365f015b1
compose: sanitize value of named contexts for target type
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-08 14:43:48 +02:00
CrazyMax 358312317a
Merge pull request #3357 from crazy-max/update-compose
dockerfile: update compose to 2.39.1
2025-08-08 09:34:45 +02:00
CrazyMax 8f57074638
Merge pull request #3358 from docker/dependabot/github_actions/actions/download-artifact-5
build(deps): bump actions/download-artifact from 4 to 5
2025-08-07 09:39:02 +02:00
CrazyMax d869a0ef65
Merge pull request #3359 from thaJeztah/minor_nits
remove some intermediate vars
2025-08-07 09:26:35 +02:00
Sebastiaan van Stijn 36b18a4c7a
remove some intermediate vars
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-08-06 23:00:24 +02:00
CrazyMax 24eccb0ba5
Merge pull request #3356 from crazy-max/update-test-deps
dockerfile: update docker to 28.3
2025-08-06 20:56:43 +02:00
CrazyMax 0279c49822
dockerfile: update docker to 28.3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-06 20:26:48 +02:00
dependabot[bot] 8e133a5bbb
build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 18:21:46 +00:00
CrazyMax dbad205dfe
dockerfile: update compose to 2.39.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-06 18:59:51 +02:00
CrazyMax 0bdd0aa624
Merge pull request #3355 from thaJeztah/bump_buildkit
vendor: github.com/moby/buildkit 9b91d20367db (master, v0.24-dev)
2025-08-06 18:58:23 +02:00
Sebastiaan van Stijn bbd18927d2
vendor: github.com/moby/buildkit 9b91d20367db (master, v0.24-dev)
full diff: 9b91d20367...955c2b2f7d

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-08-06 16:47:11 +02:00
CrazyMax 46463d93bf
Merge pull request #3346 from crazy-max/docs-call-override
docs: missing call as override field
2025-08-05 19:07:31 +02:00
Jonathan A. Sternberg 5c27294f27
Merge pull request #3327 from jsternberg/dap-fs-inspect
dap: filesystem inspection when paused on a digest
2025-08-05 11:06:16 -05:00
CrazyMax 2690ddd9a6
Merge pull request #3351 from crazy-max/bake-homedir
bake: add homedir func
2025-08-05 17:32:14 +02:00
CrazyMax 669fd1df2f
bake: pull and no-cache support for compose
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-05 11:49:37 +02:00
CrazyMax e4b49a8cd9
bake: add homedir func
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-08-05 10:21:55 +02:00
CrazyMax 5743e3a77a
Merge pull request #3347 from crazy-max/bake-frix-empty-dockerfile
bake: fix dockerfile default if empty
2025-08-05 09:49:52 +02:00
CrazyMax 9d56b30c42
bake: fix dockerfile default if empty
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-31 10:02:44 +02:00
CrazyMax 264c8f9f3d
docs: missing call as override field
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-30 13:01:23 +02:00
CrazyMax 1e50e8ddab
Merge pull request #3340 from thaJeztah/docker_28.3.3
vendor: github.com/docker/docker, docker/cli v28.3.3
2025-07-29 21:13:10 +02:00
Sebastiaan van Stijn 4b9a2b07fc
vendor: github.com/docker/docker, docker/cli v28.3.3
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-07-29 19:39:20 +02:00
Jonathan A. Sternberg 8e356c3454
dap: filesystem inspection when paused on a digest
Add a file explorer to the debugger that allows exploring the filesystem
of the current container. It will show directory contents, file
contents, and symlink destinations. It will also show the file mode
associated with a file.

The file explorer defaults to marking itself as an expensive operation
so the debugger doesn't automatically retrieve the information.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-28 09:52:30 -05:00
Tõnis Tiigi 4c791dce97
Merge pull request #3325 from jsternberg/dap-alternate-stepping
dap: refactor how step in/step out works
2025-07-24 15:08:52 -07:00
Tõnis Tiigi 4dd5dd5a6d
Merge pull request #3337 from glours/bump-compose-go-v2.8.1
bump compose-go to v2.8.1
2025-07-24 14:29:25 -07:00
Tõnis Tiigi f9be714a52
Merge pull request #3333 from crazy-max/compose-tests
compose integration tests
2025-07-24 14:22:55 -07:00
Guillaume Lours f388981ca4
bump compose-go to v2.8.1
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-07-24 19:09:58 +02:00
CrazyMax 03000cc590
compose integration tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-24 11:45:30 +02:00
Jonathan A. Sternberg 1e3c44709d
dap: refactor how step in/step out works
Change how breakpoints and stepping work. These now behave more like
you would expect from a debugger for another programming language.
Breakpoints trigger before the step has been invoked rather than after,
which means you can inspect the state before the command runs.

This has the advantage of being more intuitive for someone familiar with
other debuggers. The downside is that you can't run to just after a
certain step as easily as you could before. Instead, you would run to
that stage and then use next to advance to the step directly afterwards.

Step in and out also now have different behaviors. When a step has
multiple inputs, the inputs of non-zero index are considered like
"function calls". The most common cause of this is to use `COPY --from`
or a bind mount. Stepping into these will cause it to jump to the
beginning of the call chain for that branch. Using step out will exit
back to the location where step in was used.

This change also makes it so some steps may be invoked multiple times in
the callgraph if multiple steps depend on them. The reused steps will
still be cached, but you may end up stepping through more lines than the
previous implementation.

Stack traces now represent where these step in and step out areas
happen rather than the previous steps. This can help you know from where
a certain step is being used.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-23 17:10:40 -05:00
Jonathan A. Sternberg fea53ad1f8
dap: return error from evaluate command in repl context
In the repl context, we will now return the error instead of directly
printing it. We also suppress reporting errors from cobra. The logic
flow has also been changed to prevent returning errors from cobra unless
there was something related to the command line invocation so usage will
only be printed when a command was typed wrong and it will not show up
for every error.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-23 13:51:03 -05:00
CrazyMax a1ca46e85e
Merge pull request #3334 from glours/bump-compose-go-v2.8.0
bump compose-go to v2.8.0
2025-07-23 14:53:21 +02:00
Guillaume Lours 19304c0c54
bump compose-go to v2.8.0
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-07-23 13:16:22 +02:00
Tõnis Tiigi 7d0efdc50e
Merge pull request #3330 from crazy-max/history-build-name-override
history: use built-in build-arg to override the build name
2025-07-22 16:33:21 -07:00
Tõnis Tiigi f0d16f5914
Merge pull request #3329 from crazy-max/fix-compose-validation
bake: fix compose files validation
2025-07-22 08:09:08 -07:00
CrazyMax 7e11d3601e
history: use built-in build-arg to override the build name
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-22 14:34:36 +02:00
CrazyMax 98f04b1290
bake: fix compose files validation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-22 10:18:14 +02:00
CrazyMax ed67ab795b
driver: mount wsl lib folder for docker-container driver
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-21 16:47:20 +02:00
CrazyMax 3f4bf829d8
Merge pull request #3324 from thaJeztah/no_pkg_homedir
driver/kubernetes: remove uses of pkg/homedir
2025-07-21 15:35:10 +02:00
Sebastiaan van Stijn 3f725bf4d8
driver/kubernetes: remove uses of pkg/homedir
Create a local fork to keep the existing behavior.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-07-21 14:46:45 +02:00
CrazyMax dcd113370e
Merge pull request #3322 from ndeloof/validateComposeFile
do not assume input is a compose file on .env parsing error
2025-07-21 09:33:12 +02:00
Nicolas De Loof 08e74f8b62
do not assume input is a compose file on .env parsing error
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2025-07-21 09:02:10 +02:00
Tõnis Tiigi a894a678f1
Merge pull request #3308 from jsternberg/dap-docs
docs: add docs related to dap
2025-07-15 09:18:29 -07:00
Tõnis Tiigi 32a9b908cc
Merge pull request #3257 from jsternberg/dap-invoke
dap: support evaluate request to invoke a container
2025-07-15 09:17:37 -07:00
Jonathan A. Sternberg ac9050261e
docs: add docs related to dap
Adds some entry-level and developer-friendly docs for the debug adapter.
The one in `docs/dap.md` is meant for someone trying to use the debugger
while the one in `docs/reference/buildx_dap_build.md` is more focused on
documenting the command to be integrated in a debugger extension.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-15 10:58:01 -05:00
Jonathan A. Sternberg 3453f3b00a
dap: support evaluate request to invoke a container
Supports using the `evaluate` request in REPL mode to start a container
with the `exec` command. Presently doesn't support any arguments.

This improves the dap server so it is capable of sending reverse
requests and receiving the response. It also adds a hidden command
`dap attach` that attaches to the socket created by `evaluate`.

This requires the client to support `runInTerminal`.

Likely needs some additional work to make sure resources are cleaned up
cleanly especially when the build is unpaused or terminated, but it
should work as a decent base.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-15 10:45:25 -05:00
Tõnis Tiigi d5b9564c04
Merge pull request #3185 from crazy-max/bump-semver
vendor: github.com/Masterminds/semver/v3 v3.4.0
2025-07-15 08:43:08 -07:00
CrazyMax 6241ce056e
Merge pull request #3317 from crazy-max/docker-28.3.2
vendor: github.com/docker/cli and github.com/docker/docker v28.3.2
2025-07-15 17:02:38 +02:00
CrazyMax d418737c98
vendor: github.com/docker/cli and github.com/docker/docker v28.3.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-15 15:41:42 +02:00
CrazyMax 8c34ab52c0
vendor: github.com/Masterminds/semver/v3 v3.4.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-15 15:36:39 +02:00
CrazyMax 92a4783070
Merge pull request #3316 from tonistiigi/containerd-v2.1.3
vendor: update containerd to v2.1.3 to fix registry issues
2025-07-15 09:36:54 +02:00
Tonis Tiigi eed5fa80c2
vendor: update containerd to v2.1.3 to fix registry issues
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-07-14 23:44:31 -07:00
Tõnis Tiigi c027db2446
Merge pull request #3315 from jsternberg/dap-target-launch-config
dap: correctly set the target when provided by the launch config
2025-07-14 17:20:55 -07:00
Tõnis Tiigi 9a2207a692
Merge pull request #3313 from jsternberg/dap-variable-reference-fix
dap: do not modify variable references on variables that are zero
2025-07-14 13:33:22 -07:00
Tõnis Tiigi 60d96d3495
Merge pull request #3314 from jsternberg/dap-error-fail-on-next
dap: always return the error from execution if we paused on an error and resume
2025-07-14 13:32:31 -07:00
Jonathan A. Sternberg e7b8de2b0c
dap: correctly set the target when provided by the launch config
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-14 14:51:05 -05:00
Jonathan A. Sternberg e9d4b86161
dap: always return the error from execution if we paused on an error and resume
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-14 14:00:02 -05:00
Jonathan A. Sternberg 7925996c0c
dap: do not modify variable references on variables that are zero
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-14 13:57:26 -05:00
Tõnis Tiigi ebe66f6922
Merge pull request #3309 from jsternberg/dap-step-into
dap: alias step into to next and step out to continue
2025-07-14 09:55:27 -07:00
Tõnis Tiigi 272bcb43fe
Merge pull request #3307 from jsternberg/dap-variables
dap: implement variable references
2025-07-14 09:54:48 -07:00
Jonathan A. Sternberg 0a78659776
dap: alias step into to next and step out to continue
Step into and step out are required by the UI for DAP. We don't have a
way to implement these in a logical manner but they need to exist. We'll
discuss in further iterations how these might differ from next and
continue, but for now, we just need some implementation for the UI.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-14 11:00:33 -05:00
Jonathan A. Sternberg 1886e232c5
dap: implement variable references
Implement variable references to inspect the state of a stack frame.

Variable reference ids are composed of two sections: a thread mask in
the first 8 bits, and the remainder is an increasing number that gets
reset each time a thread is resumed. This allows the adapter to know
which thread to delegate the variables request to, while still keeping
the variable references confined to each thread. An int32 is used
because variable references need to be in the range of (0, 2^32).

At the moment, only the platform variables and some details of the exec
operations are exposed for an operation. These are labeled as
"arguments" to the stack frame.
stack frame.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-14 10:59:05 -05:00
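An illustrative encoding only — the exact bit widths here are assumptions, not buildx's actual layout: a small thread mask in the high bits of a positive int32 and a per-thread counter, reset on resume, in the low bits.

```go
package main

import "fmt"

const counterBits = 23 // assumed split; keeps the result a positive int32

// varRef packs a thread mask and a per-thread counter into one id, so
// the adapter can route a variables request to the right thread.
func varRef(threadID uint8, counter int32) int32 {
	return int32(threadID&0x7f)<<counterBits | (counter & (1<<counterBits - 1))
}

func main() {
	ref := varRef(3, 42)
	fmt.Printf("thread=%d counter=%d ref=%d\n",
		ref>>counterBits, ref&(1<<counterBits-1), ref)
}
```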
Tõnis Tiigi 266db938a7
Merge pull request #3302 from jsternberg/dap-breakpoints
dap: implement first pass at breakpoints
2025-07-14 08:58:12 -07:00
Jonathan A. Sternberg 0dddf0a7b8
dap: implement first pass at breakpoints
Implement the first iteration of breakpoints. When a breakpoint is set,
it starts unverified. When a thread begins evaluation, it tries to see
if a breakpoint corresponds to one of the parsed instructions and will
verify it.

Breakpoints work when continue is used.

At the moment, setting breakpoints while a thread is running doesn't
work. Breakpoints are rechecked each time execution is about to
restart.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-14 09:19:17 -05:00
Tõnis Tiigi f6d2c5ed7a
Merge pull request #3292 from crazy-max/compose-filtered-spec
bake: ignore unrelated fields when parsing compose files
2025-07-13 23:36:05 -07:00
Tõnis Tiigi 661a1593f6
Merge pull request #3289 from crazy-max/bake-stdlib-description
bake: set missing stdlib functions description
2025-07-11 17:27:15 -07:00
Tõnis Tiigi 9a07004534
Merge pull request #3290 from thaJeztah/fix_lint
fix some linting issues
2025-07-10 08:30:29 -07:00
Tõnis Tiigi 39ee904262
Merge pull request #3298 from crazy-max/cmd-fix-duplicated-usage
cmd: fix duplicated commands description
2025-07-09 17:44:46 -07:00
Tõnis Tiigi e3eb64e73d
Merge pull request #3294 from jsternberg/dap-stop-on-entry
dap: support stopOnEntry to configure behavior when starting the debugger
2025-07-09 13:05:41 -07:00
Tõnis Tiigi d902b19e61
Merge pull request #3305 from crazy-max/build-cache-docs
docs: update links and add missing type for build cache exporters
2025-07-09 10:25:41 -07:00
CrazyMax 84f188a035
Merge pull request #3304 from crazy-max/update-readme
chore: readme cleanup
2025-07-09 12:43:15 +02:00
CrazyMax 42dcd3c655
docs: update links and add missing type for build cache exporters
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-09 12:32:04 +02:00
CrazyMax 20e99f69cf
chore: readme cleanup
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-09 12:17:25 +02:00
Tõnis Tiigi 3b452204b0
Merge pull request #3300 from crazy-max/history-bootstrap
history: bootstrap builder
2025-07-08 16:22:09 -07:00
CrazyMax af0090e434
history: bootstrap builder
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-08 11:11:37 +02:00
Jonathan A. Sternberg a291698eaf
dap: support stopOnEntry to configure behavior when starting the debugger
This will configure the default behavior when beginning to evaluate a
build target. When `stopOnEntry` is used, it will default to `stepNext`.
Otherwise, `stepContinue` will be used.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-07 09:39:23 -05:00
CrazyMax b6cd86e068
cmd: fix duplicated commands description
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-07 11:58:45 +02:00
CrazyMax c124b14978
bake: ignore unrelated fields when parsing compose files
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-04 08:09:14 +02:00
Tõnis Tiigi 0c747263ef
Merge pull request #3279 from jsternberg/dap-step
dap: add stack traces with next and continue functionality
2025-07-03 18:13:13 -07:00
Jonathan A. Sternberg 3a3fc54e33
Merge pull request #3293 from jsternberg/dap-adapter-test-flaky
dap: increase timeout for receiving configuration done in adapter test
2025-07-03 09:47:42 -05:00
Jonathan A. Sternberg 4f2e23a9b8
dap: add stack traces with next and continue functionality
It is now possible to send next and continue as separate signals. When
executing a build, the debug adapter will divide the LLB graph into
regions. Each region corresponds to an uninterrupted chain of
instructions. It will also record which regions depend on which other
ones.

This determines the execution order and it also determines what the
stack traces look like.

When continue is used, we will attempt to evaluate the last leaf node
(the head). If we push next, we will determine which digest would be the
next one to be processed.

In the future, this will be used to also support breakpoints.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-03 09:18:55 -05:00
Jonathan A. Sternberg f03ed8cc9f
dap: increase timeout for receiving configuration done in adapter test
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-07-03 09:12:34 -05:00
CrazyMax 07e322663a
vendor: github.com/compose/compose-go/v2 891fce532a51
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-03 15:53:45 +02:00
CrazyMax a007368729
Merge pull request #3291 from crazy-max/chore-rm-comment
otelutil: remove unrelated comment
2025-07-03 14:33:49 +02:00
CrazyMax 84e03159a6
otelutil: remove unrelated comment
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-03 13:50:50 +02:00
Sebastiaan van Stijn 1205802f63
util/otelutil: change uses of deprecated instrumentation.Library
While the interface's signature uses the deprecated "Library" type,
and upstream documents it as "needed for backward compatibility";
0f7f1d0bad/sdk/trace/span.go (L62-L65)

The Library type is now an alias for Scope, so using the non-deprecated
type still satisfies the interface;
0f7f1d0bad/sdk/instrumentation/library.go (L6-L9)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-07-03 13:40:29 +02:00
Sebastiaan van Stijn fd87647da1
use "#nosec" instead of "nolint:gosec" to be more specific
The `#nosec` comment allows ignoring a specific rule; this prevents
other "gosec" linting failures from being silently ignored.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-07-03 13:31:06 +02:00
Sebastiaan van Stijn 5da2ff5990
bake/hclparser/gohcl: fix typo
Looks like we forked this code, including the typo. As we already modify
the code to add the `//nolint`, we may as well fix the typo itself instead.
dfa124f3c9/gohcl/decode_test.go (L417-L423)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-07-03 13:27:37 +02:00
CrazyMax ddec7679f6
bake: set missing stdlib functions description
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-03 08:17:42 +02:00
CrazyMax 55698253a5
Merge pull request #3287 from crazy-max/bake-stdlib-docs
docs: generate docs for bake standard library functions
2025-07-03 08:14:01 +02:00
CrazyMax 6997d61998
docs: generate docs for bake standard library functions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-02 23:57:02 +02:00
Tõnis Tiigi 758ea75f60
Merge pull request #3235 from jsternberg/dap-handler
dap: add debug adapter implementation
2025-07-02 12:58:07 -07:00
Tõnis Tiigi 87371a89c2
Merge pull request #3283 from crazy-max/buildkit-0.23.2
dockerfile: update buildkit to 0.23.2
2025-07-02 12:56:33 -07:00
CrazyMax 4c47a576bf
dockerfile: update buildkit to 0.23.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-02 10:47:24 +02:00
Tõnis Tiigi 9a5cbaca15
Merge pull request #3278 from crazy-max/fix-exit-after-defer
cmd: fix possible skipped defers for build and bake
2025-07-01 09:32:05 -07:00
Tõnis Tiigi 2b5a8c661f
Merge pull request #3280 from crazy-max/bake-target-pattern
bake: add pattern matching for targets input
2025-07-01 09:31:32 -07:00
CrazyMax 84f6e845dc
Merge pull request #3282 from glours/bump-compose-go-v2.7.1
bump compose-go to version v2.7.1
2025-07-01 12:58:19 +02:00
Guillaume Lours e99190b8b2
bump compose-go to version v2.7.1
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-07-01 12:26:25 +02:00
CrazyMax 7c22db3c34
bake: add pattern matching for targets input
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-01 10:53:02 +02:00
CrazyMax a2da10ad7e
Merge pull request #3276 from crazy-max/compose-dotenv-evaluate
compose: evaluate vars from current env in dotenv file
2025-07-01 10:13:54 +02:00
CrazyMax a711b8ff88
cmd: fix possible skipped defers for build and bake
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-01 09:45:27 +02:00
CrazyMax 68b49403a3
compose: evaluate vars from current env in dotenv file
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-07-01 09:43:17 +02:00
Tõnis Tiigi f6c730ec87
Merge pull request #3277 from crazy-max/bake-fix-envfiles
bake: fix BUILDX_BAKE_FILE env behavior
2025-06-30 20:27:29 -07:00
Tõnis Tiigi c45de54bad
Merge pull request #3275 from crazy-max/compose-dotenv-dir
compose: ignore dotenv that is a directory
2025-06-30 20:25:18 -07:00
Jonathan A. Sternberg 42599a7d49
dap: add debug adapter implementation
Adds a simple implementation of the debug adapter that supports the
very basics.

It supports the launch request, the configuration done request, the
creation of threads, stopping, resuming, and disconnecting from server.

It does not support custom breakpoints, stack traces, or variable
inspection yet. These are planned to be added in the future.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-30 10:51:20 -05:00
CrazyMax 40e3e5a27e
compose: ignore dotenv that is a directory
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-30 17:11:54 +02:00
CrazyMax f1db389f07
bake: fix BUILDX_BAKE_FILE env behavior
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-30 15:17:21 +02:00
Tõnis Tiigi 6deb9ff384
Merge pull request #3269 from crazy-max/alpine-3.22
dockerfile: update alpine to 3.22
2025-06-26 12:19:14 -07:00
Tõnis Tiigi 6e30826a67
Merge pull request #3270 from thaJeztah/bump_otel
vendor: align otel deps to v0.60.0 / v1.35.0
2025-06-26 11:00:45 -07:00
Tõnis Tiigi 0bed0b5653
Merge pull request #3242 from rrjjvv/new-bakefile-env-var
Allow bake files to be specified via environment variable
2025-06-25 08:54:04 -07:00
Sebastiaan van Stijn 5c3e534a79
vendor: align otel deps to v0.60.0 / v1.35.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-06-25 12:50:03 +02:00
CrazyMax 08e0099e0a
dockerfile: update alpine to 3.22
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-25 09:47:37 +02:00
CrazyMax b034cff8c2
Merge pull request #3268 from thaJeztah/bump_engine
vendor: github.com/docker/docker, github.com/docker/cli v28.3.0
2025-06-25 09:38:40 +02:00
CrazyMax fdb0ebc6cb
Merge pull request #3252 from thaJeztah/docker_28.3
Dockerfile: update to docker v28.3.0
2025-06-25 09:26:55 +02:00
Sebastiaan van Stijn 25a9ad6abd
vendor: github.com/docker/cli v28.3.0
full diff: https://github.com/docker/cli/compare/v28.2.2...v28.3.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-06-25 09:21:46 +02:00
Sebastiaan van Stijn a11757121a
vendor: github.com/docker/docker v28.3.0
full diff: https://github.com/docker/docker/compare/v28.2.2...v28.3.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-06-25 09:17:40 +02:00
Sebastiaan van Stijn 7a05ca4547
Dockerfile: update to docker v28.3.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-06-25 09:06:20 +02:00
Tõnis Tiigi 63bb3db985
Merge pull request #3264 from crazy-max/fix-args-history
history: fix required args for inspect attachment command
2025-06-24 11:13:05 -07:00
Tõnis Tiigi fba5d5e554
Merge pull request #3265 from crazy-max/update-govulncheck
dockerfile: update govulncheck to v1.1.4
2025-06-24 11:12:20 -07:00
CrazyMax 179aad79b5
history: fix required args for inspect attachment command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-24 16:06:54 +02:00
Roberto Villarreal d44ffb4bd4 Display source of bake definitions when read from environment
While it would make sense to add "from file" to complement "from env"
(in the common case of `--file` or using the default), it wouldn't
provide any real value.

A simpler solution would have been looking for the existence of the
variable at the point where printing happens. It felt wrong duplicating
the logic. Executing the same logic (if it was extracted) wouldn't be
as bad, but still not ideal.

A 'correct' solution would be to explicitly track the source of each
definition, which would be clearer and more future-proof. It didn't
seem like this feature warranted that amount of engineering (with no
known features that might make use of it).

This implementation seemed like a fair compromise; none of the functions
are exported, and all have only one caller.

I also considered prefixing environment values with `env://` so they
could be thought of (and processed like) `cmd://` values. I didn't
think it would be viewed as a good solution.

Co-authored-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-06-24 00:32:37 -06:00
Tõnis Tiigi 4c1e7b2119
Merge pull request #3258 from crazy-max/docs-fix-history-attachment
docs: fix history inspect attachment examples
2025-06-23 16:48:09 -07:00
CrazyMax 2d3a9ef229
dockerfile: update govulncheck to v1.1.4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-23 09:56:45 +02:00
CrazyMax ec45eb6ebc
docs: fix history inspect attachment examples
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-23 09:40:13 +02:00
CrazyMax e9b6a01aef
Merge pull request #3259 from crazy-max/build-metadata-provenance-02
build: fix buildx.build.provenance metadata
2025-06-23 09:23:37 +02:00
Tõnis Tiigi c48ccdee36
Merge pull request #3262 from crazy-max/buildkit-0.23.1
dockerfile: update buildkit to 0.23.1
2025-06-20 13:11:18 -07:00
CrazyMax 22f776f664
Merge pull request #3253 from samifruit514/master
driver kubernetes: allow working in a memory mount to speed things up
2025-06-20 16:13:19 +02:00
CrazyMax 8da4f0fe64
dockerfile: update buildkit to 0.23.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-20 11:49:38 +02:00
CrazyMax 2588b66fd9
build: fix buildx.build.provenance metadata
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-19 18:47:27 +02:00
CrazyMax 931e714919
vendor: github.com/moby/buildkit 9b91d20
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-19 18:47:27 +02:00
Samuel Archambault d5f914a263 driver kubernetes: allow working in a memory mount to speed things up
Signed-off-by: Samuel Archambault <samuel.archambault@getmaintainx.com>
2025-06-18 14:49:54 -04:00
Tõnis Tiigi d09eb752a5
Merge pull request #3256 from jsternberg/buildkit-bump
dockerfile: update buildkit to 0.23.0
2025-06-17 17:42:39 -07:00
Jonathan A. Sternberg 3c2decea38
dockerfile: update buildkit to 0.23.0
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-17 13:29:22 -05:00
Tõnis Tiigi 18041a5855
Merge pull request #3254 from crazy-max/buildkit-0.23.0
vendor: update buildkit v0.23.0
2025-06-17 08:32:22 -07:00
CrazyMax 96ebe9d9a9
vendor: update buildkit v0.23.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-17 15:31:32 +02:00
Tõnis Tiigi 08dd378b59
Merge pull request #3249 from tonistiigi/update-buildkit-v0.23.0-rc2
vendor: update buildkit v0.23.0-rc2
2025-06-16 14:31:08 -07:00
Tonis Tiigi cb29cd0efb
vendor: update buildkit v0.23.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-06-16 13:42:31 -07:00
Tõnis Tiigi 99f1c4b15c
Merge pull request #3245 from crazy-max/history-slsa-check
history: slsa v1 support
2025-06-16 10:53:26 -07:00
Tõnis Tiigi 77e4a88781
Merge pull request #3248 from jsternberg/printer-bake-wait-fix
progress: ensure bake waits for progress to finish printing on error conditions
2025-06-16 10:52:50 -07:00
Jonathan A. Sternberg 7660acf9c7
progress: ensure bake waits for progress to finish printing on error conditions
Some minor fixes to the printer and how bake invokes it. Bake previously
had a race condition that could result in the display not updating on an
error condition, but it was much rarer because the channel communication
happened much closer together. The refactor added a proxy for the status
channel, giving the race condition more opportunity to surface.

When bake exits with an error while reading the bake files, it doesn't
wait for the printer to finish, so the printer may update the display
after the error is printed. This adds an extra `Wait` in a defer to make
sure the printer has finished.

`Wait` has also been fixed to allow it to be called multiple times and
have the same behavior. Previously, it only waited for the done channel
once so only the first wait would block.

The `onclose` method is now called every time the display is paused or
stopped. That was the previous behavior and it's been restored here.

The display only gets refreshed if we aren't exiting. There's no point
in initializing another display if we're about to exit.

The metric writer attached to the printer was erroneously removed. It is
now assigned properly.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-16 12:24:04 -05:00
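The channel idiom behind the repeated-`Wait` fix, heavily reduced from the real printer: completion is signaled by closing a channel, and a receive on a closed channel never blocks, so every call observes it.

```go
package main

import "fmt"

type printer struct{ done chan struct{} }

func newPrinter() *printer { return &printer{done: make(chan struct{})} }

// finish signals completion exactly once by closing the channel.
func (p *printer) finish() { close(p.done) }

// Wait blocks until finish has run; later calls return immediately,
// unlike receiving a single value, which would only unblock one waiter.
func (p *printer) Wait() { <-p.done }

func main() {
	p := newPrinter()
	go p.finish()
	p.Wait()
	p.Wait() // safe to call again, e.g. from a defer
	fmt.Println("printer finished")
}
```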
Tõnis Tiigi 03737f11bc
Merge pull request #3244 from crazy-max/bake-extra-hosts-multi-ip
bake: multi ips support for extra hosts
2025-06-16 09:21:39 -07:00
CrazyMax 4a22b92775
history: slsa v1 support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-16 16:23:20 +02:00
CrazyMax ba782f195b
Merge pull request #3236 from docker/dependabot/github_actions/softprops/action-gh-release-2.3.2
build(deps): bump softprops/action-gh-release from 2.2.2 to 2.3.2
2025-06-16 13:38:29 +02:00
CrazyMax 989978a42b
bake: multi ips support for extra hosts
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-16 11:55:17 +02:00
Roberto Villarreal cb54ddb9fe Allow bake files to be specified via environment variable
The environment variable `BUILDX_BAKE_FILE` (and optional variable
`BUILDX_BAKE_FILE_SEPARATOR`) can be used to specify one or more bake
files (similar to `compose`). This is mutually exclusive with `--file`
(which takes precedence).

This is done very early to ensure the values are treated just like
`--file`, e.g., participate in telemetry.  This includes leaving
relative paths as-is, which deviates from `compose` (which makes them
absolute).

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-06-16 00:08:54 -06:00
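A hedged sketch of the lookup this adds; the fallback separator below (the OS path-list separator) is an assumption for the demo, not necessarily buildx's default.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// bakeFiles resolves the bake file list: --file wins outright, otherwise
// BUILDX_BAKE_FILE is split on BUILDX_BAKE_FILE_SEPARATOR.
func bakeFiles(cliFiles []string) []string {
	if len(cliFiles) > 0 {
		return cliFiles // mutually exclusive with the environment
	}
	raw := os.Getenv("BUILDX_BAKE_FILE")
	if raw == "" {
		return nil
	}
	sep := os.Getenv("BUILDX_BAKE_FILE_SEPARATOR")
	if sep == "" {
		sep = string(os.PathListSeparator) // assumed fallback
	}
	return strings.Split(raw, sep) // relative paths are kept as-is
}

func main() {
	os.Setenv("BUILDX_BAKE_FILE", "docker-bake.hcl,compose.yaml")
	os.Setenv("BUILDX_BAKE_FILE_SEPARATOR", ",")
	fmt.Println(bakeFiles(nil)) // [docker-bake.hcl compose.yaml]
}
```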
Tõnis Tiigi eb43f4c237
Merge pull request #3183 from crazy-max/modernize-fix
hack: modernize-fix bake target
2025-06-13 15:39:07 -07:00
Tõnis Tiigi 43e2f27cac
Merge pull request #3240 from jsternberg/remove-debugcmd-package
commands: remove debug package in commands
2025-06-13 11:46:37 -07:00
Jonathan A. Sternberg 7f5ff6b797
commands: remove debug package in commands
The package just causes the entire flow to be more complicated as build
has to pretend it doesn't know about debug options and the debugger has
to pretend it doesn't know about the build.

This abstraction has been difficult to work with when integrating a DAP
command into this same workflow, so I don't think it has much value.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-13 09:32:35 -05:00
Tõnis Tiigi 32e9bfcba8
Merge pull request #3237 from jsternberg/vendor-update
vendor: github.com/moby/buildkit v0.23.0-rc1
2025-06-11 14:47:39 -07:00
Jonathan A. Sternberg e1adeee898
vendor: github.com/moby/buildkit v0.23.0-rc1
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-11 16:29:31 -05:00
Tõnis Tiigi 1e969978aa
Merge pull request #3234 from crazy-max/bake-add-host
bake: extra-hosts support
2025-06-11 12:50:34 -07:00
dependabot[bot] 640541cefa
build(deps): bump softprops/action-gh-release from 2.2.2 to 2.3.2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.2.2 to 2.3.2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](da05d55257...72f2c25fcb)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-version: 2.3.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-11 18:13:37 +00:00
CrazyMax b514ed45fb
bake: extra-hosts support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-11 19:38:01 +02:00
Tõnis Tiigi 1b4bd20e6f
Merge pull request #3233 from tonistiigi/imagetools-registrytoken
imagetools: support registrytoken auth in docker config
2025-06-11 09:07:19 -07:00
Tonis Tiigi da426ecd3a
imagetools: support registrytoken auth in docker config
This is not supported by the Authorizer from containerd and
needs to be added manually. Build authentication happens through the
BuildKit session, which already supports this.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-06-10 23:20:08 -07:00
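A simplified illustration of what a pre-issued token changes (not the buildx code path): the `registrytoken` stored under an `auths` entry in the Docker config goes straight onto the request as a Bearer credential, with no token exchange.

```go
package main

import (
	"fmt"
	"net/http"
)

// authorize attaches a pre-issued registry token directly; containerd's
// Authorizer has no path for this, which is why it's wired in manually.
func authorize(req *http.Request, registryToken string) {
	if registryToken != "" {
		req.Header.Set("Authorization", "Bearer "+registryToken)
	}
}

func main() {
	req, err := http.NewRequest(http.MethodGet, "https://registry.example.com/v2/", nil)
	if err != nil {
		panic(err)
	}
	authorize(req, "example-token") // value from auths.<registry>.registrytoken
	fmt.Println(req.Header.Get("Authorization"))
}
```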
Tonis Tiigi 10618d4c73
imagetools: move auth function to separate file
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-06-10 22:04:56 -07:00
Tõnis Tiigi 52b5d0862f
Merge pull request #3224 from jsternberg/evaluate-handler
build: change build handler to evaluate instead of onresult
2025-06-10 11:07:31 -07:00
Tõnis Tiigi d1e22e5fc3
Merge pull request #3228 from tonistiigi/hack-link-gold
lint: fix linter error on arm64
2025-06-10 10:37:36 -07:00
Jonathan A. Sternberg 38cf84346c
build: change build handler to evaluate instead of onresult
This changes the build handler to customize the behavior of evaluate
rather than onresult and also simplifies the `ResultHandle`. The
`ResultHandle` is now only valid within the gateway callback and can be
used to start containers from the handler.

`Evaluate` now executes inside of the gateway callback rather than
having a separate implementation that executes or re-invokes the build.
This keeps the gateway callback session open until the debugger has
returned.

The `ErrReload` for monitor has now been moved into the `build` package
and been renamed to `ErrRestart`. This is because it restarts the build
so the name makes a bit more sense. The actual use of this functionality
is still tied to the monitor reload.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-10 11:48:41 -05:00
Jonathan A. Sternberg 34e59ca1bd
progress: fix progress writer pause and unpause to prevent panics
This changes the progress printer's pause and unpause implementation to
be reentrant to prevent race conditions and it also allows the status
updates to be buffered when the display is paused.

The previous implementation mixed the pause implementation with the
finish implementation and could cause a send on closed channel panic
because it could close the status channel before it had finished being
used. Now, the status channel is not closed.

When the display is enabled, the status channel will be forwarded to an
internal channel that is used to display the updates. When the display
is paused, the status channel will have the statuses buffered in memory
to be sent when the progress display is resumed.

The `Unpause` method has also been renamed to `Resume`.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-09 14:07:52 -05:00
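A reduced sketch of the buffering scheme (not the printer's actual types): the status source is never closed; a forwarder relays to the display while running, buffers while paused, and flushes on resume.

```go
package main

import "fmt"

type forwarder struct {
	paused bool
	buf    []string // statuses held while the display is paused
}

func (f *forwarder) send(display chan<- string, s string) {
	if f.paused {
		f.buf = append(f.buf, s) // hold updates until Resume
		return
	}
	display <- s
}

func (f *forwarder) Pause() { f.paused = true }

// Resume replays everything missed while paused, then forwards live again.
func (f *forwarder) Resume(display chan<- string) {
	f.paused = false
	for _, s := range f.buf {
		display <- s
	}
	f.buf = nil
}

func main() {
	display := make(chan string, 10)
	f := &forwarder{}
	f.send(display, "step 1")
	f.Pause()
	f.send(display, "step 2") // buffered
	f.Resume(display)         // flushes "step 2"
	close(display)
	for s := range display {
		fmt.Println(s)
	}
}
```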
Tonis Tiigi 2706e2f429
lint: fix linter error on arm64
Something has changed in golang or alpine that now requires the gold
linker by default. In the future this could be updated to clang/lld
instead, e.g. by just calling xx.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-06-09 10:59:23 -07:00
Tõnis Tiigi 02ab492cac
Merge pull request #3226 from ArthurFlag/ENGDOCS-2699-build-list-and-explain-accepted-schemes
docs: restructure examples for context
2025-06-06 11:54:37 -07:00
Tõnis Tiigi b8d8c7b1a6
Merge pull request #3227 from crazy-max/hcl-merge-tests
bake: hcl merged tests
2025-06-06 11:52:59 -07:00
ArthurFlag dc6ec35e1d
docs: restructure examples for context
Signed-off-by: ArthurFlag <arthur.flageul@docker.com>
2025-06-06 17:22:22 +02:00
CrazyMax 3f49ee5a90
bake: hcl merged tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-06-06 17:16:56 +02:00
Tõnis Tiigi c45185fde0
Merge pull request #3222 from jsternberg/controller-remove-final
controller: remove remaining parts of the controller
2025-06-05 10:18:40 -07:00
Jonathan A. Sternberg 1d7cda1232
controller: remove remaining parts of the controller
Removes all references to the controller and moves the remaining
sections of code to other packages.

Processes has been moved to monitor where it is used and the data
structs have been removed so buildflags is used directly. The controller
build function has been moved to the commands package.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-05 11:57:03 -05:00
Tõnis Tiigi fb916a960c
Merge pull request #3214 from tonistiigi/internal-codes
cmd: custom exit codes for internal, resource and canceled errors
2025-06-05 09:11:19 -07:00
Tõnis Tiigi 60b1eda2df
Merge pull request #3220 from jsternberg/monitor-driven-build
monitor: move remaining controller functionality into monitor
2025-06-04 13:50:24 -07:00
Jonathan A. Sternberg 8f2604b6b4
monitor: move remaining controller functionality into monitor
This creates a `Monitor` type that keeps the global state between
monitor invocations and allows the monitor to exist during the build so
it can be utilized for callbacks.

The result handler is now registered with the monitor during the build
and `Run` will use the result if it is present and the configuration
intends the monitor to be invoked with the given result.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-04 15:27:24 -05:00
Tõnis Tiigi bb5b5e37e8
Merge pull request #3219 from jsternberg/monitor-reload-refactor
monitor: refactor how reload works
2025-06-04 13:26:48 -07:00
Jonathan A. Sternberg 21ebf82c99
monitor: refactor how reload works
The build now happens in a loop and the monitor is run after every
build. The monitor can return `ErrReload` to signal to the main thread
that it should reload the build result.

This will be used in the future to move the monitor into a callback
rather than as a separate existence. It allows the monitor to not
control the build itself which now makes it possible to completely
remove the controller.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-04 15:06:31 -05:00
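A self-contained sketch of that loop; `runBuild` and `runMonitor` are stand-ins for the real entry points, and the stub monitor asks for exactly one reload.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// ErrReload mirrors the sentinel described above: the monitor returns
// it to ask the main loop for a fresh build result.
var ErrReload = errors.New("reload requested")

// stub build entry point standing in for the real one.
func runBuild(ctx context.Context) (string, error) { return "result", nil }

// stub monitor: asks for one reload, then exits cleanly.
func runMonitor(ctx context.Context, result string, reloads *int) error {
	if *reloads == 0 {
		*reloads++
		return ErrReload
	}
	return nil
}

func main() {
	ctx := context.Background()
	reloads := 0
	for {
		result, err := runBuild(ctx)
		if err != nil {
			panic(err)
		}
		if err := runMonitor(ctx, result, &reloads); errors.Is(err, ErrReload) {
			fmt.Println("monitor requested reload; rebuilding")
			continue
		} else if err != nil {
			panic(err)
		}
		fmt.Println("build finished:", result)
		return
	}
}
```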
Tõnis Tiigi d61853bbb3
Merge pull request #3213 from jsternberg/build-refactors
build: refactor some of the build functions into smaller utility functions
2025-06-03 14:43:26 -07:00
Jonathan A. Sternberg 65e46cc6af
commands: simplify passing stdin to the build when the monitor is configured
The monitor needs stdin to run and isn't compatible with loading a
context or dockerfile from stdin. We already disallow this combination
and, with the removal of the remote controller, there's no way to use
stdin during the build when invoke is configured.

This just removes the extra code to allow forwarding stdin to the build
when the monitor is configured to simplify that section of code.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-03 15:41:23 -05:00
Jonathan A. Sternberg 6a0f5610e3
controller: remove the controller interface
The controller interface is removed and the local controller is used for
only the initial build, invoke, and rebuilds.

Process control has been moved to the monitor.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-03 15:41:23 -05:00
Jonathan A. Sternberg e78aa98c92
build: refactor some of the build functions into smaller utility functions
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-03 15:41:22 -05:00
CrazyMax e6ff731323
Merge pull request #3216 from jsternberg/keep-storage-deprecation-notice
commands: update deprecation notice for keep-storage
2025-06-02 16:58:58 +02:00
Jonathan A. Sternberg 9bd1ba2f5c
commands: update deprecation notice for keep-storage
The `--keep-storage` flag was changed to `--reserved-space`. Before it was
changed to that name, it was changed to `--max-storage`. This flag never
made it into a release as the name was changed before release, but the
update to the flag in buildx forgot to update the deprecation notice.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-06-02 09:35:39 -05:00
CrazyMax f90170965a
Merge pull request #3207 from rrjjvv/show-var-types
Show types during variable list operation
2025-06-02 09:09:18 +02:00
Tonis Tiigi b3e37e899f
cmd: custom exit codes for internal, resource and canceled errors
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-30 11:12:24 -07:00
Tõnis Tiigi a04b7d8689
Merge pull request #3212 from thaJeztah/bump_engine
vendor: github.com/docker/docker, docker/cli v28.2.2
2025-05-30 10:49:29 -07:00
Tõnis Tiigi 52bf4bf7ce
Merge pull request #3210 from thaJeztah/dockerfile_bump_docker
Dockerfile: update to docker v28.2.2
2025-05-30 10:49:08 -07:00
Sebastiaan van Stijn 13031cc2ca
vendor: github.com/docker/docker, docker/cli v28.2.2
no changes in vendored file, just version update

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-30 17:30:09 +02:00
Sebastiaan van Stijn 46fae59e2e
Dockerfile: update to docker v28.2.2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-30 17:24:23 +02:00
Roberto Villarreal b40b2caf1a Show types during variable list operation
If a type was explicitly provided, it will be displayed in the variable
listing. Inferred type names are not displayed, as they likely would
not match the user's intent.

Previously only `string` and `bool` default values were displayed in the
listing. All default values, regardless of type, are now displayed.
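
A short sketch (the `--list=variables` flag spelling is an assumption here):

$ cat > docker-bake.hcl <<'EOF'
variable "PLATFORMS" {
  type    = list(string)    # explicit type: now shown in the listing
  default = ["linux/amd64"] # defaults are now shown for all types
}
EOF
$ docker buildx bake --list=variables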

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-29 17:36:46 -06:00
Tõnis Tiigi 1436f93aa1
Merge pull request #3194 from thaJeztah/bump_engine
vendor: github.com/docker/docker, github.com/docker/cli v28.2.1
2025-05-29 16:05:26 -07:00
Sebastiaan van Stijn 99d82e6cea
vendor: github.com/docker/cli v28.2.1
full diff: https://github.com/docker/cli/compare/v28.1.1...v28.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-29 09:36:54 +02:00
Sebastiaan van Stijn bc620fcc71
vendor: github.com/docker/docker v28.2.1
full diff: https://github.com/docker/docker/compare/v28.1.1...v28.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-05-29 09:34:27 +02:00
CrazyMax e3c6618db2
Merge pull request #3201 from jsternberg/remove-generated-files
hack: remove code generation related to generated files
2025-05-23 11:14:45 +02:00
Tõnis Tiigi 542bda49f2
Merge pull request #3188 from crazy-max/buildkit-0.22
dockerfile: update buildkit to 0.22.0
2025-05-22 15:33:45 -07:00
Jonathan A. Sternberg 781a3f117a
hack: remove code generation related to generated files
With the removal of the protobuf for the controller, there are no longer
any generated files. Remove the makefile targets and the associated
dockerfiles and bake targets.

This wasn't being included in CI because it wasn't part of the
`validate` target.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-05-22 14:59:42 -05:00
CrazyMax 614cc880dd
dockerfile: update buildkit to 0.22.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-21 16:55:34 +02:00
CrazyMax dfad6e0b1f
Merge pull request #3189 from rrjjvv/var-typing-docs
Add variable typing to reference docs
2025-05-21 16:34:59 +02:00
CrazyMax 776dbd4086
Merge pull request #3198 from rrjjvv/var-typing-no-value-fix
Consider typed, value-less variables to have `null` value
2025-05-21 16:34:42 +02:00
Tõnis Tiigi 75f1d5e26b
Merge pull request #3199 from crazy-max/buildkit-0.22.0
vendor: github.com/moby/buildkit v0.22.0
2025-05-21 07:26:25 -07:00
CrazyMax 291c353575
bake: TestEmptyVariable
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-21 16:15:30 +02:00
CrazyMax a11bb4985c
vendor: github.com/moby/buildkit v0.22.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-21 14:57:25 +02:00
Roberto Villarreal cfeca919a9 Add variable typing to reference docs
This documents the variable typing introduced in #3167.

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-20 13:49:27 -06:00
Roberto Villarreal 3c0f5c5c21 Consider typed, value-less variables to have `null` value
A variable with a type but no default value or override resulted in an
empty string. This matches the legacy behavior of untyped variables,
but does not make sense when using types (an empty string is itself a
type violation for everything except `string`). All variables defined
with a type but with no value are now a typed `null`.

A variable explicitly typed `any` was previously treated as if the
typing was omitted; with no defined value or override, that resulted in
an empty string. The `any` type is now distinguished from an omitted
type; these variables, with no default or override, are also `null`.

In other respects, the behavior of `any` is unchanged and largely
behaves as if the type was omitted. It's not clear whether `any` should
be supported at all, let alone how it should behave, so the related
tests were removed and it is treated as undefined behavior.
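
A minimal sketch of the new `null` semantics (variable and target names are hypothetical):

$ cat > docker-bake.hcl <<'EOF'
variable "PORTS" {
  type = list(number)   # typed, but no default and no override
}
target "app" {
  args = {
    HAS_PORTS = PORTS == null ? "no" : "yes"   # evaluates to "no" instead of failing
  }
}
EOF
$ docker buildx bake --print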

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-20 13:09:13 -06:00
CrazyMax ea2b7020a4
Merge pull request #3193 from crazy-max/buildkit-0.22.0-rc2
vendor: github.com/moby/buildkit v0.22.0-rc2
2025-05-19 17:29:11 +02:00
CrazyMax 5ba7d7eb4f
vendor: github.com/moby/buildkit v0.22.0-rc2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-19 17:14:59 +02:00
CrazyMax 95ac2b4d09
Merge pull request #3192 from crazy-max/update-cli-docs-tool
vendor: github.com/docker/cli-docs-tool v0.10.0
2025-05-19 16:23:50 +02:00
CrazyMax 934cca3ab1
vendor: github.com/docker/cli-docs-tool v0.10.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-19 15:54:56 +02:00
CrazyMax 6e562e9ede
Merge pull request #3191 from glours/bump-compose-go-v2.6.3
bump compose-go to v2.6.3
2025-05-19 15:05:34 +02:00
Guillaume Lours 51b8646c44
bump compose-go to v2.6.3
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-05-19 14:49:51 +02:00
CrazyMax 57a1c97c9d
Merge pull request #3187 from crazy-max/buildkit-0.22.0-rc1
vendor: github.com/moby/buildkit v0.22.0-rc1
2025-05-14 20:29:12 +02:00
CrazyMax 7a54b6ee7e
vendor: github.com/moby/buildkit v0.22.0-rc1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-14 20:10:27 +02:00
CrazyMax 2e3108975d
Merge pull request #3186 from crazy-max/fix-readme
update readme
2025-05-14 13:40:07 +02:00
CrazyMax cd48c516e2
update readme
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-14 13:21:45 +02:00
Tõnis Tiigi 2149f03225
Merge pull request #3184 from tonistiigi/lint-merge-conflict-fix
lint: fix linter after merge conflict
2025-05-13 16:52:49 -07:00
Tonis Tiigi f41d5072fd
lint: fix linter after merge conflict
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-13 16:19:33 -07:00
Tõnis Tiigi 06a1a6344a
Merge pull request #3180 from crazy-max/dockerfile-update
dockerfile: update docker to 28.1.1 and buildkit to 0.21.1
2025-05-13 15:09:27 -07:00
Tõnis Tiigi 4feb05b0bf
Merge pull request #3179 from tonistiigi/ls-format-json-current
ls: make sure current builder is available in JSON output
2025-05-13 14:27:36 -07:00
Tõnis Tiigi 277548e91b
Merge pull request #3152 from crazy-max/history-export-finalize
history: make sure build record is finalized before exporting
2025-05-13 14:20:04 -07:00
Tõnis Tiigi 3f0aec1b3e
Merge pull request #3182 from crazy-max/go-1.24
update to go 1.24
2025-05-13 12:14:37 -07:00
CrazyMax 1383aa30c1
lint: modernize fix
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 20:44:57 +02:00
CrazyMax 09b824b9dc
update to go 1.24
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 19:27:03 +02:00
CrazyMax c1209acb27
hack: modernize-fix bake target
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 18:22:24 +02:00
CrazyMax 68ce10c4d9
tests: history cmds
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 17:17:47 +02:00
CrazyMax 78353f4e8e
history: make sure build record is finalized before exporting
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 17:17:47 +02:00
CrazyMax 03f9877429
Merge pull request #3181 from crazy-max/golangci-lint-v2
update golangci-lint to v2.1.5
2025-05-13 17:17:17 +02:00
CrazyMax b606e2f6bb
update golangci-lint to v2.1.5
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 16:54:43 +02:00
CrazyMax 874bb14de9
hack: golangci build from source support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 16:20:35 +02:00
CrazyMax a9ab809d15
Merge pull request #3138 from crazy-max/history-copy
history: copy update
2025-05-13 13:06:05 +02:00
CrazyMax 72fde4c53a
history: copy update
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 12:20:00 +02:00
CrazyMax df8b997588
dockerfile: update buildkit to 0.21.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 12:01:12 +02:00
CrazyMax f92c679e14
dockerfile: update docker to 28.1.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-13 12:00:45 +02:00
Tonis Tiigi a3180cbf3d
ls: make sure current builder is available in JSON output
lsBuilder has a field called Current that was getting lost because the
embedded struct implements a custom MarshalJSON method.
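
The user-visible check (the JSON field mirrors lsBuilder.Current):

$ docker buildx ls --format=json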

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-12 22:30:44 -07:00
Tõnis Tiigi c398e2a224
Merge pull request #3177 from crazy-max/docs-fix-hcl-syntax
docs: remove commas in bake hcl object blocks
2025-05-12 10:47:11 -07:00
Tõnis Tiigi 865ad2b8d5
Merge pull request #3167 from rrjjvv/variable-typing
Allow variables to be explicitly typed (and enforced)
2025-05-12 10:45:10 -07:00
CrazyMax 729d58152c
docs: remove commas in bake hcl object blocks
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-05-12 10:30:45 +02:00
CrazyMax 9998ef7045
Merge pull request #3171 from glours/bump-compose-go-v2.6.2
bump compose-go to v2.6.2
2025-05-12 09:15:42 +02:00
CrazyMax 7e960152a1
Merge pull request #3168 from tonistiigi/bake-call-empty
bake: fix nil deference on empty call definition
2025-05-12 09:13:05 +02:00
Roberto Villarreal 56d39e619d Skip case-sensitive test on Windows
Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Roberto Villarreal 65aea3028f Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Roberto Villarreal 956fc0c9eb Use unique environment variables to separate JSON from default parsing
The primary intent is to make JSON parsing explicitly opt-in rather
than using heuristics to determine intent.

With some exceptions, given bake variable `VAR`, an environment
variable `VAR_JSON` must be used to provide JSON content. The value in
`VAR_JSON` will be ignored when:
* a bake built-in of that same name exists
* a user-provided variable of that same name exists
* typing (attribute `type`) is not present

The first is unlikely to happen, as built-ins will likely start with
`BUILDX_BAKE_`, an improbable prefix for end users. The second may be a
real scenario, where users have `VAR_JSON` dedicated to accepting a
string with JSON content and decoding it via an HCL function. This will
continue to work as-is, but can be simplified by removing the variable
from their bake file (`VAR_JSON`) and applying typing (to `VAR`).
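
A short sketch of the opt-in convention described above (names hypothetical):

$ cat > docker-bake.hcl <<'EOF'
variable "PORTS" {
  type = list(number)   # typing present, so PORTS_JSON is honored
}
EOF
$ PORTS_JSON='[8080, 8081]' docker buildx bake --print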

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Roberto Villarreal 1f56984626 Implement CSV-based overrides for list-like variables
Though CSV is favored for 'simple' lists, a JSON value will be used if
it parses without error. This assumes it is extremely unlikely that
something that parses as JSON was intended to be parsed as CSV, e.g.
`["a"` and `"b"]` as opposed to `a` and `b`. If parsing/conversion
fails, the value is treated as CSV.

Since the CSV approach required processing of each element, code was
refactored to reuse the same logic used for individual non-typed
variables.
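
For example, a CSV override of a list-typed variable (names hypothetical):

$ cat > docker-bake.hcl <<'EOF'
variable "TAGS" {
  type = list(string)
}
target "app" {
  tags = TAGS
}
EOF
$ TAGS='myorg/app:1.0,myorg/app:latest' docker buildx bake --print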

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Roberto Villarreal 5b8a3b3728 Allow variables to be explicitly typed (and enforced)
This allows variables to have explicit types, similar to Terraform
variables. It uses HCL's `typeexpr` extension for the specification.
For conversion of overrides to complex types (when explicit typing is
provided), HCL's native JSON-based unmarshalling is used.

Typing is independent of any default, but if a default is provided, it
will be validated against the type. Similarly, if an override is
provided, it will be converted to that type.

When typing is not provided, the previous behavior applies: the value
is passed through as a string when there is no default, converted to a
primitive when the default was primitive, and rejected otherwise
(complex types).

For complex types, the happy path is lists of primitives, but in theory
any complex/composite type can be used provided it is expressed
correctly in JSON. In the interest of simplicity and correctness, there
are no shortcuts for lists. There *is* a shortcut for strings, since
users don't provide JSON for untyped variables and requiring it would
be unintuitive.
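
A sketch of override conversion under an explicit type (names hypothetical):

$ cat > docker-bake.hcl <<'EOF'
variable "REPLICAS" {
  type    = number
  default = 2   # defaults are validated against the declared type
}
EOF
$ REPLICAS=3 docker buildx bake --print   # the override "3" is converted to the number 3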

Signed-off-by: Roberto Villarreal <rrjjvv@yahoo.com>
2025-05-09 18:36:11 -06:00
Guillaume Lours acdf95fe75
bump compose-go to v2.6.2
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-05-07 15:29:18 +02:00
Tõnis Tiigi 9e17bc7a4c
Merge pull request #3127 from sarahsanders-docker/docs-buildx-history
docs: add descriptions and examples for buildx history commands
2025-05-05 15:00:01 -07:00
Tonis Tiigi e1e8f5c68d
docs: updated reference docs generation
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-05 14:32:03 -07:00
sarahsanders-docker 6ed39b2618
fix examples and headings
Signed-off-by: sarahsanders-docker <sarah.sanders@docker.com>
2025-05-05 14:32:03 -07:00
sarahsanders-docker 03019049e8
addressed feedback
Signed-off-by: sarahsanders-docker <sarah.sanders@docker.com>
2025-05-05 14:32:03 -07:00
sarahsanders-docker 23ce21c341
feedback + updated examples + added links for h3 headings
Signed-off-by: sarahsanders-docker <sarah.sanders@docker.com>
2025-05-05 14:32:03 -07:00
sarahsanders-docker 4dac5295a1
Add descriptions and examples for buildx history commands
Signed-off-by: sarahsanders-docker <sarah.sanders@docker.com>
2025-05-05 14:32:03 -07:00
Tonis Tiigi b00dd42037
bake: fix nil deference on empty call definition
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-05 11:07:54 -07:00
Tõnis Tiigi 9a48aca461
Merge pull request #3136 from ctalledo/reworked-fix-for-moby-45458
Output correct image ID when using Docker with the containerd-snapshotter
2025-05-01 16:59:42 -07:00
Cesar Talledo 679407862f Output correct image ID when using Docker with the containerd-snapshotter.
Prior to this change, the following command emits the wrong image ID when buildx
uses the "docker-container" driver and Docker is configured with the
containerd-snapshotter.

$ docker buildx build --load --iidfile=img.txt

$ docker run --rm "$(cat img.txt)" echo hello
docker: Error response from daemon: No such image: sha256:4ac37e81e00f242010e42f3251094e47de6100e01d25e9bd0feac6b8906976df.
See 'docker run --help'.

The problem is that buildx was outputting the incorrect image ID in this
scenario (it was outputting the container image config digest instead of
the container image digest used by the containerd-snapshotter).

This commit fixes that. See https://github.com/moby/moby/issues/45458.

Signed-off-by: Cesar Talledo <cesar.talledo@docker.com>
2025-05-01 16:33:22 -07:00
Tõnis Tiigi 674cfff1a4
Merge pull request #3165 from tonistiigi/fix-openbsd-ci
attempt openbsd fix
2025-05-01 11:29:09 -07:00
Tonis Tiigi 19a241f4ed
attempt openbsd fix
OpenBSD 7.5 packages seem to have been removed from the main mirrors.
Couldn't find a popular 7.6/7.7 image on Vagrant Cloud.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-05-01 11:02:38 -07:00
Tõnis Tiigi 7da31076ae
Merge pull request #3164 from jsternberg/controller-errdefs-proto-removal
controller: remove controller/errdefs protobuf files
2025-05-01 10:40:41 -07:00
Jonathan A. Sternberg 384f0565f5
controller: remove controller/errdefs protobuf files
Remove the protobuf files associated with controller/errdefs.

This doesn't completely remove the type, as the monitor still uses it
as a signal to start.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-05-01 12:14:36 -05:00
Tõnis Tiigi 6df3a09284
Merge pull request #3126 from jsternberg/controller-removal
controller: remove controller grpc service
2025-04-30 18:20:51 -07:00
Tõnis Tiigi e7be640d9b
Merge pull request #3155 from crazy-max/fix-bin-image
ci: fix bin-image job
2025-04-30 17:53:20 -07:00
Jonathan A. Sternberg 2f1be25b8f
controller: remove controller grpc service
Remove the controller grpc service along with associated code related to
sessions or remote controllers.

Data types that are still used with complicated dependency chains have
been kept in the same package for a future refactor.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-04-30 13:46:58 -05:00
CrazyMax a40edbb47b
ci: fix bin-image job
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-24 14:40:43 +02:00
Tõnis Tiigi 2eaea647d8
Merge pull request #3146 from fiam/alberto/propagate-otel-trace
chore(dockerutil): propagate OTEL context to Docker daemon
2025-04-23 09:45:17 -07:00
Alberto Garcia Hierro f3a3d9c26b
chore(dockerutil): propagate OTEL context to Docker daemon
This allows correlating operations triggered by a build (e.g. a
client-side pull) with the build that generated them.

Signed-off-by: Alberto Garcia Hierro <damaso.hierro@docker.com>
2025-04-22 20:29:30 +01:00
Tõnis Tiigi 9ba3f77219
Merge pull request #3143 from crazy-max/ci-fix-vagrant
ci: fix vagrant build
2025-04-22 12:17:29 -07:00
Tõnis Tiigi 2799ed6dd8
Merge pull request #3142 from thaJeztah/bump_docker_28.1.1
vendor: github.com/docker/docker, docker/cli v28.1.1, containerd v2.0.5
2025-04-22 11:37:27 -07:00
CrazyMax 719a41a4c3
Merge pull request #3135 from docker/dependabot/github_actions/softprops/action-gh-release-2.2.2
build(deps): bump softprops/action-gh-release from 2.2.1 to 2.2.2
2025-04-22 14:03:03 +02:00
CrazyMax a9807be458
ci: fix vagrant build
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-22 13:47:03 +02:00
CrazyMax 7a7be2ffa1
Merge pull request #3141 from ndeloof/path.IsAbs
use filepath.IsAbs to support windows paths
2025-04-22 13:42:30 +02:00
Sebastiaan van Stijn ab533b0cb4
vendor: github.com/docker/cli v28.1.1
no changes in vendored code

diff:  https://github.com/docker/cli/compare/v28.1.0...v28.1.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-22 12:30:00 +02:00
Sebastiaan van Stijn 0855cab1bd
vendor: github.com/docker/docker v28.1.1
diff:  https://github.com/docker/docker/compare/v28.1.0...v28.1.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-22 12:29:21 +02:00
Sebastiaan van Stijn 735555ff7b
vendor: github.com/containerd/containerd v2.0.5
full diff: https://github.com/containerd/containerd/compare/v2.0.4...v2.0.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-22 12:28:37 +02:00
Sebastiaan van Stijn 67ccbd06f6
vendor: golang.org/x/oauth2 v0.29.0
notable changes

- fixes CVE-2025-22868
- oauth2.go: use a more straightforward return value
- oauth2: Deep copy context client in NewClient
- jws: improve fix for CVE-2025-22868

full diff: https://github.com/golang/oauth2/compare/v0.23.0...v0.29.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-22 12:26:36 +02:00
Nicolas De Loof c370f90b73
use filepath.IsAbs to support windows paths
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2025-04-22 11:37:53 +02:00
CrazyMax 9730a20f6b
Merge pull request #3133 from tonistiigi/build-defers-fix
build: make sure defers always run in the end of the build
2025-04-22 09:39:05 +02:00
dependabot[bot] 2e93ac32bc
build(deps): bump softprops/action-gh-release from 2.2.1 to 2.2.2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.2.1 to 2.2.2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](c95fe14893...da05d55257)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-version: 2.2.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-21 18:55:17 +00:00
Tonis Tiigi 19c22136b4
build: make sure defers always run in the end of the build
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-21 09:28:44 -07:00
Tõnis Tiigi bad5063577
Merge pull request #3107 from thaJeztah/bump_engine
vendor: github.com/docker/docker, github.com/docker/cli v28.1.0
2025-04-18 17:05:35 -07:00
Sebastiaan van Stijn 286c018f84
vendor: github.com/docker/cli v28.1.0
full diff: https://github.com/docker/cli/compare/v28.0.4...v28.1.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-18 15:54:36 -07:00
Sebastiaan van Stijn ac970c03e7
vendor: github.com/docker/docker v28.1.0
full diff: https://github.com/docker/docker/compare/v28.0.4...v28.1.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-18 15:54:08 -07:00
Sebastiaan van Stijn 5398c33937
vendor: github.com/mattn/go-runewidth v0.0.16
adds support for Unicode 15.1.0

full diff: https://github.com/mattn/go-runewidth/compare/v0.0.15...v0.0.16

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-18 15:54:06 -07:00
Tõnis Tiigi 1365652a74
Merge pull request #3113 from crazy-max/update-hcl
vendor: update hcl dependencies
2025-04-18 15:51:53 -07:00
Tõnis Tiigi a4f0a21468
Merge pull request #3125 from thaJeztah/dockerfile_update_engine
Dockerfile: update to docker v28.1.0
2025-04-18 15:51:03 -07:00
CrazyMax d55616b22c
Merge pull request #3130 from crazy-max/fix-pr-assign
ci: update pr-assign-author
2025-04-18 13:52:27 +02:00
CrazyMax 113606a24c
ci: update pr-assign-author
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-18 13:38:59 +02:00
CrazyMax cd38da0244
Merge pull request #3123 from thaJeztah/update_spdy
vendor: github.com/moby/spdystream v0.5.0 (indirect)
2025-04-17 16:25:49 +02:00
Sebastiaan van Stijn cc6547c51d
Dockerfile: update to docker v28.1.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-17 16:17:55 +02:00
Sebastiaan van Stijn 26f2e002c6
vendor: github.com/moby/spdystream v0.5.0 (indirect)
This is an indirect dependency, but I recalled it fixed some leaking
goroutines, so it may be worth considering updating.

full diff: https://github.com/moby/spdystream/compare/v0.4.0...v0.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-17 14:26:25 +02:00
CrazyMax 372feb38ff
Merge pull request #3120 from crazy-max/ci-pr-assign
ci: assign author on pull request
2025-04-16 15:39:06 +02:00
CrazyMax b08d576ec0
Merge pull request #3119 from thaJeztah/bump_archive
vendor: github.com/moby/go-archive v0.1.0
2025-04-16 15:14:07 +02:00
CrazyMax 0034cdbffc
ci: assign author on pull request
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-16 15:06:01 +02:00
CrazyMax a9666e7df1
Merge pull request #3118 from crazy-max/dockerfile-buildkit-0.21.0
dockerfile: update buildkit to 0.21.0
2025-04-16 14:37:09 +02:00
CrazyMax b7e77af256
dockerfile: update buildkit to 0.21.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-16 14:14:35 +02:00
CrazyMax d72ff8f88c
Merge pull request #2842 from thaJeztah/test_registry_v3
Dockerfile: update to registry v3.0.0
2025-04-16 14:14:00 +02:00
Sebastiaan van Stijn d75c650792
vendor: github.com/moby/go-archive v0.1.0
full diff: https://github.com/moby/go-archive/compare/21f3f3385ab7...v0.1.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-16 13:49:57 +02:00
Tõnis Tiigi 8c74109330
Merge pull request #3115 from crazy-max/buildkit-v0.21.0
vendor: github.com/moby/buildkit v0.21.0
2025-04-15 09:30:51 -07:00
CrazyMax 9f102b5c34
vendor: github.com/moby/buildkit v0.21.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-15 16:54:08 +02:00
CrazyMax b4b2dc9664
Merge pull request #3114 from tonistiigi/bake-variadic-fix
bake: fix variadic_params inconsistency for user functions
2025-04-15 15:48:49 +02:00
Tonis Tiigi 2e81e301ae
bake: fix variadic_params inconsistency for user functions
There was an inconsistency between the variables used for function
definitions in HCL and JSON format. Updated JSON to match HCL, fixed
the documentation, and removed the unused code from the userfunc pkg
(based on HCL upstream) to avoid confusion.

Theoretically we could add some temporary backwards compatibility for
the JSON format, but it is unlikely that someone both uses the JSON
format for this and has defined variadic parameters.
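
A sketch of the JSON spelling now matching HCL's `variadic_params` (the function itself is illustrative):

$ cat > docker-bake.json <<'EOF'
{
  "function": {
    "tag": {
      "params": ["repo"],
      "variadic_params": "versions",
      "result": "${join(\",\", versions)}"
    }
  }
}
EOF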

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-14 10:56:20 -07:00
CrazyMax fb4417e14d
vendor: update hcl dependencies
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-14 12:55:59 +02:00
CrazyMax eb74b483bd
Merge pull request #3110 from crazy-max/buildkit-0.21.0-rc2
vendor: github.com/moby/buildkit v0.21.0-rc2
2025-04-11 19:44:05 +02:00
CrazyMax db194abdc8
vendor: github.com/moby/buildkit v0.21.0-rc2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-11 19:28:20 +02:00
CrazyMax 86eb3be1c4
Merge pull request #3103 from thaJeztah/use_atomicwriter
migrate to use github.com/moby/sys/atomicwriter
2025-04-11 12:05:00 +02:00
CrazyMax a05a166f81
Merge pull request #3106 from crazy-max/inline-result
build: print frontend inline message
2025-04-11 12:04:47 +02:00
CrazyMax cfc9d3a8c9
Merge pull request #3105 from glours/bump-compose-go-v2.6.0
bump compose-go to version v2.6.0
2025-04-11 10:57:53 +02:00
CrazyMax 5bac0b1197
build: print frontend inline message
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-11 09:45:25 +02:00
Guillaume Lours 0b4e624aaa
bump compose-go to version v2.6.0
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-04-10 18:04:00 +02:00
Sebastiaan van Stijn b7b5a3a1cc
migrate to use github.com/moby/sys/atomicwriter
The github.com/docker/docker/pkg/atomicwriter package was moved
to a separate module.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-10 12:13:01 +02:00
CrazyMax f8de3c3bdc
Merge pull request #3095 from thaJeztah/migrate_archive
migrate to github.com/moby/go-archive module
2025-04-10 10:31:55 +02:00
Sebastiaan van Stijn fa0c3e3786
migrate to github.com/moby/go-archive module
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-10 09:16:43 +02:00
Tõnis Tiigi d69301d57b
Merge pull request #3101 from crazy-max/bake-validation-fixes
bake: check condition and error_message are set during validation
2025-04-09 08:59:43 -07:00
Jonathan A. Sternberg ee77cdb175
Merge pull request #3102 from jsternberg/buildkit-rc1
vendor: github.com/moby/buildkit v0.21.0-rc1
2025-04-09 10:56:27 -05:00
Jonathan A. Sternberg 8fb1157b5f
vendor: github.com/moby/buildkit v0.21.0-rc1
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-04-09 10:28:03 -05:00
CrazyMax a34cdff84e
Merge pull request #3098 from tonistiigi/vendor-jaeger-ui-v1.68
vendor: update jaeger-ui-rest to v1.68
2025-04-09 17:11:50 +02:00
CrazyMax 77139daa4b
bake: check condition and error_message are set during validation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-09 16:04:18 +02:00
Tonis Tiigi 10e3892a63
vendor: update jaeger-ui-rest to v1.68
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-08 10:11:54 -07:00
Tõnis Tiigi d80ece5bb3
Merge pull request #3091 from tonistiigi/history-filters
history: add filters to ls
2025-04-08 09:19:05 -07:00
Tõnis Tiigi 1f44971fc9
Merge pull request #3097 from crazy-max/bake-esc-interpolation
bake: keep escaped interpolation in print output
2025-04-08 09:17:32 -07:00
CrazyMax a91db7ccc9
bake: keep escaped interpolation in print output
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-08 17:30:24 +02:00
Sebastiaan van Stijn df6d36af35
Dockerfile: update to registry v3.0.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-04-08 14:20:58 +02:00
CrazyMax 98c3abb756
Merge pull request #3092 from tonistiigi/testing-import-cleanup
avoid import to testing helpers outside of tests
2025-04-08 12:55:48 +02:00
CrazyMax 3b824a0e39
Merge pull request #3087 from crazy-max/fix-standalone-envs
cmd: support cli environment variables in standalone mode
2025-04-08 12:54:32 +02:00
CrazyMax b0156cd631
Merge pull request #3090 from tonistiigi/platforms-compose
bake: fix platforms field in compose yaml
2025-04-08 12:51:43 +02:00
Tonis Tiigi 29614f9734
avoid import to testing helpers outside of tests
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-08 00:38:31 -07:00
Tonis Tiigi f1b895196c
history: add local filters for older buildkit versions
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-07 22:20:32 -07:00
Tonis Tiigi 900502b139
history: add filters to ls
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-07 21:32:43 -07:00
Tonis Tiigi 49bd7e4edc
bake: fix platforms field in compose yaml
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-04-07 14:14:40 -07:00
CrazyMax 8f9c25e8b0
Merge pull request #3089 from co63oc/fix3
Fix typos
2025-04-07 10:03:05 +02:00
co63oc 7659798f80 Fix typos
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
2025-04-07 14:01:52 +08:00
CrazyMax 7b8bf9f801
cmd: support cli environment variables in standalone mode
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-04-04 15:13:25 +02:00
CrazyMax 8efc528b84
Merge pull request #3033 from thaJeztah/remove_notary
vendor: github.com/docker/docker, docker/cli v28.0.4
2025-03-28 14:45:39 +01:00
CrazyMax 8593e0397b
Merge pull request #3073 from tonistiigi/add-trace-export-command
history: add export command
2025-03-28 14:12:55 +01:00
CrazyMax 0c0e8eefdf
Merge pull request #3080 from crazy-max/compose-service-context
bake: support compose service as build context
2025-03-28 13:09:36 +01:00
CrazyMax e114dd09a5
bake: support compose service as build context
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-27 17:58:16 +01:00
Sebastiaan van Stijn d25e260d2e
vendor: github.com/docker/cli v28.0.4
This removes Notary / Docker Content Trust related (indirect)
dependencies:

Before:

    ls -l bin/build/
    total 131200
    -rwxr-xr-x  1 thajeztah  staff  67039266 Mar 21 09:20 buildx*

    ls -lh bin/build/
    total 131200
    -rwxr-xr-x  1 thajeztah  staff    64M Mar 21 09:20 buildx*

After:

    ls -l bin/build/
    total 127288
    -rwxr-xr-x  1 thajeztah  staff  65168450 Mar 21 09:22 buildx*

    ls -lh bin/build/
    total 127288
    -rwxr-xr-x  1 thajeztah  staff    62M Mar 21 09:22 buildx*

Difference: `67039266 - 65168450 = 1870816` (1.87 MB)

full diff: https://github.com/docker/cli/compare/v28.0.2...v28.0.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-25 18:45:44 +01:00
Sebastiaan van Stijn 86e4e77ac1
vendor: github.com/docker/docker-credential-helpers v0.9.3
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-25 18:44:27 +01:00
Sebastiaan van Stijn 534d9fc276
vendor: github.com/docker/docker v28.0.4
full diff: https://github.com/docker/docker/compare/v28.0.2...v28.0.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-25 18:30:37 +01:00
Tõnis Tiigi e0c67bfc79
Merge pull request #3078 from jsternberg/buildkit-0.20.2
vendor: github.com/moby/buildkit v0.20.2
2025-03-24 20:12:24 -07:00
Jonathan A. Sternberg 53e576b306
vendor: github.com/moby/buildkit v0.20.2
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-03-24 15:56:16 -05:00
Tõnis Tiigi d3aef6642c
Merge pull request #3075 from thaJeztah/bump_cobra
vendor: github.com/spf13/cobra v1.9.1
2025-03-24 08:46:50 -07:00
Sebastiaan van Stijn 824cef1b92
vendor: github.com/spf13/cobra v1.9.1
full diff: https://github.com/spf13/cobra/compare/v1.8.1...v1.9.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-21 20:21:42 +01:00
Sebastiaan van Stijn a8b0fa8965
vendor: github.com/spf13/pflag v1.0.6
full diff: https://github.com/spf13/pflag/compare/v1.0.5...v1.0.6

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-21 20:20:54 +01:00
Tonis Tiigi 45dfb84361
history: add support for exporting multiple and all records
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-21 11:21:00 -07:00
Tonis Tiigi 13ef01196d
history: add export command
Allow builds to be exported into .dockerbuild bundles
that can be shared and imported into Docker Desktop.
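
A plausible round trip (the `--output` flag name is an assumption; stdin import per the multi-file/stdin import change):

$ docker buildx history export --output mybuild.dockerbuild
$ docker buildx history import < mybuild.dockerbuild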

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-20 20:28:14 -07:00
Tõnis Tiigi 646df6d4a0
Merge pull request #3064 from glours/bump-compose-v2.34.0
bump compose-go to v2.4.9
2025-03-20 20:27:32 -07:00
Tõnis Tiigi d46c1d8141
Merge pull request #3069 from crazy-max/golangci-lint-rm-goversion
don't set go version in golangci-lint config
2025-03-20 17:08:06 -07:00
CrazyMax c682742de0
Merge pull request #3071 from thaJeztah/bump_engine_28.0.2
vendor: github.com/docker/docker, docker/cli v28.0.2
2025-03-20 13:14:47 +01:00
Sebastiaan van Stijn 391acba718
use cli-plugins metadata package
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-19 17:58:27 +01:00
Sebastiaan van Stijn db4b96e62c
vendor: github.com/docker/cli v28.0.2
full diff: https://github.com/docker/cli/compare/v28.0.1...v28.0.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-19 17:58:27 +01:00
Sebastiaan van Stijn 882ef0db91
vendor: github.com/docker/docker v28.0.2
full diff: https://github.com/docker/docker/compare/v28.0.1...v28.0.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-19 17:58:27 +01:00
Sebastiaan van Stijn 967fc2a696
vendor: github.com/containerd/containerd/v2 v2.0.4
full diff: https://github.com/containerd/containerd/compare/v2.0.3...v2.0.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-03-19 17:58:20 +01:00
CrazyMax 212d598ab1
fix go.mod and lint issues
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-19 11:52:08 +01:00
Guillaume Lours bf95aa3dfa
bump compose-go to v2.4.9
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-03-18 10:03:19 +01:00
Tõnis Tiigi 18ccba0720
Merge pull request #3068 from crazy-max/GHSA-m4gq-fm9h-8q75
cherry-picks for CVE-2025-0495
2025-03-17 11:37:50 -07:00
CrazyMax f5196f1167
localstate: remove definition and inputs fields from group
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-17 18:14:55 +01:00
Tonis Tiigi ef99381eab
otel: avoid tracing raw os arguments
A user might pass a value that they don't expect to be kept in trace
storage. For example, some cache backends allow passing authentication
tokens with a flag.

Instead use known primary config values as attributes
of the root span.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-17 18:14:52 +01:00
CrazyMax a41c9fa649
don't set go version in golangci-lint config
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-17 16:55:10 +01:00
CrazyMax 00fdcd38ab
Merge pull request #3062 from crazy-max/builder-error-boot
builder: return error if a node fails to boot
2025-03-13 18:02:13 +01:00
Tõnis Tiigi 97f1d47464
Merge pull request #3063 from crazy-max/driver-ctn-gpu-request
driver: request gpu when creating container builder
2025-03-13 09:56:10 -07:00
CrazyMax 337578242d
driver: request gpu when creating container builder
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-13 17:36:37 +01:00
CrazyMax 503a8925d2
builder: return error if a node fails to boot
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-12 16:05:16 +01:00
Tõnis Tiigi 0d708c0bc2
Merge pull request #3058 from crazy-max/buildkit-0.20.1
vendor: github.com/moby/buildkit v0.20.1
2025-03-11 09:30:42 -07:00
Tõnis Tiigi 3a7523a117
Merge pull request #3057 from crazy-max/update-compose
vendor: update compose-go to v2.4.8
2025-03-11 09:09:46 -07:00
CrazyMax 5dc1a3308d
Merge pull request #3040 from crazy-max/ci-fix-no-space-left
ci: fix faulty bin-image job
2025-03-11 16:04:39 +01:00
CrazyMax eb78253dfd
Merge pull request #3055 from tonistiigi/history-queryrecord
history: generalize query loading
2025-03-11 15:10:00 +01:00
CrazyMax 5f8b78a113
vendor: github.com/moby/buildkit v0.20.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-11 15:07:47 +01:00
CrazyMax 67d3ed34e4
vendor: update compose-go to v2.4.8
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-11 14:56:19 +01:00
Tõnis Tiigi b88423be50
Merge pull request #3053 from tonistiigi/modernize-fixes
lint: apply x/tools/modernize fixes and validation
2025-03-10 18:37:51 -07:00
Tonis Tiigi c1e2ae5636
history: generalize query loading
Some commands (logs/open) were still missing offset handling.
Now all commands use the same reference parsing/sort.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-10 15:51:03 -07:00
Tõnis Tiigi 23afb70e40
Merge pull request #3039 from tonistiigi/history-import
history: add history import command
2025-03-10 10:09:36 -07:00
CrazyMax 812b42b329
history: desktop build backend not yet supported on WSL
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-10 17:12:21 +01:00
Tonis Tiigi d5d3d3d502
lint: apply x/tools/modernize fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-07 16:37:24 -08:00
Tõnis Tiigi e19c729d3e
Merge pull request #3049 from tonistiigi/history-inspect-index
history: allow index based inspect of builds
2025-03-06 11:09:36 -08:00
CrazyMax aefa49c4fa
Merge pull request #3044 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.8
build(deps): bump peter-evans/create-pull-request from 7.0.7 to 7.0.8
2025-03-06 16:23:26 +01:00
dependabot[bot] 7d927ee604
build(deps): bump peter-evans/create-pull-request from 7.0.7 to 7.0.8
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.7 to 7.0.8.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](dd2324fc52...271a8d0340)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-06 14:58:27 +00:00
Tonis Tiigi 058c098c8c
history: allow index based inspect of builds
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-05 21:33:24 -08:00
Tõnis Tiigi 7b7dbe88b1
Merge pull request #3046 from crazy-max/buildkit-0.20.1
dockerfile: update buildkit to 0.20.1
2025-03-05 17:20:14 -08:00
Tonis Tiigi cadf4a5893
history: add multi-file/stdin import
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-05 11:12:52 -08:00
CrazyMax 6cd9fef556
dockerfile: update buildkit to 0.20.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-05 17:13:03 +01:00
Tonis Tiigi 963b9ca30d
history: print urls after importing builds
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-04 16:13:49 -08:00
CrazyMax 4636c8051a
ci: fix faulty bin-image job
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-05 00:47:17 +01:00
Tõnis Tiigi e23695d50d
Merge pull request #3042 from crazy-max/ci-bump-ubuntu
ci: bump to ubuntu-24.04
2025-03-04 15:41:06 -08:00
CrazyMax 6eff9b2d51
ci: update install-k3s step to fix issue with latest ubuntu runners
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-05 00:21:09 +01:00
CrazyMax fcbfc85f42
ci: bump to ubuntu-24.04
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-04 23:20:01 +01:00
Tõnis Tiigi 9a204c44c3
Merge pull request #3031 from crazy-max/bake-set-append
bake: support += operator to append with overrides
2025-03-04 09:33:57 -08:00
CrazyMax 4c6eba5acd
bake: support += operator to append with overrides
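
For example (target and attribute hypothetical):

$ docker buildx bake --set 'app.tags+=myorg/app:dev'
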
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-03-04 13:29:41 +01:00
Tonis Tiigi fea7459880
history: add history import command
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-03-03 22:52:05 -08:00
Tõnis Tiigi e2d52a8465
Merge pull request #2901 from crazy-max/netbsd
build and test netbsd
2025-03-03 16:43:02 -08:00
Tõnis Tiigi 48a591b1e1
Merge pull request #3032 from crazy-max/bake-secrets-dupes
correctly remove duplicated secrets and ssh keys
2025-03-03 16:40:14 -08:00
CrazyMax 128acdb471
Merge pull request #3027 from LaurentGoderre/fix-attest-extra-args
Fix attest extra arguments
2025-03-03 16:28:02 +01:00
CrazyMax 411d3f8cea
Merge pull request #3035 from co63oc/fix1
Fix typos
2025-03-03 14:07:56 +01:00
co63oc 7925a96726 Fix typos
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
2025-03-02 21:20:50 +08:00
Laurent Goderre b06bddfee6 Fix handling of attest extra arguments
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2025-02-28 12:09:32 -05:00
CrazyMax fe17ebda89
correctly remove duplicated secrets and ssh keys
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-28 15:24:16 +01:00
CrazyMax 4ed1e07f16
Merge pull request #3030 from thaJeztah/bump_docker_28.0.1
vendor: github.com/docker/docker, docker/cli v28.0.1
2025-02-28 10:54:35 +01:00
Sebastiaan van Stijn f49593ce2c
vendor: github.com/docker/docker, docker/cli v28.0.1
diffs:

- https://github.com/docker/docker/compare/v28.0.0...v28.0.1
- https://github.com/docker/cli/compare/v28.0.0...v28.0.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-28 00:50:48 +01:00
Laurent Goderre 4e91fe6507 Add attest extra args tests
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2025-02-27 17:10:30 -05:00
CrazyMax 921b576f3a
Merge pull request #3023 from tonistiigi/dockerd-push-fix
avoid double pushing with docker driver with containerd
2025-02-25 16:44:00 +01:00
CrazyMax 548c80ab5a
Merge pull request #3024 from tonistiigi/imagetools-push-tag-fix
imagetools: avoid multiple tag pushes on create
2025-02-25 16:36:37 +01:00
CrazyMax f3a4740d5f
Merge pull request #3026 from thaJeztah/bump_engine_28.0
vendor: docker/docker, docker/cli v28.0.0
2025-02-25 16:35:56 +01:00
Sebastiaan van Stijn 89917dc696
vendor: docker/docker, docker/cli v28.0.0
no code changes in vendored code

full diff:

- https://github.com/docker/cli/compare/v28.0.0-rc.3...v28.0.0
- https://github.com/docker/docker/compare/v28.0.0-rc.3...v28.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-25 12:37:44 +01:00
CrazyMax f7276201ac
Merge pull request #3021 from jsternberg/empty-cache-to-override
buildflags: skip empty cache entries when parsing
2025-02-25 10:48:39 +01:00
CrazyMax beb9f515c0
Merge pull request #3022 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.7
build(deps): bump peter-evans/create-pull-request from 7.0.6 to 7.0.7
2025-02-25 09:54:20 +01:00
Tonis Tiigi 4f7d145c0e
avoid double pushing with docker driver with containerd
In this mode BuildKit can push directly, so pushing manually with
docker would result in pushing the image twice.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-24 16:48:57 -08:00
Tonis Tiigi ccdf63c644
imagetools: avoid multiple tag pushes on create
Ensure only the final manifest is pushed by tag and intermediate blobs
are pushed only by digest, to avoid the tag temporarily pointing to the
wrong image.
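
The affected operation is manifest creation, e.g. (digests elided):

$ docker buildx imagetools create -t myorg/app:latest \
    myorg/app@sha256:... myorg/app@sha256:...

with the tag applied only to the final manifest.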

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-24 16:48:15 -08:00
dependabot[bot] 9a6b8754b1
build(deps): bump peter-evans/create-pull-request from 7.0.6 to 7.0.7
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.6 to 7.0.7.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](67ccf781d6...dd2324fc52)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-24 18:27:58 +00:00
Jonathan A. Sternberg e75ac22ba6
buildflags: skip empty cache entries when parsing
Broken in 11c84973ef. The section to skip
an empty input was accidentally removed when some code was refactored to
fix a separate issue.

This skips empty cache entries which allows disabling the `cache-from` and
`cache-to` entries from the command line overrides.
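
For example, clearing a target's cache settings via overrides (target name hypothetical):

$ docker buildx bake --set 'app.cache-from=' --set 'app.cache-to='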

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-24 10:09:02 -06:00
Shaun Thompson 62f5cc7c80
Merge pull request #3017 from tonistiigi/remove-debug
remove accidental debug
2025-02-20 20:08:16 -05:00
Tonis Tiigi 6272ae1afa
remove accidental debug
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-20 15:41:13 -08:00
CrazyMax accfbf6e24
Merge pull request #2997 from jsternberg/bake-set-annotations
bake: allow annotations to be set on the command line
2025-02-20 17:53:48 +01:00
CrazyMax af2d8fe555
build and test netbsd
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-20 13:04:48 +01:00
CrazyMax 18f4275a92
Merge pull request #2995 from crazy-max/ci-infer-goversion-bsd
ci: infer go version from workflow for bsd tests
2025-02-20 13:04:19 +01:00
CrazyMax 221a608b3c
Merge pull request #3014 from crazy-max/dockerfile-docker-28
Dockerfile: update to docker v28.0.0
2025-02-20 11:36:06 +01:00
CrazyMax cc0391eba5
ci: infer go version from workflow for bsd tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-20 11:29:40 +01:00
CrazyMax aef388bf7a
Dockerfile: update to docker v28.0.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-20 11:19:18 +01:00
CrazyMax 80c16bc28c
Merge pull request #3013 from jsternberg/buildkit-bump
ci: update buildkit to 0.20.0
2025-02-20 10:57:02 +01:00
Jonathan A. Sternberg 75160643e1
ci: update buildkit to 0.20.0
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-19 15:21:14 -06:00
Jonathan A. Sternberg ad18ffc018
Merge pull request #3010 from jsternberg/vendor-update
vendor: github.com/moby/buildkit v0.20.0
2025-02-19 13:30:37 -06:00
Jonathan A. Sternberg 80c3832c94
vendor: github.com/moby/buildkit v0.20.0
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-19 13:17:40 -06:00
Jonathan A. Sternberg 7762ab2c38
Merge pull request #3008 from thaJeztah/bump_engine_28.0_rc3
vendor: github.com/docker/docker, docker/cli v28.0.0-rc.3
2025-02-19 11:59:57 -06:00
Sebastiaan van Stijn b973de2dd3
vendor: github.com/docker/cli v28.0.0-rc.3
no significant changes, only linting fixes

full diff: https://github.com/docker/cli/compare/v28.0.0-rc.2...v28.0.0-rc.3

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-19 13:39:40 +01:00
Sebastiaan van Stijn 352ce7e875
vendor: github.com/docker/docker v28.0.0-rc.3
no code changes in vendor, only updated swagger file

full diff: https://github.com/docker/docker/compare/v28.0.0-rc.2...v28.0.0-rc.3

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-19 13:37:43 +01:00
CrazyMax cdfc1ed750
Merge pull request #2994 from tonistiigi/device-entitlements
support for device entitlement in build and bake
2025-02-18 22:28:23 +01:00
CrazyMax d0d3433b12
vendor: update buildkit to v0.20.0-rc3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-18 21:59:36 +01:00
CrazyMax b04d39494f
Merge pull request #3001 from crazy-max/fix-gha-cache-v2
cache: enable gha cache backend if cache service v2 detected
2025-02-18 21:24:14 +01:00
CrazyMax 52f503e806
Merge pull request #3003 from tonistiigi/debug-progress-fix
progress: fix race on pausing progress on debug shell
2025-02-18 10:58:51 +01:00
Tonis Tiigi 79a978484d
progress: fix race on pausing progress on debug shell
The current progress writer pauses/unpauses the printer and internally
recreates its channels.

This conflicts with a change that added sync.Once to Wait to allow it
to be called multiple times without erroring.

In the debug shell this could mean that a new progress printer showed
up because the old one was not closed.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-17 21:02:49 -08:00
CrazyMax f7992033bf
cache: fix gha cache url handling
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-17 19:01:13 +01:00
CrazyMax 73f61aa338
cache: enable gha cache backend if cache service v2 detected
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-17 18:13:12 +01:00
CrazyMax faa573f484
Merge pull request #2998 from thaJeztah/bump_docker
vendor:  docker/docker, docker/cli v28.0.0-rc.2
2025-02-17 17:08:43 +01:00
Sebastiaan van Stijn 0a4a1babd1
vendor: github.com/docker/cli v28.0.0-rc.2
full diff: https://github.com/docker/cli/compare/v28.0.0-rc.1...v28.0.0-rc.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-17 16:43:59 +01:00
Sebastiaan van Stijn 461bd9e5d1
vendor: github.com/docker/docker v28.0.0-rc.2
full diff: https://github.com/docker/docker/compare/v28.0.0-rc.1...v28.0.0-rc.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-17 16:43:51 +01:00
Jonathan A. Sternberg d6fdf83f45
bake: allow annotations to be set on the command line
Annotations were not merged correctly. The overrides in `ArrValue` would
be merged, but the section of code setting them from the command line
did not include `annotations` in the list of available attributes, so
the command line option was completely discarded.
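
For example (annotation key/value illustrative):

$ docker buildx bake --set 'app.annotations=org.opencontainers.image.licenses=MIT'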

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-14 11:57:30 -06:00
CrazyMax ef4e9fea83
Merge pull request #2992 from crazy-max/docker-28
vendor: docker, docker/cli v28.0.0-rc.1
2025-02-14 14:06:09 +01:00
Tõnis Tiigi 0c296fe857
support for device entitlement in build and bake
Allow access to CDI devices in BuildKit v0.20.0+ for devices that are
not automatically allowed to be used by everyone in the BuildKit
configuration.
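
A hedged sketch, assuming the entitlement is requested like existing
ones (`network.host`, `security.insecure`):

$ docker buildx build --allow device .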

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-14 11:51:47 +01:00
Tõnis Tiigi ef73c64d2c
Merge pull request #2993 from tonistiigi/update-buildkit-v0.20.0-rc2
vendor: update buildkit to v0.20.0-rc2
2025-02-13 17:15:50 -08:00
Tonis Tiigi 1784f84561
vendor: update buildkit to v0.20.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-13 16:54:50 -08:00
Tõnis Tiigi 6a6fa4f422
Merge pull request #2986 from tonistiigi/remove-x-slices
remove import of x/exp
2025-02-13 10:16:48 -08:00
Sebastiaan van Stijn 2dc0350ffe
vendor: github.com/docker/cli/v28.0.0-rc.1
full diff: https://github.com/docker/cli/compare/v27.5.1..v28.0.0-rc.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-13 13:53:45 +01:00
Sebastiaan van Stijn b85fc5c484
vendor: github.com/docker/docker/v28.0.0-rc.1
full diff: https://github.com/docker/docker/compare/v27.5.1..v28.0.0-rc.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-13 13:53:44 +01:00
Tõnis Tiigi 2389d457a4
Merge pull request #2988 from crazy-max/ctn-driver-display-pull-error
docker-container: check error from response body when pulling image
2025-02-12 08:47:05 -08:00
CrazyMax 3f82aadc6e
docker-container: check error from response body when pulling image
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-12 12:35:27 +01:00
Tonis Tiigi 79e3f12305
remove import of x/exp
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 19:23:36 -08:00
Tõnis Tiigi 1dc5f0751b
Merge pull request #2983 from tonistiigi/update-buildkit-v0.20.0-rc1
vendor: update buildkit to v0.20.0-rc1
2025-02-11 16:20:02 -08:00
Tonis Tiigi 7ba4da0800
gha: send v2 url as url_v2
Some repositories already have v2 enabled, and that causes errors
against older BuildKit. To avoid that we need to send both URLs as
separate keys.
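
In a GitHub Actions job both endpoints come from the runtime
environment, e.g.:

$ export ACTIONS_CACHE_URL=...    # v1 endpoint, sent as "url"
$ export ACTIONS_RESULTS_URL=...  # v2 endpoint, sent as "url_v2"
$ docker buildx build --cache-from type=gha --cache-to type=gha,mode=max .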

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 15:49:29 -08:00
Tonis Tiigi a64e628774
.github: test github runtime envs
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 15:41:15 -08:00
Tonis Tiigi 1c4b1a376c
show CDI devices in builder inspection
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 14:52:33 -08:00
Tonis Tiigi e1f690abfc
allow passing github cache v2 urls from env
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 14:52:33 -08:00
Tonis Tiigi 03569c2188
vendor: update buildkit to v0.20.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 14:52:19 -08:00
Tõnis Tiigi 350d3f0f4b
Merge pull request #2904 from tonistiigi/history-command-trace
Add history trace command
2025-02-11 12:40:10 -08:00
CrazyMax dc27815236
ci: fix git config for unit tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-11 11:40:04 -08:00
Tonis Tiigi 1089ff7341
history: add comparison support to trace
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 11:40:04 -08:00
Tonis Tiigi 7433d37183
history: add loadTrace function and support for loading Nth trace
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 11:40:04 -08:00
Tonis Tiigi f9a76355b5
history: add UI view to traces
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 11:40:01 -08:00
Tonis Tiigi cfeea34b2d
add history trace command
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-11 11:38:23 -08:00
Tõnis Tiigi ba2d3692a6
Merge pull request #2982 from crazy-max/revert-docker-28-vendor
Revert "vendor: docker, docker/cli v28.0.0-rc.1"
2025-02-11 11:37:32 -08:00
Tõnis Tiigi 853b593a4d
Merge pull request #2981 from crazy-max/hack-mount-docker-cfg
hack: mount docker config on gha
2025-02-11 10:36:45 -08:00
CrazyMax efb300e613
chore: fix vendoring
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-11 19:17:35 +01:00
CrazyMax cee7b344da
Revert "vendor: github.com/docker/docker/v28.0.0-rc.1"
This reverts commit b195b80ddf.

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-11 18:14:49 +01:00
CrazyMax 67dbde6970
Revert "vendor: github.com/docker/cli/v28.0.0-rc.1"
This reverts commit 7216086b8c.

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-11 18:14:49 +01:00
CrazyMax 295653dabb
hack: mount docker config on gha
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-11 17:32:50 +01:00
CrazyMax f5802119c5
Merge pull request #2978 from jsternberg/rangefunc-go1.22-revert
buildflags: make work on go 1.22 by reverting rangefunc usage
2025-02-11 10:47:01 +01:00
CrazyMax 40b9ac1ec5
Merge pull request #2979 from tonistiigi/update-buildkit-0e3037c0182e
vendor: update buildkit to 0e3037c0182e
2025-02-11 10:29:51 +01:00
Tonis Tiigi f11496448a
vendor: update buildkit to 0e3037c0182e
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-10 16:48:59 -08:00
Tõnis Tiigi c8c9c72ca6
Merge pull request #2964 from crazy-max/history-inspect-json
history: inspect json and go template format
2025-02-10 16:30:42 -08:00
Tõnis Tiigi 9fe8139022
Merge pull request #2976 from crazy-max/ci-fix-vagrant
ci: install latest vagrant
2025-02-10 16:16:15 -08:00
CrazyMax b3e8c62635
ci: install latest vagrant
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-10 20:54:44 +01:00
Tõnis Tiigi b8e9c28315
Merge pull request #2970 from crazy-max/fix-ls-json
ls: fix duplicated builders for json format
2025-02-10 09:28:17 -08:00
Jonathan A. Sternberg 3ae9970da5
buildflags: make work on go 1.22 by reverting rangefunc usage
Reverts the usage of rangefunc and attempts to keep its foundation in
place for when we move to go 1.23. We have downstream dependencies that
aren't ready to move to go 1.23. We can likely move after go 1.24 is
released.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-10 11:03:46 -06:00
CrazyMax 1d219100fc
Merge pull request #2868 from thaJeztah/bump_engine
vendor: docker, docker/cli v28.0.0-rc.1
2025-02-10 17:22:31 +01:00
CrazyMax 464f9278d1
history: fix default format for inspect command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-10 11:30:59 +01:00
Sebastiaan van Stijn 7216086b8c
vendor: github.com/docker/cli/v28.0.0-rc.1
full diff: https://github.com/docker/cli/compare/v27.5.1..v28.0.0-rc.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-10 11:07:38 +01:00
Sebastiaan van Stijn b195b80ddf
vendor: github.com/docker/docker/v28.0.0-rc.1
full diff: https://github.com/docker/docker/compare/v27.5.1..v28.0.0-rc.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-10 11:07:35 +01:00
Sebastiaan van Stijn 70a5e266d1
vendor: github.com/moby/term v0.5.2
full diff:

- https://github.com/moby/term/compare/v0.5.0...v0.5.2
- d185dfc1b5...faa5f7b017

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-10 11:06:24 +01:00
Sebastiaan van Stijn 689bea7963
vendor: golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f
full diff: 701f63a606...2d47ceb269

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-10 11:06:22 +01:00
Sebastiaan van Stijn 5176c38115
vendor: golang.org/x/mod v0.22.0
full diff: https://github.com/golang/mod/compare/v0.21.0...v0.22.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-10 11:05:52 +01:00
Sebastiaan van Stijn ec440c4574
vendor: golang.org/x/sys v0.29.0
full diff: https://github.com/golang/sys/compare/v0.28.0...v0.29.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-10 11:05:51 +01:00
CrazyMax 0a4eb7ec76
Merge pull request #2971 from thaJeztah/test_engine_28
Dockerfile: update to docker v28.0.0-rc.1
2025-02-10 11:03:38 +01:00
Sebastiaan van Stijn f710c93157
vendor: github.com/docker/cli v27.5.1
no changes in vendored code

full diff: https://github.com/docker/cli/compare/v27.5.0...v27.5.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-09 13:53:05 +01:00
Sebastiaan van Stijn d1a0a1497c
vendor: github.com/docker/docker v27.5.1
no changes in vendored code

full diff: https://github.com/docker/docker/compare/v27.5.0...v27.5.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-09 13:53:05 +01:00
Sebastiaan van Stijn c880ecd513
Dockerfile: update to docker v28.0.0-rc.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-09 13:50:14 +01:00
Tõnis Tiigi d557da1935
Merge pull request #2957 from ndeloof/prompt-rawjson
don't warn user about missing --allows when running with progress=rawjson
2025-02-07 16:34:10 -08:00
CrazyMax 417af36abc
history: support go template format for inspect
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-07 12:09:31 +01:00
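A hedged usage sketch of the resulting inspect formats; the exact flag spelling and the {{.Name}} template field are assumptions based on the fields named in the commits below:

    buildx history inspect --format json
    buildx history inspect --format '{{.Name}}'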
CrazyMax e236b86297
history: set materials and attachments to json output for inspect
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-07 12:09:31 +01:00
CrazyMax 633e8a0881
history: add error sources and stack to json output for inspect
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-07 11:37:46 +01:00
CrazyMax 5e1ea62f92
ls: fix duplicated builders for json format
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-07 10:23:55 +01:00
Tõnis Tiigi 4b90b84995
Merge pull request #2965 from jsternberg/handle-unknown-values
buildflags: handle unknown values from cty
2025-02-06 10:06:49 -08:00
Jonathan A. Sternberg abc85c38f8
buildflags: handle unknown values from cty
Update the buildflags cty code to handle unknown values. When hcl
decodes a value with an invalid variable name, it appends a diagnostic
for the error and then returns an unknown value so it can continue
processing the file and find more errors.

The iteration code has been changed to use a rangefunc from go 1.23,
and it skips empty or unknown values. Empty values are valid and are
simply skipped, while unknown values already carry their own diagnostics.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-02-06 09:45:18 -06:00
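A minimal sketch of the skip rule described above, using the go-cty API directly (illustrative only, not the buildx iteration code):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        vals := []cty.Value{
            cty.StringVal("registry"),
            cty.NullVal(cty.String),    // empty: valid, simply skipped
            cty.UnknownVal(cty.String), // unknown: hcl already reported a diagnostic
        }
        for _, v := range vals {
            if v.IsNull() || !v.IsKnown() {
                continue
            }
            fmt.Println(v.AsString())
        }
    }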
CrazyMax ccca7c795a
history: json format support for inspect command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-06 16:25:49 +01:00
CrazyMax 04aab6958c
history: set num steps, name, default platform and error logs to inspect
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-02-06 16:12:37 +01:00
Tonis Tiigi 9d640f0e33
history: add formatting support to inspect
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-02-06 10:45:27 +01:00
CrazyMax b76fdcaf8d
Merge pull request #2963 from thaJeztah/consistent_alias
use a consistent alias for the docker client package
2025-02-03 13:39:27 +01:00
Sebastiaan van Stijn d693e18c04
use a consistent alias for the docker client package
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-03 11:36:52 +01:00
CrazyMax b066ee1110
Merge pull request #2961 from thaJeztah/driver_use_errdefs
driver/docker-container: remove uses of dockerclient.IsErrNotFound
2025-02-03 09:41:24 +01:00
CrazyMax cf8bf9e104
Merge pull request #2950 from thaJeztah/fix_usage_and_completion
fix: strip path from usage output and shell-completion scripts
2025-02-02 01:11:29 +01:00
Sebastiaan van Stijn 3bd54b19aa
driver/docker-container: remove uses of dockerclient.IsErrNotFound
It's a wrapper around errdefs.IsNotFound, which is already used, so we
can skip the wrapper.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-02-01 15:22:33 +01:00
Tõnis Tiigi 934841f329
Merge pull request #2958 from crazy-max/fix-debug-invoke
debug: fix invoke on error
2025-01-31 10:17:08 -08:00
CrazyMax b2ababc7b6
debug: fix invoke on error
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-31 10:45:34 +01:00
Nicolas De Loof 0ccdb7e248
don't warn user about missing --allows when running with progress=rawjson
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2025-01-31 08:49:36 +01:00
CrazyMax cacb4fb9b3
Merge pull request #2953 from dvdksn/docs-bake-composable-attrs
docs: update bake reference to use composable attrs
2025-01-29 10:44:05 +01:00
David Karlsson df80bd72c6 docs: update bake reference to use composable attrs
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2025-01-29 09:55:45 +01:00
Sebastiaan van Stijn bb4bef2f04
fix: strip path from usage output and shell-completion scripts
Before this patch, both "usage" and shell-completion scripts would preserve
the path of the invoked command, which was especially problematic for the
completion-scripts, because Cobra's completion depends on Command.Name()
for this (see [1], [2]);

    ./bin/build/buildx --help | head -n 5
    Extended build capabilities with BuildKit

    Usage:
      ./bin/build/buildx
      ./bin/build/buildx [command]

    ./bin/build/buildx completion bash | head -n 3
    # bash completion V2 for ./bin/build/buildx                   -*- shell-script -*-

    __./bin/build/buildx_debug()

This would also be problematic if the path contained a space, for example;

    ln -s $(pwd)/bin/build $(pwd)/bin/Program\ Files

    ./bin/Program\ Files/buildx completion bash | head -n 3
    # bash completion V2 for ./bin/Program                        -*- shell-script -*-

    __./bin/Program_debug()

With this patch, the path is stripped to prevent this issue;

    ./bin/build/buildx --help | head -n 5
    Extended build capabilities with BuildKit

    Usage:
      buildx
      buildx [command]

    ./bin/build/buildx completion bash | head -n 3
    # bash completion V2 for buildx                               -*- shell-script -*-

    __buildx_debug()

    ./bin/Program\ Files/buildx completion bash | head -n 3
    # bash completion V2 for buildx                               -*- shell-script -*-

    __buildx_debug()

It's worth noting that this patch only fixes these basic issues. Other cases
are not yet addressed, and may need fixes in Cobra because (especially for
the completion scripts) it should likely not conflate "Name" with "executable".

For example, command.Name() does not handle situations where the executable
itself has a space in its name:

    ln -s $(pwd)/bin/build/buildx $(pwd)/bin/build/hello\ world

    ./bin/build/hello\ world completion bash | head -n 3
    # bash completion V2 for hello                                -*- shell-script -*-

    __hello_debug()

Other, less problematic, issues to address are case-insensitive filesystems,
where the binary can be invoked with any case;

    ./bin/build/bUiLdX --help | head -n 5
    Extended build capabilities with BuildKit

    Usage:
      bUiLdX
      bUiLdX [command]

    ./bin/build/bUiLdX completion bash | head -n 3
    # bash completion V2 for bUiLdX                               -*- shell-script -*-

    __bUiLdX_debug()

[1]: https://github.com/spf13/cobra/blob/v1.8.1/bash_completionsV2.go#L24-L39
[2]: https://github.com/spf13/cobra/blob/v1.8.1/command.go#L1502-L1510

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-01-25 14:25:43 +01:00
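A minimal sketch of the basic fix described above (illustrative, not the actual patch): strip the directory from the invoked command before Cobra ever sees it, so "./bin/build/buildx" becomes "buildx".

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        name := filepath.Base(os.Args[0]) // "./bin/build/buildx" -> "buildx"
        fmt.Println(name)
        // As noted above, this does not help when the executable name itself
        // contains a space, or on case-insensitive filesystems.
    }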
Tõnis Tiigi a11507344a
Merge pull request #2932 from crazy-max/buildkit-0.19.0
vendor: update buildkit to v0.19.0
2025-01-22 12:57:37 -08:00
Tõnis Tiigi 17af006857
Merge pull request #2944 from jsternberg/cache-ref-only-format-fix
buildflags: fix ref only format for command line and bake
2025-01-22 12:57:02 -08:00
Jonathan A. Sternberg 11c84973ef
buildflags: fix ref only format for command line and bake
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-01-22 13:18:38 -06:00
Tõnis Tiigi cc4a291f6a
Merge pull request #2941 from crazy-max/ci-fix-docs-upstream
ci: use main branch for docs upstream validation workflow
2025-01-22 10:36:56 -08:00
CrazyMax aa1fbc0421
ci: use main branch for docs upstream validation workflow
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-22 19:11:26 +01:00
Tõnis Tiigi b2bbb337e4
Merge pull request #2835 from dvdksn/bake-v019-entitlements
docs: bake v0.19 entitlements
2025-01-22 09:48:38 -08:00
David Karlsson 012df71b63
docs: add docs for bake --allow
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2025-01-22 18:25:32 +01:00
David Karlsson a26bb271ab
docs(bake): improve docs on "call" and "description" in bake file
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2025-01-22 18:23:18 +01:00
CrazyMax 3e0682f039
Merge pull request #2937 from jsternberg/attests-json-marshal
buildflags: marshal attestations into json with extra attributes correctly
2025-01-22 09:16:54 +01:00
Jonathan A. Sternberg 3aed658dc4
buildflags: marshal attestations into json with extra attributes correctly
`MarshalJSON` would not include the extra attributes because it iterated
over the target map rather than the source map.

Also fixes JSON unmarshaling for SSH and secrets. The intention was to
unmarshal into the struct, but `UnmarshalText` takes priority over the
default struct unmarshaling so it didn't work as intended.

Tests have been added for all marshaling and unmarshaling methods.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2025-01-21 15:05:23 -06:00
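A hedged illustration of the UnmarshalText pitfall described above (the Secret type here is a simplified stand-in, not the buildx type): once a struct implements encoding.TextUnmarshaler, encoding/json will not fall back to default struct decoding for JSON objects, so an explicit UnmarshalJSON is required.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Secret struct {
        ID  string `json:"id"`
        Src string `json:"src"`
    }

    // UnmarshalText handles the string form, e.g. "id=token,src=/run/t"
    // (parsing elided for brevity).
    func (s *Secret) UnmarshalText(text []byte) error {
        s.ID = string(text)
        return nil
    }

    func main() {
        var s Secret
        // Fails with an UnmarshalTypeError instead of filling the struct,
        // because TextUnmarshaler takes priority for this destination type:
        err := json.Unmarshal([]byte(`{"id":"token","src":"/run/t"}`), &s)
        fmt.Println(err)
    }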
CrazyMax b4a0dee723
Merge pull request #2935 from crazy-max/ci-update-buildkit
ci: update buildkit to 0.19.0
2025-01-21 13:50:26 +01:00
CrazyMax 6904512813
ci: update buildkit to 0.19.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-21 10:31:14 +01:00
CrazyMax d41e335466
Merge pull request #2934 from crazy-max/update-buildkit-dockerfile
dockerfile: update buildkit to 0.19.0
2025-01-21 10:17:21 +01:00
CrazyMax 0954dcb5fd
dockerfile: update buildkit to 0.19.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-20 20:41:12 +01:00
CrazyMax 38f64bf709
vendor: update buildkit to v0.19.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-20 18:55:10 +01:00
Tõnis Tiigi c1d3955fbe
Merge pull request #2928 from tonistiigi/update-buildkit-v0.19.0-rc3
vendor: update buildkit to v0.19.0-rc3
2025-01-17 12:53:50 -08:00
Tonis Tiigi d0b63e60e2
vendor: update buildkit to v0.19.0-rc3
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-17 12:09:08 -08:00
Tõnis Tiigi e141c8fa71
Merge pull request #2923 from crazy-max/docs-bake-overrides
chore: comments to not forget to update docs
2025-01-17 10:45:44 -08:00
Tõnis Tiigi 2ee156236b
Merge pull request #2925 from tonistiigi/history-inspect-error
history: add error details to history inspect command
2025-01-17 10:23:59 -08:00
Tonis Tiigi 1335264c9d
history: update formatting of error logs
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-17 08:54:38 -08:00
CrazyMax e74185aa6d
Merge pull request #2927 from crazy-max/update-labels
chore: handle area/history label
2025-01-17 15:37:28 +01:00
CrazyMax 0224773102
chore: handle area/history label
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-17 15:21:35 +01:00
Tonis Tiigi 8c27b5c545
history: make sure started time is shown in current timezone
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-16 21:12:37 -08:00
Tonis Tiigi f7594d484b
history: fix printing desktop URL
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-16 21:12:37 -08:00
Tonis Tiigi f118749cdc
history: add error details to history inspect command
For failed builds, show the source with the error location and the last
logs of the vertex that caused the error. When debug mode is on, the
stacktrace is printed.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-16 21:12:17 -08:00
CrazyMax 0d92ad713c
chore: comments to not forget to update docs
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-16 10:11:43 +01:00
Tõnis Tiigi a18ff4d5ef
Merge pull request #2891 from tonistiigi/history-command-initial
Add buildx history command
2025-01-15 08:51:23 -08:00
CrazyMax b035a04aaa
history: update containerd imports to v2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-15 17:22:05 +01:00
Tonis Tiigi 6220e0aae8
add history inspect attachment command
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-15 16:17:21 +01:00
Tonis Tiigi d9abc78e8f
update history inspect formatting
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-15 16:17:21 +01:00
Tonis Tiigi 3313026961
add buildx history inspect formatting
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-15 16:17:20 +01:00
Tonis Tiigi 06912aa24c
Add buildx history command
These commands allow working with build records
of completed and running builds.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-15 16:17:20 +01:00
CrazyMax cde0e9814d
Merge pull request #2921 from thaJeztah/downgrade_tagged_releases
downgrade go-difflib and go-spew to tagged releases
2025-01-15 15:03:23 +01:00
CrazyMax 2e6e146087
Merge pull request #2920 from crazy-max/dockerfile-update-buildkit
dockerfile: update buildkit to 0.19.0-rc2
2025-01-15 14:50:15 +01:00
CrazyMax af3cbe6cec
Merge pull request #2919 from crazy-max/dockerfile-update-docker
dockerfile: update docker to 27.5.0
2025-01-15 14:48:30 +01:00
Sebastiaan van Stijn 1ef9e67cbb
downgrade go-difflib and go-spew to tagged releases
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2025-01-15 14:41:48 +01:00
CrazyMax 75204426bd
dockerfile: update buildkit to 0.19.0-rc2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-15 13:33:17 +01:00
CrazyMax b73f58a90b
Merge pull request #2914 from tonistiigi/update-buildkit-v0.19.0-rc1
vendor: update buildkit to v0.19.0-rc2
2025-01-15 13:32:38 +01:00
CrazyMax 6f5486e718
dockerfile: update docker to 27.5.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-15 13:24:39 +01:00
CrazyMax 3fa0c3d122
vendor: update buildkit to v0.19.0-rc2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-15 13:11:32 +01:00
CrazyMax b0b902de41
Merge pull request #2916 from docker/dependabot/github_actions/softprops/action-gh-release-2.2.1
build(deps): bump softprops/action-gh-release from 2.2.0 to 2.2.1
2025-01-15 08:47:21 +01:00
CrazyMax 77d632e0c5
Merge pull request #2917 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.6
build(deps): bump peter-evans/create-pull-request from 7.0.5 to 7.0.6
2025-01-15 08:47:06 +01:00
CrazyMax 6a12543db3
Merge pull request #2918 from docker/dependabot/github_actions/docker/bake-action-6
build(deps): bump docker/bake-action from 5 to 6
2025-01-15 08:46:54 +01:00
dependabot[bot] 4027b60fa0
build(deps): bump docker/bake-action from 5 to 6
Bumps [docker/bake-action](https://github.com/docker/bake-action) from 5 to 6.
- [Release notes](https://github.com/docker/bake-action/releases)
- [Commits](https://github.com/docker/bake-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/bake-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-15 07:37:56 +00:00
dependabot[bot] dda8df3b06
build(deps): bump peter-evans/create-pull-request from 7.0.5 to 7.0.6
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.5 to 7.0.6.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](5e914681df...67ccf781d6)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-15 07:37:53 +00:00
dependabot[bot] d54a110b3c
build(deps): bump softprops/action-gh-release from 2.2.0 to 2.2.1
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.2.0 to 2.2.1.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](7b4da11513...c95fe14893)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-15 07:37:50 +00:00
Tonis Tiigi 44fa243d58
vendor: update buildkit to v0.19.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-14 14:24:38 -08:00
Tõnis Tiigi 630066bfc5
Merge pull request #2905 from crazy-max/bake-infer-auth-token
bake: infer git auth token from remote files to build request
2025-01-14 09:12:53 -08:00
Tõnis Tiigi 026ac2313c
Merge pull request #2910 from crazy-max/update-testify
vendor: github.com/stretchr/testify v1.10.0
2025-01-14 08:10:55 -08:00
CrazyMax 45fc5ed3b3
bake: infer git auth token from remote files to build request
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-14 15:56:11 +01:00
Tõnis Tiigi 1eb167a767
Merge pull request #2908 from crazy-max/update-pty
vendor: github.com/creack/pty v1.1.24
2025-01-13 23:27:23 -08:00
Tõnis Tiigi 45d2ec69f1
Merge pull request #2911 from crazy-max/update-hcl
vendor: update hcl dependencies
2025-01-13 10:30:04 -08:00
Tõnis Tiigi 793ec7f3b2
Merge pull request #2866 from crazy-max/ci-e2e-bake
ci: e2e bake
2025-01-13 10:22:30 -08:00
CrazyMax 6cb62dddf2
Merge pull request #2909 from crazy-max/update-cli-docs-tool
vendor: github.com/docker/cli-docs-tool v0.9.0
2025-01-13 18:28:39 +01:00
CrazyMax 66ecb53fa7
vendor: github.com/zclconf/go-cty v1.16.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-13 18:00:34 +01:00
CrazyMax 26026810fe
vendor: github.com/hashicorp/go-cty-funcs c51673e0b3dd
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-13 17:59:48 +01:00
CrazyMax d3830e0a6e
vendor: github.com/hashicorp/hcl/v2 v2.23.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-13 17:58:59 +01:00
CrazyMax 8c2759f6ae
vendor: github.com/stretchr/testify v1.10.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-13 17:54:58 +01:00
CrazyMax 8a472c6c9d
vendor: github.com/docker/cli-docs-tool v0.9.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-13 17:53:44 +01:00
CrazyMax b98653d8fe
vendor: github.com/creack/pty v1.1.24
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-13 17:52:33 +01:00
CrazyMax 807d15ff9d
Merge pull request #2899 from crazy-max/docs-quiet-progress
docs: missing quiet progress mode
2025-01-13 15:22:39 +01:00
CrazyMax ac636fd2d8
docs: missing quiet progress mode
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-13 15:13:18 +01:00
CrazyMax 769d22fb30
Merge pull request #2907 from dvdksn/bake-list-flag-docs
docs: add description for bake --list
2025-01-13 13:37:34 +01:00
David Karlsson e36535e137 docs: add description for bake --list
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2025-01-13 11:48:21 +01:00
Tõnis Tiigi ada44e82ea
Merge pull request #2900 from crazy-max/bake-list-flag
bake: replace --list-targets and --list-variables flags with --list flag
2025-01-10 07:59:28 -08:00
Tõnis Tiigi 16edf5d4aa
Merge pull request #2898 from tonistiigi/bake-entitlement-ssh-fix
bake: fix entitlements check for default SSH socket
2025-01-09 08:49:23 -08:00
CrazyMax 11c85b2369
bake: list flag json format support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-09 17:07:06 +01:00
CrazyMax 41215835cf
bake: print and list flag mutually exclusive
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-09 17:07:05 +01:00
CrazyMax a41fc81796
bake: replace list-targets and list-variables flags with list=<type>
this also moves the flag out of experimental

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-09 17:07:05 +01:00
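A hedged usage sketch of the consolidated flag described in the commit titles above:

    # previously --list-targets and --list-variables (experimental)
    buildx bake --list=targets
    buildx bake --list=variables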
Tonis Tiigi 5f057bdee7
bake: fix entitlements check for default SSH socket
There was a mixup between fs.read and ssh entitlements check.

Corrected behavior is that if bake definition requires default
SSH forwarding then "ssh" entitlement is needed. If it requires
SSH forwarding via fixed file path then "fs.read" entitlement is
needed for that path.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2025-01-08 18:19:18 -08:00
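A hedged illustration of the corrected rule (target name and key path are made up):

    # default SSH agent forwarding in the bake definition -> "ssh" entitlement
    buildx bake --allow ssh app

    # SSH forwarding via a fixed file path -> "fs.read" on that path instead
    buildx bake --allow fs.read=/home/user/.ssh/id_ed25519 app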
Tõnis Tiigi 883806524a
Merge pull request #2894 from crazy-max/ci-update-bake-action
ci: update bake-action to v6
2025-01-08 09:32:40 -08:00
Tõnis Tiigi 38b71998f5
Merge pull request #2864 from crazy-max/builder-validate-config
builder: validate buildkit configuration
2025-01-08 09:17:08 -08:00
CrazyMax 07db2be2f0
builder: validate buildkit configuration
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-08 14:57:39 +01:00
CrazyMax f3f5e760b3
Merge pull request #2893 from glours/bump-compose-go-v2.4.7
bump compose-go to v2.4.7
2025-01-08 12:08:50 +01:00
Guillaume Lours e762d3dbca update compose build ssh path to an absolute one
the unit test doesn't define a working_dir, so the path generated on Windows is escaped
this use case is already covered and tested by compose-go CI

Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-01-08 11:57:35 +01:00
Guillaume Lours 4ecbb018f2 bump compose-go to v2.4.7
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2025-01-08 11:57:35 +01:00
CrazyMax a8f4699c5e
ci: update bake-action to v6
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-08 11:56:50 +01:00
Tõnis Tiigi 7cf12fce98
Merge pull request #2875 from tonistiigi/bake-fs-entitlements-error
bake: make FS entitlements error by default
2025-01-07 16:13:42 -08:00
Tõnis Tiigi 07190d20da
Merge pull request #2892 from crazy-max/undock-0.9.0
dockerfile: update undock to 0.9.0
2025-01-07 16:13:28 -08:00
CrazyMax c79368c199
dockerfile: update undock to 0.9.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-07 17:09:46 +01:00
CrazyMax f47d12e692
ci: e2e bake
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-07 11:14:46 +01:00
Tõnis Tiigi 0fc204915a
Merge pull request #2804 from crazy-max/tests-bsd
test bsd
2025-01-06 09:34:46 -08:00
Tõnis Tiigi 3a0eeeacd5
Merge pull request #2863 from crazy-max/bake-fix-missing-default
bake: fix missing default target in group's default targets
2025-01-06 09:09:35 -08:00
CrazyMax e6ce3917d3
test bsd
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-06 18:00:14 +01:00
CrazyMax e085ed8c5c
Merge pull request #2886 from crazy-max/bake-override-sort
bake: update lookup order for override
2025-01-06 17:35:18 +01:00
CrazyMax b83c3e239e
bake: update lookup order for override
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-06 16:19:54 +01:00
CrazyMax a90d5794ee
bake: fix missing default target in group's default targets
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2025-01-06 12:53:54 +01:00
CrazyMax c571b9d730
Merge pull request #2874 from thaJeztah/vendor_docker_27.4.1
vendor: github.com/docker/docker, github.com/docker/cli v27.4.1
2025-01-06 12:30:32 +01:00
CrazyMax af53930206
Merge pull request #2872 from thaJeztah/test_engine_27.4.1
Dockerfile: update to docker v27.4.1
2025-01-06 12:30:01 +01:00
CrazyMax c4a2db8f0c
Merge pull request #2877 from saracen/platform-subset-fix
bake: fix context from target platform matching
2025-01-06 12:29:20 +01:00
Tõnis Tiigi 206bd6c3a2
Merge pull request #2876 from tonistiigi/progress-load-fix
progress: fix missing last progress from loading layers
2025-01-02 10:38:17 -08:00
Arran Walker 5c169dd878 bake: fix context from target platform matching
Signed-off-by: Arran Walker <arran.walker@fiveturns.org>
2024-12-20 11:42:55 +00:00
Tonis Tiigi 875e717361
progress: fix missing last progress from loading layers
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-19 21:46:00 -08:00
Tonis Tiigi 72c3d4a237
bake: make FS entitlements error by default
Change FS entitlement checks from warning to error by default,
as expressed in the initial PR. Users can still opt out with an
environment variable if they choose to.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-19 17:14:35 -08:00
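The environment variable is not named here; assuming it is BUILDX_BAKE_ENTITLEMENTS_FS as documented for buildx, opting back out would look like:

    # hedged sketch: downgrade the FS entitlement error back to a warning
    BUILDX_BAKE_ENTITLEMENTS_FS=0 buildx bake app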
Sebastiaan van Stijn ce46297960
vendor: github.com/docker/cli v27.4.1
no changes in vendored code

full diff: https://github.com/docker/cli/compare/v27.4.0...v27.4.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-19 17:08:45 +01:00
Sebastiaan van Stijn e8389c8a02
vendor: github.com/docker/docker v27.4.1
full diff: https://github.com/docker/docker/compare/v27.4.0...v27.4.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-19 17:07:42 +01:00
Sebastiaan van Stijn 804ee66f13
Dockerfile: update to docker v27.4.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-19 12:41:05 +01:00
Tõnis Tiigi 5c5bc510ac
Merge pull request #2848 from jsternberg/bake-composable-attributes-attests
bake: implement composable attributes for attestations
2024-12-18 13:11:50 -08:00
Tõnis Tiigi 0dfc4a1019
Merge pull request #2871 from jsternberg/bake-empty-variable-tests
bake: test empty override
2024-12-18 11:00:49 -08:00
Jonathan A. Sternberg 1e992b295c
bake: test empty override
Co-authored-by: CrazyMax <github@crazymax.dev>
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-12-18 11:56:19 -06:00
Jonathan A. Sternberg 4f81bcb5c8
bake: implement composable attributes for attestations
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-12-18 11:48:50 -06:00
Tõnis Tiigi 3771fe2034
Merge pull request #2814 from jsternberg/bake-composable-attributes-phase2
bake: various fixes for composable attributes
2024-12-18 09:35:35 -08:00
Jonathan A. Sternberg 5dd4ae0335
bake: various fixes for composable attributes
This changes how the composable attributes are implemented and provides
various fixes to the first iteration.

Cache-from and cache-to no longer print sensitive values that are added
automatically. These attributes are now added when the protobuf is
created rather than at parse time, so they are no longer printed. If
they are part of the original configuration file, they are still
printed.

Empty strings are now skipped again. This was the original behavior;
the composable attributes accidentally removed it, and this change
restores it.

This also expands the available syntax that works with each of the
composable attributes. It is now possible to interleave the csv syntax
with the object syntax without any problems. The canonical form is still
the object syntax and variables are resolved according to that syntax.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-12-18 10:26:15 -06:00
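A hedged bake-file example of the interleaving this enables (names and values are illustrative); the object form remains the canonical one:

    target "app" {
      cache-from = [
        { type = "registry", ref = "user/app:cache" },
        "type=local,src=/tmp/cache",
      ]
    }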
CrazyMax 567361d494
Merge pull request #2847 from thaJeztah/vendor_docker
vendor: github.com/docker/docker, github.com/docker/cli v27.4.0
2024-12-17 11:37:55 +01:00
CrazyMax 21b1be1667
Merge pull request #2860 from tonistiigi/entitlements-path-validation-fix
bake: change evaluation of entitlement paths
2024-12-17 10:01:35 +01:00
CrazyMax 876e003685
Merge pull request #2865 from tonistiigi/update-buildkit-v0.18.2
update test BuildKit to v0.18.2
2024-12-17 09:59:27 +01:00
Tonis Tiigi a53ed0a354
add additional test coverage for FS entitlement paths
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-16 22:29:35 -08:00
Tonis Tiigi 737da6959d
bake: change evaluation of entitlement paths
Previously, to compare the local path used by bake against the paths allowed
by entitlements, symlinks were evaluated for path normalization: the
local path used by the build was allowed to not exist, while the path allowed
by the entitlement needed to exist. If the path used by the build did not
exist, the deepest existing parent path was used instead. This was consistent
with entitlement rules, as that parent path is the actual path to which
access is needed.

This raised an issue with `--set` when one provides a non-existing path as
an argument, as such paths are supposed to be allowed automatically. With
the above restrictions on allowed paths, the build would fail because it
could not grant an entitlement to the non-existing path.

This changes the evaluation logic for allowing paths so that they do not
need to exist. If such a case appears, then the path is evaluated to the
last component that exists, and then the rest of the path is appended as is.

This means that, for example, if `output = /tmp/out/foo/` is set in HCL
and `/tmp` is the last component that exists, then invoking the build with
`--allow fs.write=/tmp/out/foo` will no longer fail with a stat error,
but will fail in entitlements validation, because the build would also need
to write `/tmp/out`, which is not inside the allowed `/tmp/out/foo` path.
The same applies to `--set`: if it points to a non-existing path, an
additional `--allow` rule is needed granting write access to the last
existing component of that path. This may or may not be unexpected.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-16 22:29:24 -08:00
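A minimal sketch of the evaluation rule described above (hypothetical helper, not the actual buildx implementation): resolve symlinks up to the deepest existing component, then re-append the non-existing remainder as is.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func evaluateToExisting(p string) (string, error) {
        rest := ""
        for {
            if _, err := os.Lstat(p); err == nil {
                resolved, err := filepath.EvalSymlinks(p)
                if err != nil {
                    return "", err
                }
                return filepath.Join(resolved, rest), nil
            }
            rest = filepath.Join(filepath.Base(p), rest)
            parent := filepath.Dir(p)
            if parent == p { // reached the root
                return filepath.Join(p, rest), nil
            }
            p = parent
        }
    }

    func main() {
        // With only /tmp existing, /tmp/out/foo evaluates to the resolved
        // /tmp plus "out/foo" appended unchanged.
        out, err := evaluateToExisting("/tmp/out/foo")
        fmt.Println(out, err)
    }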
Tonis Tiigi 6befa70cc8
update test BuildKit to v0.18.2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-16 15:47:45 -08:00
Sebastiaan van Stijn 2d051bde96
vendor: github.com/docker/cli v27.4.0
full diff: https://github.com/docker/cli/compare/v27.4.0-rc.2...v27.4.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-16 22:14:30 +01:00
Sebastiaan van Stijn 63985b591b
vendor: github.com/docker/docker v27.4.0
full diff: https://github.com/docker/docker/compare/v27.4.0-rc.2...v27.4.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-16 22:14:30 +01:00
CrazyMax 695200c81a
Merge pull request #2857 from ndeloof/bump
bump compose-go v2.4.6
2024-12-16 11:57:12 +01:00
Nicolas De Loof 828c1dbf98
bump compose-go v2.4.6
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2024-12-16 11:46:05 +01:00
CrazyMax f321d4ac95
Merge pull request #2854 from docker/dependabot/github_actions/softprops/action-gh-release-2.2.0
build(deps): bump softprops/action-gh-release from 2.1.0 to 2.2.0
2024-12-16 10:17:42 +01:00
dependabot[bot] 0d13bf6606
build(deps): bump softprops/action-gh-release from 2.1.0 to 2.2.0
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.1.0 to 2.2.0.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](01570a1f39...7b4da11513)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-11 18:19:22 +00:00
Tõnis Tiigi 3e3242cfdd
Merge pull request #2851 from crazy-max/dockerfile-pin-alpine
dockerfiles: pin alpine version
2024-12-10 10:47:04 -08:00
CrazyMax f9e2d07b30
Merge pull request #2830 from thaJeztah/bump_engine_27.4
Dockerfile: update to docker v27.4.0
2024-12-10 15:29:27 +01:00
Sebastiaan van Stijn c281e18892
Dockerfile: update to docker v27.4.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-10 10:56:06 +01:00
CrazyMax 98d4cb1eb3
dockerfiles: pin alpine version
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-12-10 10:25:50 +01:00
CrazyMax 70f2fb6442
Merge pull request #2849 from tonistiigi/update-xx-v1.6.0
update xx to v1.6.1
2024-12-10 09:32:13 +01:00
Tonis Tiigi fdac6d5fe7
update xx to v1.6.1
Fixes compatibility issues with Alpine 3.21

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-09 18:48:50 -08:00
Tõnis Tiigi d4eca07af8
Merge pull request #2834 from tonistiigi/bake-entitlements-output-fix
bake: fix entitlements path checks for local outputs
2024-12-06 13:52:48 -08:00
CrazyMax 95e77da0fa
Merge pull request #2838 from tonistiigi/update-test-buildkit
update buildkit used for tests
2024-12-04 09:42:27 +01:00
Tonis Tiigi 6810a7c69c
update buildkit used for tests
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-03 17:59:08 -08:00
Tonis Tiigi dd596d6542
bake: allow entitlements from overrides automatically
If an override specifies a path, it is automatically marked as allowed,
so there is no need to pass duplicate flags for the same path.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-02 17:16:28 -08:00
Tonis Tiigi c6e403ad7f
bake: fix entitlements path checks for local outputs
The previous check based on dest attributes was not correct, as the
attributes have already been converted by the time validation happens.

Because the local path is not preserved for single-file outputs and is
replaced by an io.Writer, a temporary array variable was needed. This
value should instead be added to the ExportEntry struct in BuildKit in
a future revision.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-12-02 15:00:29 -08:00
CrazyMax d6d713aac6
Merge pull request #2828 from crazy-max/ci-buildx-edge
ci: use edge releases of buildx
2024-11-28 18:09:04 +01:00
CrazyMax f148976e6e
Merge pull request #2829 from glours/bump-compose-go-v2.4.5
bump compose-go to v2.4.5
2024-11-28 18:05:11 +01:00
Guillaume Lours 8f70196de1
bump compose-go to v2.4.5
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-11-28 15:01:24 +01:00
CrazyMax e196855bed
ci: use edge releases of buildx
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-28 15:01:09 +01:00
Tõnis Tiigi 71c7889719
Merge pull request #2821 from tonistiigi/update-buildkit-v0.18.0
vendor: update buildkit to v0.18.0
2024-11-26 14:49:31 -08:00
Tonis Tiigi a3418e0178
vendor: update buildkit to v0.18.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-26 13:57:25 -08:00
Tõnis Tiigi 6a1cf78879
Merge pull request #2818 from tonistiigi/vendor-buildkit-v0.18.0-rc2
vendor: update buildkit to v0.18.0-rc2
2024-11-25 17:52:46 -08:00
Tonis Tiigi ec1f712328
vendor: update buildkit to v0.18.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-25 17:42:30 -08:00
CrazyMax 5ce6597c07
Merge pull request #2812 from crazy-max/bake-win-fs-ent
bake: add wildcard to fs entitlements to allow any paths
2024-11-25 20:29:14 +01:00
CrazyMax 9c75071793
bake: add wildcard to fs entitlements to allow any paths
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-25 20:13:27 +01:00
Tõnis Tiigi d612139b19
Merge pull request #2811 from crazy-max/update-buildkit
dockerfile: update buildkit to v0.17.2
2024-11-25 10:11:09 -08:00
Tõnis Tiigi 42f7898c53
Merge pull request #2815 from tonistiigi/entitlements-symlink-tests
bake: fix entitlement test when running from symlink temp
2024-11-25 10:08:19 -08:00
Tonis Tiigi 3148c098a2
bake: remove unnecessary GetLongPathName calls
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-25 08:26:02 -08:00
Tonis Tiigi f95d574f94
bake: fix entitlement test when running from symlink temp
As the paths returned by the validator have their symlinks resolved,
the test also needs to resolve symlinks in the expected values.
Previously this would fail if t.TempDir() or os.Getwd()
returned a path that contained a symlink.

The issue was purely in the test and not in the entitlements
validation logic.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-25 00:03:54 -08:00
CrazyMax 60822781be
ci: update buildkit to v0.17.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-22 13:00:07 +01:00
CrazyMax 4c83475703
dockerfile: update buildkit to v0.17.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-22 11:27:33 +01:00
Tõnis Tiigi 17eff25fe5
Merge pull request #2807 from tonistiigi/buildkit-v0.18.0-rc1
vendor: update buildkit to v0.18.0-rc1
2024-11-21 14:29:29 -08:00
Tõnis Tiigi 9c8ffb77d6
Merge pull request #2806 from tonistiigi/vendor-compose-v2.4.4
vendor: update compose to v2.4.4
2024-11-21 14:29:18 -08:00
Tonis Tiigi 13a426fca6
vendor: update buildkit to v0.18.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 12:57:27 -08:00
Tõnis Tiigi 1a039115bc
Merge pull request #2758 from jsternberg/bake-composable-attributes
bake: initial set of composable bake attributes
2024-11-21 12:54:54 -08:00
Tonis Tiigi 07d58782b8
vendor: update compose to v2.4.4
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 10:32:02 -08:00
Jonathan A. Sternberg 3ccbb88e6a
bake: initial set of composable bake attributes
This allows using either the csv syntax or object syntax to specify
certain attributes.

This applies to the following fields:
- output
- cache-from
- cache-to
- secret
- ssh

There are still some remaining fields to translate, specifically
ulimits, annotations, and attest.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-11-21 12:31:11 -06:00
Tõnis Tiigi a34c641bc4
Merge pull request #2796 from tonistiigi/fs-entitlements
bake: add filesystem entitlements support
2024-11-21 09:51:49 -08:00
CrazyMax f10be074b4
bake: handle root evaluation with available drives for windows entitlement
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-21 14:05:13 +01:00
CrazyMax 9f429965c0
bake: windows entitlement path fixes take 2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-21 14:05:12 +01:00
CrazyMax f3929447d7
fix lint issues
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-21 14:05:12 +01:00
Tonis Tiigi 615f4f6759
bake: windows entitlement path fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 14:05:12 +01:00
Tonis Tiigi 9a7b028bab
bake: add fs entitlements for context paths
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 14:05:11 +01:00
Tonis Tiigi 1af4f05ba4
bake: add filesystem entitlements support
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-21 14:05:11 +01:00
CrazyMax 4b5d78db9b
Merge pull request #2801 from tonistiigi/enable-testifylint
lint: enable testifylint
2024-11-21 13:57:54 +01:00
Tonis Tiigi d2c512a95b
lint: enable testifylint
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-20 10:53:11 -08:00
Tõnis Tiigi 5937ba0e00
Merge pull request #2307 from crazy-max/test-docker-multi-ver
tests: handle multiple docker versions
2024-11-20 09:53:57 -08:00
Tõnis Tiigi 21fb026aa3
Merge pull request #2775 from crazy-max/openbsd
build openbsd
2024-11-20 09:49:49 -08:00
CrazyMax bc45641086
build openbsd
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:42:10 +01:00
CrazyMax 96689e5d05
Merge pull request #2782 from crazy-max/go-1.23
update to go 1.23
2024-11-20 11:40:54 +01:00
CrazyMax 50a8f11f0f
dockerfile: missing xx update to 1.5.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:20:18 +01:00
CrazyMax 11cf38bd97
update to go 1.23
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:20:18 +01:00
CrazyMax 300d56b3ff
update gopls and golangci-lint
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:20:18 +01:00
CrazyMax e04da86aca
fix golangci-lint issues
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 11:20:17 +01:00
Akihiro Suda 9f1fc99018
Merge pull request #2797 from crazy-max/ci-macos-14
ci: update runner to macos-14
2024-11-20 18:16:59 +09:00
CrazyMax 26bbddb5d6
Merge pull request #2798 from tonistiigi/linter-updates
Improve linter checks
2024-11-20 10:05:37 +01:00
Tonis Tiigi 58fd190c31
lint: enable importas rules from buildkit
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 18:29:04 -08:00
Tonis Tiigi e7a53fb829
lint: enable forbidigo context rules
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 18:27:25 -08:00
Tonis Tiigi c0fd64f4f8
lint: enable linters from buildkit
Skipping errname and testifylint

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 17:51:24 -08:00
Tonis Tiigi 0c629335ac
lint: sort linters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 17:40:42 -08:00
Tonis Tiigi f216b71ad2
lint: enable gosimple
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-11-19 17:39:22 -08:00
CrazyMax debe8c0187
ci: update runner to macos-14
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 01:02:43 +01:00
CrazyMax a69d857b8a
tests: handle multiple docker versions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-20 00:59:09 +01:00
Tõnis Tiigi a6ef9db84d
Merge pull request #2794 from crazy-max/bake-var-req
bake: basic variable validation
2024-11-19 12:23:44 -08:00
CrazyMax 9c27be752c
Merge pull request #2791 from docker/dependabot/github_actions/codecov/codecov-action-5
build(deps): bump codecov/codecov-action from 4 to 5
2024-11-19 19:06:56 +01:00
CrazyMax 82a65d4f9b
Merge pull request #2792 from thaJeztah/test_engine_27.4
Dockerfile: update to docker v27.4.0-rc.2
2024-11-19 18:36:00 +01:00
Sebastiaan van Stijn 8647f408ac
Dockerfile: update to docker v27.4.0-rc.2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-11-19 17:31:21 +01:00
CrazyMax e51cdcac50
bake: basic variable validation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-19 12:41:06 +01:00
CrazyMax 55a544d976
Merge pull request #2795 from dvdksn/bake-docs-call-check
docs: add "call" attribute for target
2024-11-18 16:12:57 +01:00
Tõnis Tiigi 3b943bd4ba
Merge pull request #2790 from crazy-max/fix-network-attr-yaml
bake: check for empty build network with compose
2024-11-14 18:55:33 -08:00
dependabot[bot] 502bb51a3b
build(deps): bump codecov/codecov-action from 4 to 5
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-14 18:33:10 +00:00
CrazyMax 48977780ad
bake: check for empty build network with compose
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-14 19:27:54 +01:00
Tõnis Tiigi e540bb03a4
Merge pull request #2773 from crazy-max/dockerfile-bump-versions
dockerfile: update testing tools
2024-11-13 15:54:31 -08:00
CrazyMax 919c52395d
dockerfile: update gotestsum to 1.12.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:31 +01:00
CrazyMax 7f01c63be7
dockerfile: update registry to 2.8.3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:31 +01:00
CrazyMax 076d2f19d5
dockerfile: update undock to 0.8.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:30 +01:00
CrazyMax 3c3150b8d3
dockerfile: update docker to 27.3.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:30 +01:00
CrazyMax b03d8c52e1
dockerfile: update buildkit to v0.17.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-13 22:47:30 +01:00
CrazyMax e67ccb080b
Merge pull request #2787 from docker/dependabot/github_actions/softprops/action-gh-release-2.1.0
build(deps): bump softprops/action-gh-release from 2.0.9 to 2.1.0
2024-11-13 12:14:53 +01:00
dependabot[bot] dab02c347e
build(deps): bump softprops/action-gh-release from 2.0.9 to 2.1.0
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.9 to 2.1.0.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](e7a8f85e1c...01570a1f39)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-12 18:14:26 +00:00
Tõnis Tiigi 6caa151e98
Merge pull request #2777 from LaurentGoderre/metadata-list-support
Add ability to output json lists in metadata build file
2024-11-11 13:52:09 -08:00
Laurent Goderre be6d8326a8 Add ability to output json lists in metadata build file
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-11-11 16:36:45 -05:00
Tõnis Tiigi 7855f8324b
Merge pull request #2781 from crazy-max/update-fsutil
vendor: github.com/tonistiigi/fsutil 8d32dbdd27d3
2024-11-10 20:21:13 -08:00
CrazyMax 850e5330ad
Merge pull request #2778 from jsternberg/improve-missing-name-set-error
bake: improve error when using incorrect format for setting labels
2024-11-06 12:07:26 +01:00
CrazyMax b7ea25eb59
vendor: github.com/tonistiigi/fsutil 8d32dbdd27d3
full diff: 397af5306b...8d32dbdd27

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-06 12:02:13 +01:00
Tõnis Tiigi 8cdeac54ab
Merge pull request #2780 from glours/bump-compose-go-v2.4.3
bump compose-go to version v2.4.3
2024-11-05 09:48:20 -08:00
Guillaume Lours 752c70a06c
bump compose-go to version v2.4.3
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-11-05 18:29:03 +01:00
Tõnis Tiigi 83dd969dc1
Merge pull request #2774 from crazy-max/freebsd
build freebsd
2024-11-05 08:30:26 -08:00
Jonathan A. Sternberg a5bb117ff0
bake: improve error when using incorrect format for setting labels
Improves the error message when using an incorrect format for setting
labels. This includes the intended format directly in the error message
instead of assuming the user knows what the format is.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-11-04 14:38:23 -06:00
CrazyMax 735b7f68fe
build freebsd
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-11-03 10:39:28 +01:00
Tõnis Tiigi bcac44f658
Merge pull request #2771 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.9
build(deps): bump softprops/action-gh-release from 2.0.8 to 2.0.9
2024-10-31 16:59:55 -07:00
dependabot[bot] d46595eed8
build(deps): bump softprops/action-gh-release from 2.0.8 to 2.0.9
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.8 to 2.0.9.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](c062e08bd5...e7a8f85e1c)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-31 18:34:55 +00:00
Tõnis Tiigi 62407927fa
Merge pull request #2757 from dvdksn/pprof-dev-docs
docs: add dev instructions on generating/analyzing pprof samples
2024-10-30 15:09:19 -07:00
Tõnis Tiigi c7b0a84c6a
Merge pull request #2767 from tonistiigi/buildkit-v0.17.0
vendor: update buildkit to v0.17.0
2024-10-30 14:41:33 -07:00
Tonis Tiigi 1aac809c63
vendor: update buildkit to v0.17.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-30 12:04:42 -07:00
Tõnis Tiigi 9b0575b589
Merge pull request #2766 from tonistiigi/prune-caps-detection
prune: detect if buildkit supports newer storage filters
2024-10-29 13:55:29 -07:00
Tonis Tiigi 9f3a578149
prune: detect if buildkit supports newer storage filters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-29 13:18:04 -07:00
CrazyMax 14b31d8b77
Merge pull request #2765 from crazy-max/ci-fix-content-read
ci: keep contents read permissions in jobs
2024-10-29 19:04:45 +01:00
CrazyMax e26911f403
ci: keep contents read permissions in jobs
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-29 18:48:42 +01:00
Tõnis Tiigi cd8d61a9d7
Merge pull request #2763 from neumantm/feat/listWithoutBuilder
Skip Builder Init For Bake List Flags
2024-10-29 10:20:58 -07:00
Tõnis Tiigi 3a56161d03
Merge pull request #2761 from crazy-max/fix-workflow-perms
ci: fix workflow permissions
2024-10-29 10:19:04 -07:00
Tim Neumann 0fd935b0ca Skip Builder Init For Bake List Flags
Add the flags --list-targets and --list-variables to the cases
where initializing the builder can be skipped.

This allows the listing of targets and variables
when no builder is available.

Resolves: docker/buildx#2755
Signed-off-by: Tim Neumann <git@neumann-tim.de>
2024-10-29 10:34:20 +01:00
CrazyMax 704b2cc52d
Merge pull request #2760 from tonistiigi/update-compose-v2.4.1
vendor: update compose to v2.4.1
2024-10-29 10:28:24 +01:00
CrazyMax 6b2dc8ce56
ci: fix workflow permissions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-29 09:48:47 +01:00
Tonis Tiigi a585faf3d2
vendor: update compose to v2.4.1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-28 17:26:28 -07:00
Tõnis Tiigi 181348397c
Merge pull request #2742 from tonistiigi/otel-build
build: add OTEL span around build function
2024-10-28 16:16:08 -07:00
Tõnis Tiigi ad371e428e
Merge pull request #2759 from tonistiigi/vendor-buildkit-v0.17.0-rc2
vendor: update buildkit to v0.17.0-rc2
2024-10-28 16:15:19 -07:00
Tonis Tiigi f35dae3726
build: add OTEL span around build function
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-28 15:53:22 -07:00
Tonis Tiigi 6fcc6853d9
vendor: update buildkit to v0.17.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-28 15:39:50 -07:00
Tõnis Tiigi 202c390fca
Merge pull request #2722 from crazy-max/test-details-link-exp
build: fix build details link in experimental mode
2024-10-28 10:03:10 -07:00
David Karlsson ca502cc9a5 docs: add dev instructions on generating/analyzing pprof samples
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-28 17:15:27 +01:00
Tõnis Tiigi 2bdf451b68
Merge pull request #2754 from crazy-max/call-localstate
build: don't generate local state for subrequests
2024-10-25 11:06:22 -07:00
CrazyMax 658ed584c7
Merge pull request #2746 from jsternberg/buildx-profiles
pprof: take cpu and memory profiles by setting environment variables
2024-10-25 15:52:35 +02:00
CrazyMax 886ae21e93
build: don't generate local state for subrequests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-25 11:06:25 +02:00
Jonathan A. Sternberg cf7a9aa084
pprof: take cpu and memory profiles by setting environment variables
When run in standalone mode, the environment variables
`DOCKER_BUILDX_CPU_PROFILE` and `DOCKER_BUILDX_MEM_PROFILE` will cause
profiles to be written by the CLI.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-24 09:56:27 -05:00
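A hedged usage sketch, assuming a standalone buildx invocation and the standard Go pprof tooling:

    # write profiles while running the standalone binary
    DOCKER_BUILDX_CPU_PROFILE=cpu.pprof DOCKER_BUILDX_MEM_PROFILE=mem.pprof \
        buildx build .

    # inspect the result
    go tool pprof cpu.pprof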
CrazyMax eb15c667b9
controller: rename ref to sessionID and set buildRef back to ref
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-24 15:37:18 +02:00
CrazyMax 1060328a96
build: fix build details link in experimental mode
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-23 20:31:17 +02:00
Tõnis Tiigi 746eadd16e
Merge pull request #2745 from crazy-max/detect-sudo
config: fix file/folder ownership
2024-10-23 10:04:38 -07:00
CrazyMax f89f861999
config: fix file/folder ownership
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-23 18:23:14 +02:00
Tõnis Tiigi 08a973a148
Merge pull request #2741 from crazy-max/cli-fix-unknown-command
cli: error out on unknown command
2024-10-23 08:47:44 -07:00
CrazyMax cc286e2ef5
cli: error out on unknown command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-18 14:04:16 +02:00
David Karlsson 8056a3dc7c docs: add "call" attribute for target
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-17 14:09:19 +02:00
CrazyMax 9f0ebd2643
Merge pull request #2744 from dvdksn/bake-docs-pull-bool
docs: bake pull attr is a boolean
2024-10-17 10:36:10 +02:00
David Karlsson 680cdf1179 docs: bake pull attr is a boolean
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-17 10:26:29 +02:00
Tõnis Tiigi 8d32cabc22
Merge pull request #2740 from dvdksn/src-attr-secret-env
docs: clarify options for secret types (file, env)
2024-10-16 12:20:58 -07:00
David Karlsson 239930c998 chore: fix FromAsCasing in Dockerfile example
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-16 15:58:34 +02:00
David Karlsson 8d7f69883f docs: clarify options for secret types (file, env)
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-10-16 15:57:58 +02:00
Tõnis Tiigi 1de332530f
Merge pull request #2729 from thaJeztah/touchup_security
touch-up security policy
2024-10-10 09:57:55 -07:00
CrazyMax 65c4756473
Merge pull request #2728 from thaJeztah/gha_permissions
gha: set default permissions to "contents: read"
2024-10-09 09:43:33 +02:00
Tõnis Tiigi d3ff70ace0
Merge pull request #2724 from jsternberg/vtproto
hack: generate vtproto files for buildx
2024-10-08 17:04:19 -07:00
Tonis Tiigi 14de641bec
vendor: update buildkit to v0.17.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-10-08 16:54:03 -07:00
Sebastiaan van Stijn 1ce3e6a221
touch-up security policy
Touch-up the security policy to make the OpenSSF scorecard
slightly happier;
https://securityscorecards.dev/viewer/?uri=github.com/docker/buildx

    Warn: One or no descriptive hints of disclosure, vulnerability, and/or timelines in security policy

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-09 01:22:26 +02:00
Sebastiaan van Stijn b1a13bb740
gha: set default permissions to "contents: read"
make the OpenSSF scorecard slightly happier;
https://securityscorecards.dev/viewer/?uri=github.com/docker/buildx

    Warn: no topLevel permission defined: .github/workflows/build.yml:1
    Warn: topLevel 'security-events' permission set to 'write': .github/workflows/codeql.yml:13
    Warn: no topLevel permission defined: .github/workflows/docs-release.yml:1
    Warn: no topLevel permission defined: .github/workflows/docs-upstream.yml:1
    Warn: no topLevel permission defined: .github/workflows/e2e.yml:1
    Warn: no topLevel permission defined: .github/workflows/labeler.yml:1
    Warn: no topLevel permission defined: .github/workflows/validate.yml:1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-09 01:07:18 +02:00
Jonathan A. Sternberg 64c5139ab6
hack: generate vtproto files for buildx
Integrates vtproto into buildx. The generated-files Dockerfile has been
modified to copy the BuildKit equivalent file to ensure files are laid
out in the appropriate way for imports.

An import has also been included to change the grpc codec to the version
in buildkit that supports vtproto. This will allow buildx to utilize the
speed and memory improvements from that.

Also updates the gc control options for prune.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-08 13:35:06 -05:00
Tõnis Tiigi d353f5f6ba
Merge pull request #2717 from crazy-max/fix-ls-notrunc
ls: ensure deterministic output for truncated platforms
2024-10-04 12:52:45 -07:00
Tõnis Tiigi 4507a492da
Merge pull request #2719 from jsternberg/bake-remote-size
bake: raise maximum size limit and fix size check
2024-10-04 12:51:28 -07:00
Jonathan A. Sternberg 9fc6f39d71
bake: raise maximum size limit and fix size check
Similar to https://github.com/docker/buildx/pull/2716.

Use the file size rather than the proto size, raise the allowed limit to
the same value for consistency, and improve the error message to include
the limit in human-readable units.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-04 09:11:07 -05:00
CrazyMax f6a27a664b
ls: ensure deterministic output for truncated platforms
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-04 09:27:03 +02:00
Tõnis Tiigi 48153169d8
Merge pull request #2716 from jsternberg/dockerfile-size-limit
build: raise maximum size limit for dockerfile and fix size check
2024-10-03 14:25:31 -07:00
Jonathan A. Sternberg d7de22c61f
build: raise maximum size limit for dockerfile and fix size check
Raise the maximum size limit for the dockerfile and correct the size
check. The size check was intended to use the size attribute from the
file stat, but the original gogo version confused the `Size()`
method (which returned the size of the proto message) with the `Size`
attribute (which was named `Size_`).

During the conversion, we noticed the mistake but kept the incorrect
behavior for the sake of keeping the conversion simple.

This also raises the maximum limit, because 512 kB is likely a bit too
conservative. The limit is now 2 MB and is included in the error
message.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-03 12:12:40 -05:00
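A hedged illustration of the gogo naming collision described above; the Stat type here is a stand-in for the generated one, not actual buildx code.

    package main

    import "fmt"

    // Stat mimics a gogo-generated message: a proto field named "size" is
    // emitted as Size_ because the generated type already has a Size()
    // method returning the encoded length of the message itself.
    type Stat struct {
        Size_ int64 // the file size carried by the message
    }

    func (s *Stat) Size() int { return 11 /* pretend encoded length */ }

    func main() {
        st := &Stat{Size_: 3 << 20}
        fmt.Println(st.Size_)  // 3145728: what the size check should use
        fmt.Println(st.Size()) // 11: what the old check accidentally compared
    }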
Tõnis Tiigi 7c91f3d0dd
Merge pull request #2138 from crazy-max/ls-notrunc
ls: no-trunc opt
2024-10-03 08:21:09 -07:00
CrazyMax 820f5e77ed
ls: no-trunc opt
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-10-03 11:15:46 +02:00
Tõnis Tiigi 1db8f6789f
Merge pull request #2713 from jsternberg/gogoproto-remove
protobuf: remove gogoproto
2024-10-02 15:39:47 -07:00
Jonathan A. Sternberg b35a0f4718
protobuf: remove gogoproto
Removes gogo/protobuf from buildx and updates to a version of
moby/buildkit where gogo is removed.

This also changes how the proto files are generated. This is because
newer versions of protobuf are more strict about name conflicts. If two
files have the same name (even if they are relative paths) and are used
in different protoc commands, they'll conflict in the registry.

Since protobuf file generation doesn't work very well with
`paths=source_relative`, this removes the `go:generate` expression and
just relies on the dockerfile to perform the generation.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-10-02 15:51:59 -05:00
CrazyMax 8e47387d02
Merge pull request #2701 from tonistiigi/fix-link-entitlements
bake: fix linking to targets with entitlements
2024-09-25 10:43:21 +02:00
CrazyMax fdda92f304
Merge pull request #2703 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.5
build(deps): bump peter-evans/create-pull-request from 7.0.3 to 7.0.5
2024-09-25 10:42:46 +02:00
CrazyMax d078a3047d
Merge pull request #2705 from tonistiigi/call-fallback
build: use better references for --call fallback images
2024-09-25 10:42:24 +02:00
Tõnis Tiigi f102ad73a8
Merge pull request #2672 from daghack/dockerfile-path-on-warnings
build: display Dockerfile path on check warnings
2024-09-19 08:30:48 -07:00
Talon Bowler 671bd1b54d Update to pass DockerMappingSrc and Dst in with Inputs, and return Inputs through Build
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-09-18 20:56:31 -07:00
Tonis Tiigi f8657e8798
build: use better references for --call fallback images
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-18 18:43:40 -07:00
dependabot[bot] 61d9f1d981
build(deps): bump peter-evans/create-pull-request from 7.0.3 to 7.0.5
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.3 to 7.0.5.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](6cd32fd936...5e914681df)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-18 18:49:37 +00:00
Tõnis Tiigi 9eb0318ee6
Merge pull request #2696 from crazy-max/test-fix-cleanup
test: fix missing envs when cleaning up some workers
2024-09-17 20:27:29 -07:00
CrazyMax 4528269102
Merge pull request #2699 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.3
build(deps): bump peter-evans/create-pull-request from 7.0.2 to 7.0.3
2024-09-17 09:27:20 +02:00
CrazyMax 8d3d32e376
Merge pull request #2700 from tonistiigi/fix-link-itself
bake: fix validation for linking to itself
2024-09-17 09:25:26 +02:00
Tonis Tiigi c60afbb25b
bake: fix linking to targets with entitlements
When a linked target requires an entitlement, the same entitlement is
also needed by the caller. Otherwise, the request will fail when the
build is processed.
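
A sketch of the propagation rule with simplified stand-in types
(bake's real target structure differs):

    type Target struct {
        Name         string
        Entitlements []string // e.g. "network.host", "security.insecure"
    }

    // inheritEntitlements copies entitlements from linked targets to the
    // caller so the request does not fail once the build is processed.
    func inheritEntitlements(caller *Target, linked []*Target) {
        seen := map[string]struct{}{}
        for _, e := range caller.Entitlements {
            seen[e] = struct{}{}
        }
        for _, l := range linked {
            for _, e := range l.Entitlements {
                if _, ok := seen[e]; !ok {
                    caller.Entitlements = append(caller.Entitlements, e)
                    seen[e] = struct{}{}
                }
            }
        }
    }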

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-16 16:31:22 -07:00
Tonis Tiigi 9bfa8603f6
bake: fix validation for linking to itself
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-16 16:29:32 -07:00
dependabot[bot] 30e60628bf
build(deps): bump peter-evans/create-pull-request from 7.0.2 to 7.0.3
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.2 to 7.0.3.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](d121e62763...6cd32fd936)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-16 18:36:21 +00:00
CrazyMax df0270d0cc
test: fix missing envs when cleaning up some workers
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-13 14:19:46 +02:00
CrazyMax 056cf8a7ca
Merge pull request #2693 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.2
build(deps): bump peter-evans/create-pull-request from 7.0.1 to 7.0.2
2024-09-12 22:48:06 +02:00
dependabot[bot] 15c596a091
build(deps): bump peter-evans/create-pull-request from 7.0.1 to 7.0.2
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.1 to 7.0.2.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](8867c4aba1...d121e62763)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 18:30:42 +00:00
CrazyMax e950b2e478
Merge pull request #2692 from glours/bump-compose-go-v2.2.0
bump compose-go to v2.2.0
2024-09-12 19:04:35 +02:00
Guillaume Lours 4da753da79
bump compose-go to v2.2.0
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-09-12 18:14:18 +02:00
CrazyMax 3f81293fd4
Merge pull request #2691 from crazy-max/ci-fix-perms
ci: fix golvulncheck job permissions
2024-09-12 16:36:29 +02:00
CrazyMax 120578091f
ci: fix golvulncheck job permissions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-12 15:23:33 +02:00
Tõnis Tiigi 604b723007
Merge pull request #2684 from crazy-max/inspect-buildkitd-conf
inspect: display buildkit daemon configuration file
2024-09-11 17:32:25 -07:00
CrazyMax 528181c759
inspect: display buildkit daemon configuration file
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-12 00:16:24 +02:00
Tõnis Tiigi cd5381900c
Merge pull request #2688 from crazy-max/bump-xx
dockerfile: update xx to 1.5.0
2024-09-11 10:50:58 -07:00
Tõnis Tiigi bba2bb4b89
Merge pull request #2686 from crazy-max/bump-buildkit
dockerfile, ci: update buildkit to latest stable
2024-09-11 10:50:40 -07:00
Tõnis Tiigi 8fd27b8c23
Merge pull request #2685 from crazy-max/skip-networkhost-conf
builder: do not set network.host entitlement flag if already set in buildkitd conf
2024-09-11 10:39:29 -07:00
Tõnis Tiigi 6dcc8d8b84
Merge pull request #2689 from crazy-max/bake-fix-network-field
bake: fix missing omitempty and optional tags for network field
2024-09-11 10:35:33 -07:00
CrazyMax 9fb8b04b64
bake: fix missing omitempty and optional tags for network field
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 14:47:01 +02:00
CrazyMax 6ba5802958
Merge pull request #2687 from crazy-max/bump-docker
dockerfile: update docker to 27.2.1
2024-09-11 13:57:09 +02:00
CrazyMax f039670961
dockerfile: update xx to 1.5.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 12:59:36 +02:00
CrazyMax 4ec12e7e68
dockerfile: update docker to 27.2.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 12:58:33 +02:00
CrazyMax 66ed7d6162
dockerfile, ci: update buildkit to latest stable
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 12:51:20 +02:00
CrazyMax 617d59d70b
builder: do not set network.host entitlement flag if already set in buildkitd conf
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-11 12:27:29 +02:00
CrazyMax 40f444f4b8
Merge pull request #2680 from crazy-max/update-buildkit
vendor: update buildkit to v0.16.0
2024-09-10 18:35:06 +02:00
CrazyMax 8201d301d5
vendor: update buildkit to v0.16.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-10 18:18:16 +02:00
Akihiro Suda 40ef3446f5
Merge pull request #2679 from tonistiigi/buildkit-v0.16.0-rc2
vendor: update buildkit to v0.16.0-rc2
2024-09-10 08:51:08 +09:00
Tonis Tiigi 7213b2a814
vendor: update buildkit to v0.16.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-09 16:29:01 -07:00
CrazyMax 9cfa25ab40
Merge pull request #2678 from thompson-shaun/mermaid-fix
Update PROJECT mermaid diagrams
2024-09-09 18:06:50 +02:00
Shaun Thompson 6db3444a25
update mermaid diagram to avoid GH mermaid issues
Signed-off-by: Shaun Thompson <shaun.thompson@docker.com>
2024-09-06 11:45:53 -04:00
CrazyMax 15e930b691
Merge pull request #2619 from thompson-shaun/project-guide
docs: add project processing guide
2024-09-06 16:37:21 +02:00
CrazyMax abc5eaed88
Merge pull request #2676 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.1
build(deps): bump peter-evans/create-pull-request from 7.0.0 to 7.0.1
2024-09-06 16:37:07 +02:00
Talon Bowler f1b92e9e6c update Build commands to return dockerfile mapping for use in printing rule check warnings
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-09-06 07:34:13 -07:00
dependabot[bot] ad9a5196b3
build(deps): bump peter-evans/create-pull-request from 7.0.0 to 7.0.1
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.0 to 7.0.1.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](4320041ed3...8867c4aba1)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-05 18:55:59 +00:00
CrazyMax db117855da
Merge pull request #2675 from dvdksn/run-mount-secret-env
docs: update run mount secrets examples using env
2024-09-05 16:51:51 +02:00
CrazyMax ecfe98df6f
Merge pull request #2674 from crazy-max/update-authors
chore: update AUTHORS and mailmap
2024-09-05 15:17:12 +02:00
David Karlsson 479177eaf9 docs: update run mount secrets examples using env
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-09-05 11:51:00 +02:00
CrazyMax 194f523fe1
chore: update AUTHORS and mailmap
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-05 11:23:09 +02:00
CrazyMax 29d367bdd4
hack(authors): bump to alpine 3.20
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-05 11:18:08 +02:00
CrazyMax ed341bafd0
Merge pull request #2671 from tonistiigi/bake-net-docs
bake: add network mode support to HCL and docs for network and entitlements
2024-09-05 10:17:00 +02:00
CrazyMax c887c2c62a
Merge pull request #2673 from crazy-max/update-buildkit
vendor: update buildkit to v0.16.0-rc1
2024-09-04 21:04:15 +02:00
CrazyMax 7c481aae20
fix lint.PrintLintViolations signature change
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-04 18:44:07 +02:00
Tonis Tiigi f0f8876902
docs: add docs for bake network mode config
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-04 09:02:06 -07:00
Tonis Tiigi fa1d19bb1e
docs: add docs for bake entitlements config
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-04 09:01:59 -07:00
CrazyMax 7bea00f3dd
vendor: update buildkit to v0.16.0-rc1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-09-04 16:58:37 +02:00
Tonis Tiigi 83d5c0c61b
bake: allow setting networkmode in HCL/JSON
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-03 18:18:59 -07:00
Tõnis Tiigi e58a1d35d1
Merge pull request #2670 from docker/dependabot/github_actions/peter-evans/create-pull-request-7.0.0
build(deps): bump peter-evans/create-pull-request from 6.1.0 to 7.0.0
2024-09-03 14:44:14 -07:00
dependabot[bot] b920b08ad3
build(deps): bump peter-evans/create-pull-request from 6.1.0 to 7.0.0
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.1.0 to 7.0.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](c5a7806660...4320041ed3)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-03 18:50:45 +00:00
Tõnis Tiigi f369377d74
Merge pull request #2666 from tonistiigi/bake-entitlements
bake: enable support for entitlements
2024-09-03 10:49:48 -07:00
Tõnis Tiigi b7486e5cd5
Merge pull request #2647 from daghack/print-warning-count
build: print out the number of warnings after completing a rule check
2024-09-03 10:22:46 -07:00
Tonis Tiigi 5ecff53e0c
bake: read original command name from the env for prompt
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-09-03 08:54:42 -07:00
CrazyMax 48faab5890
Merge pull request #2652 from dvdksn/update-build-manuals-links
docs: update links to moved manuals pages
2024-09-03 10:42:38 +02:00
David Karlsson f77866f5b4 docs: update links to moved manuals pages
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-09-03 10:27:46 +02:00
Tonis Tiigi 203fd8aee5
bake: enable support for entitlements
Add support for security.insecure and network.host
entitlements via bake. The user needs to confirm elevated
privileges through a prompt or CLI flags.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-08-30 09:33:28 -07:00
Talon Bowler 806ccd3545 print out the number of warnings after completing a rule check
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-08-29 07:07:19 -07:00
Shaun Thompson d6e030eda7
docs: add project processing guide
Signed-off-by: Shaun Thompson <shaun.thompson@docker.com>
2024-08-27 11:59:01 -04:00
Tõnis Tiigi 96eb69aea4
Merge pull request #2663 from tonistiigi/git-attr-panic-fix
build: avoid possible panic when reading git info
2024-08-23 16:59:30 +03:00
Tonis Tiigi d1d8d6e19c
build: avoid possible panic when reading git info
Not all of the error cases from getGitAttributes returned
appendNoneFunc. When nil was returned, it caused a panic.
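
The defensive pattern, sketched with simplified signatures (the real
function returns buildx-specific types):

    import "errors"

    // Always return a callable no-op instead of nil so that callers can
    // invoke the result unconditionally.
    func getGitAttributes(dir string) (func(), error) {
        if dir == "" {
            return func() {}, errors.New("no git context") // never nil
        }
        // ... resolve git info ...
        return func() { /* append git attributes */ }, nil
    }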

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-08-23 16:42:04 +03:00
CrazyMax dc7f679ab1
Merge pull request #2660 from tonistiigi/debug-flag-cmds
commands: add debug as persistent flag
2024-08-22 17:33:06 +02:00
thompson-shaun e403ab2d63
Merge pull request #2656 from tonistiigi/repl-stdin
build: allow builds from stdin for multi-node builders
2024-08-22 11:28:55 -04:00
CrazyMax b6a2c96926
Merge pull request #2659 from dvdksn/docs-alerts-syntax
docs: use gh alert syntax for callouts
2024-08-20 15:13:34 +02:00
Tonis Tiigi 7a7a9c8e01
commands: add debug as persistent flag
Allows using `--debug` to enable debug logging under any subcommand.
Previously it needed to be set as `docker --debug buildx`, meaning the
only way to enable debug in standalone mode was to set an environment
variable, and updating existing commands to add `--debug` was
cumbersome.
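
In cobra terms the mechanics look roughly like this; rootCmd is a
stand-in for buildx's actual root command setup:

    import "github.com/spf13/cobra"

    func newRootCmd() *cobra.Command {
        rootCmd := &cobra.Command{Use: "buildx"}
        // A persistent flag is inherited by every subcommand, so both
        // `buildx build --debug` and `buildx bake --debug` work.
        rootCmd.PersistentFlags().Bool("debug", false, "enable debug logging")
        return rootCmd
    }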

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-08-16 13:20:28 +03:00
David Karlsson fa8f859159 docs: use gh alert syntax for callouts
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-08-16 11:08:03 +02:00
Tõnis Tiigi 8411a763d9
Merge pull request #2657 from jsternberg/metricwriter-race-condition
metrics: add mutex to the metric writer
2024-08-14 19:23:51 +03:00
Jonathan A. Sternberg 6c5279da54
metrics: add mutex to the metric writer
It was possible for multiple status messages to be written at the same
time, which caused a race condition in some of the metric writer code.

This code should be fast enough that it doesn't interrupt the display,
but some further work might be needed here.
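
The shape of the fix, as a sketch over a simplified writer type (the
real metricWriter records OpenTelemetry metrics from status messages):

    import "sync"

    type metricWriter struct {
        mu       sync.Mutex
        recorded int
    }

    // Write can be called concurrently for each status message, so shared
    // state is guarded by the mutex.
    func (mw *metricWriter) Write(status string) {
        mw.mu.Lock()
        defer mw.mu.Unlock()
        mw.recorded++ // stand-in for the actual metric recording
        _ = status
    }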

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-08-14 10:57:04 -05:00
Tõnis Tiigi 0e64eb4f8b
Merge pull request #2651 from tonistiigi/bake-wrap-target-name
build: when building multiple targets include name in error
2024-08-14 13:19:26 +03:00
Tonis Tiigi adbcc2225e
build: allow builds from stdin for multi-node builders
When building from the same stream, all nodes need to read data from
that stream. To achieve this, a new SyncMultiReader wrapper sends the
stream concurrently to all readers. Readers must read at a similar
speed or pauses will happen while they wait for each other.

Dockerfiles were already written to disk before being sent. Now the
file written by the first node is reused for the others.
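
A minimal fan-out sketch built from io.Pipe that shows the behavior
described above: each write blocks until every reader has consumed it,
so a slow reader stalls the rest. The real SyncMultiReader is more
elaborate:

    import "io"

    func fanOut(src io.Reader, n int) []io.Reader {
        readers := make([]io.Reader, n)
        writers := make([]io.Writer, n)
        pipes := make([]*io.PipeWriter, n)
        for i := range readers {
            pr, pw := io.Pipe()
            readers[i], writers[i], pipes[i] = pr, pw, pw
        }
        go func() {
            // MultiWriter writes to each pipe in turn; a pipe write only
            // returns once its reader has read the data.
            _, err := io.Copy(io.MultiWriter(writers...), src)
            for _, pw := range pipes {
                pw.CloseWithError(err) // err == nil closes with io.EOF
            }
        }()
        return readers
    }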

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-08-14 13:13:18 +03:00
CrazyMax e00efeb399
Merge pull request #2654 from crazy-max/rename-printfunc
chore: rename PrintFunc to CallFunc
2024-08-13 15:41:51 +02:00
CrazyMax d03c13b947
chore: rename PrintFunc to CallFunc
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-08-13 14:13:32 +02:00
Tõnis Tiigi 4787b5c046
Merge pull request #2649 from tonistiigi/bake-path-stdlib-functions
bake: add basename, dirname and sanitize functions
2024-08-13 13:15:12 +03:00
Tõnis Tiigi 1c66f293c7
Merge pull request #2650 from crazy-max/fix-subrequest-metadatafile
build: skip build ref and provenance metadata for subrequests
2024-08-13 13:13:35 +03:00
Tonis Tiigi 246a36d463
build: when building multiple targets include name in error
Some errors can appear without a stacktrace or progress record, e.g.
when a wrong Dockerfile name is passed. In that case, when building
many targets with bake, it can be hard to figure out which target
failed, as the progress bar will only show steps that were cancelled.
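
The gist in standard error-wrapping terms (a sketch; buildx wraps at
the point where each bake target's build returns):

    import "fmt"

    func buildTarget(name string, build func() error) error {
        if err := build(); err != nil {
            // Include the target name so failures without a progress
            // record can still be attributed.
            return fmt.Errorf("target %s: %w", name, err)
        }
        return nil
    }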

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-08-13 12:21:30 +03:00
Tonis Tiigi a4adae3d6b
bake: add basename, dirname and sanitize functions
These functions help with handling path inputs and using
parts of them to configure targets.
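
Such functions can be exposed to bake's HCL evaluator with go-cty's
function package; a sketch for basename (dirname and sanitize follow
the same pattern, and the exact wiring in buildx may differ):

    import (
        "path/filepath"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/function"
    )

    var basenameFunc = function.New(&function.Spec{
        Params: []function.Parameter{{Name: "path", Type: cty.String}},
        Type:   function.StaticReturnType(cty.String),
        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
            return cty.StringVal(filepath.Base(args[0].AsString())), nil
        },
    })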

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-08-13 11:46:04 +03:00
CrazyMax 36cd88f8ca
build: skip build ref and provenance metadata for subrequests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-08-13 10:40:40 +02:00
CrazyMax 07a85a544b
Merge pull request #2638 from crazy-max/update-buildkit
vendor: update buildkit to 664c2b469f19
2024-08-12 11:14:52 +02:00
CrazyMax f64b85afe6
build: update since session signature has changed
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-08-11 16:01:45 +02:00
CrazyMax 4b27fb3022
vendor: update buildkit to 664c2b469f19
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-08-11 09:46:35 +02:00
CrazyMax 38a8261f05
Merge pull request #2643 from crazy-max/fix-govulncheck
hack: ensure SARIF output has results field defined for govulncheck
2024-08-09 10:55:12 +02:00
CrazyMax a3e6f4be15
hack: ensure SARIF output has results field defined for govulncheck
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-08-09 10:47:33 +02:00
CrazyMax 6467a86427
Merge pull request #2610 from jsternberg/bake-metrics
metrics: add metrics for bake command
2024-08-09 10:05:05 +02:00
Jonathan A. Sternberg 58571ff6d6
metrics: add metrics for bake command
This adds metrics for the bake command using a different method of
calculating the build identifier but with the same attributes otherwise.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-08-08 10:00:11 -05:00
CrazyMax 71174c3041
Merge pull request #2642 from crazy-max/update-compose
vendor: update compose-go to v2.1.6
2024-08-08 16:46:51 +02:00
Jonathan A. Sternberg 16860e6dd2
Merge pull request #2640 from crazy-max/call-metadata
support metadata file with call flag for build and bake commands
2024-08-08 09:21:05 -05:00
CrazyMax 8e02b1a2f7
vendor: update compose-go to v2.1.6
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-08-08 14:06:59 +02:00
CrazyMax 531c6d4ff1
support metadata file with call flag for build and bake commands
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-08-08 10:50:37 +02:00
CrazyMax 238a3e03dd
Merge pull request #2641 from jsternberg/metricwriter-lazy-regexp
metricwriter: compile regular expressions only on first use
2024-08-07 17:54:59 +02:00
CrazyMax 9a0c320588
Merge pull request #2606 from crazy-max/builder-move-kube-cfg
builder: move kube config handling to k8s driver package
2024-08-07 14:44:20 +02:00
CrazyMax acf0216292
builder: move kube config handling to k8s driver package
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-08-07 14:01:33 +02:00
CrazyMax 5a50d13641
Merge pull request #2615 from tonistiigi/bake-shared-transfer-sessions
bake: use shared session for local sources for multiple targets
2024-08-07 12:53:03 +02:00
Jonathan A. Sternberg 2810f20f3a
metricwriter: compile regular expressions only on first use
Compile the regular expressions only on first use rather than
implicitly as part of the `init()` function of the package. This
prevents taking a speed hit on package initialization regardless of
whether this type is used, and moves the cost to the point where a
regular expression is first needed.
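
The pattern, sketched with sync.OnceValue (available since Go 1.21;
the rule-URL expression below is illustrative):

    import (
        "regexp"
        "sync"
    )

    // Compiled on first use instead of in package init, so commands that
    // never touch the metric writer don't pay for the compilation.
    var lintRuleRe = sync.OnceValue(func() *regexp.Regexp {
        return regexp.MustCompile(`rule/([A-Za-z0-9-]+)/?$`)
    })

    func ruleName(url string) (string, bool) {
        m := lintRuleRe().FindStringSubmatch(url)
        if len(m) < 2 {
            return "", false
        }
        return m[1], true
    }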

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-08-05 10:23:06 -05:00
CrazyMax e2f6808457
Merge pull request #2621 from thaJeztah/test_docker_27.1
Dockerfile: update to docker 27.1.1
2024-08-05 15:43:49 +02:00
CrazyMax 39bbb9e478
Merge pull request #2636 from crazy-max/fix-metadata-docs
docs: fix metadata section for build command
2024-08-05 15:21:25 +02:00
Sebastiaan van Stijn 771f0139ac
Dockerfile: update to docker 27.1.1
Also adding a DOCKER_CLI_VERSION build-arg, so that we can set versions
independently for (untagged) pre-releases.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-08-05 12:13:15 +02:00
CrazyMax 6034c58285
Merge pull request #2635 from crazy-max/labeler-sync-labels
ci: sync labels when files are reverted or no longer changed with labeler
2024-07-31 10:04:32 +02:00
CrazyMax 199890ff51
docs: fix metadata section for build command
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-30 17:56:10 +02:00
CrazyMax d391b1d3e6
ci: sync labels when files are reverted or no longer changed with labeler
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-30 12:48:31 +02:00
CrazyMax f4da6b8f69
Merge pull request #2631 from crazy-max/govulncheck
govulncheck to report known vulnerabilities
2024-07-30 12:37:43 +02:00
CrazyMax 386d599309
govulncheck to report known vulnerabilities
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-26 16:45:45 +02:00
thompson-shaun d130f8ef0a
Merge pull request #2628 from crazy-max/ci-skip-scout
ci: checkout step for scout job
2024-07-25 13:00:29 -04:00
CrazyMax b691a10379
Merge pull request #2620 from idnandre/test-multiplatform
tests: build multiplatform
2024-07-25 16:57:10 +02:00
CrazyMax e628f9ea14
Merge pull request #2629 from crazy-max/update-buildkit
vendor: update buildkit to v0.15.1
2024-07-25 16:45:00 +02:00
CrazyMax 0fb0b6db0d
vendor: update buildkit to v0.15.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-25 15:57:49 +02:00
CrazyMax 6efb1d7cdc
ci: skip scout job on forked repo
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-25 14:35:39 +02:00
CrazyMax bc2748da59
ci: checkout step for scout job
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-25 14:15:31 +02:00
CrazyMax d4c4632cf6
Merge pull request #2624 from crazy-max/scout-scan
ci: scan bin image with docker scout
2024-07-25 13:51:38 +02:00
Tõnis Tiigi cdd46af015
Merge pull request #2608 from crazy-max/bake-fix-printer
bake: fix printer handling
2024-07-24 11:04:09 -07:00
Tonis Tiigi b62d64b2b5
bake: use shared session for local sources for multiple targets
Detect cases where multiple bake targets would use the same
local source. For such cases, a separate session request is
made in addition to the session per target, and the local
source is made available in that session as well.

The new sessionID is sent with the request so the frontend
can associate it with the local source it needs.

The sources are still available in the main request session
as well. This is used if the frontend ignores the local-sessionid
parameter, and makes sure that old versions continue working.
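
A sketch of the detection step with simplified types; the real
implementation compares full local mounts rather than a single path:

    type target struct {
        Name       string
        ContextDir string
    }

    // groupByLocalSource buckets targets that share the same local
    // context so one extra shared session can serve all of them.
    func groupByLocalSource(targets []target) map[string][]target {
        groups := make(map[string][]target)
        for _, t := range targets {
            groups[t.ContextDir] = append(groups[t.ContextDir], t)
        }
        return groups
    }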

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-24 10:58:13 -07:00
CrazyMax 64171cb13e
bake: fix printer handling
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-24 19:54:11 +02:00
Tõnis Tiigi f28dff7598
Merge pull request #2599 from thaJeztah/bump_docker_deps
vendor: update indirect dependencies for docker/cli
2024-07-24 10:52:57 -07:00
Tõnis Tiigi 3d542f3d31
Merge pull request #2601 from tonistiigi/init-fixes
Improvements based on inittrace
2024-07-24 10:52:25 -07:00
Tõnis Tiigi 30dbdcfa3e
Merge pull request #2607 from tonistiigi/locals-git-refactor
build: refactor setting git info to local mounts
2024-07-24 10:52:05 -07:00
Tõnis Tiigi 16518091cd
Merge pull request #2600 from blampe/blampe/build-config
build: don't force default configuration
2024-07-24 10:51:53 -07:00
Tõnis Tiigi 897fc91802
Merge pull request #2625 from ndeloof/bump-compose-v2.29.1
Bump compose-go v2.1.5
2024-07-23 14:00:00 -07:00
Nicolas De Loof c4d3011a98
Bump compose-go v2.1.5
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2024-07-23 17:06:43 +02:00
CrazyMax a47f761c55
ci: scan bin image with docker scout
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-23 16:26:38 +02:00
CrazyMax aa35c954f3
Merge pull request #2618 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.8
build(deps): bump softprops/action-gh-release from 2.0.6 to 2.0.8
2024-07-22 11:41:17 +02:00
idnandre 56df4e98a0 tests: build multiplatform
Signed-off-by: idnandre <andre@idntimes.com>
2024-07-20 17:15:00 +07:00
dependabot[bot] 9f00a9eafa
build(deps): bump softprops/action-gh-release from 2.0.6 to 2.0.8
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.6 to 2.0.8.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](a74c6b72af...c062e08bd5)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-19 13:54:21 +00:00
CrazyMax 56cb197c0a
Merge pull request #2616 from crazy-max/chore-fix-labels
chore: update dependabot labels
2024-07-19 15:53:53 +02:00
CrazyMax 466006849a
chore: update dependabot labels
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-19 11:39:51 +02:00
Tõnis Tiigi 738f5ee9db
Merge pull request #2612 from daghack/debug-msg-clarification
update warning message
2024-07-18 09:16:39 -07:00
Tõnis Tiigi 9b49cf3ae6
Merge pull request #2603 from crazy-max/bake-fix-progress-panic
bake: check printer before printing warnings
2024-07-18 08:59:42 -07:00
CrazyMax bd0b425734
test: bake print
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-18 17:40:08 +02:00
CrazyMax 7823a2dc01
bake: check printer before printing warnings
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-18 17:40:07 +02:00
Talon Bowler cedbc5d68d clarify the appropriate place to use the debug flag when viewing warnings
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-07-18 08:36:34 -07:00
CrazyMax 12d431d1b4
Merge pull request #2609 from glours/bump-compose-go-v2.1.4
bump compose-go to v2.1.4
2024-07-17 18:18:12 +02:00
Guillaume Lours ca452c47d8
bump compose-go to v2.1.4
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-07-17 16:57:39 +02:00
Tonis Tiigi d8f26f79ed
build: refactor setting git info to local mounts
This is a preparation to shared local sources for bake
targets and makes it possible to have equality check
between locals from different targets.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-16 21:50:06 -07:00
CrazyMax 4304d388ef
driver: refactor GetDriver func to take init config
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-16 14:34:09 +02:00
Tonis Tiigi 96509847b9
remote: avoid signal names map on init
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-15 16:40:06 -07:00
Tonis Tiigi 52bb668085
remoteutil pkg: remove unnecessary map initialization
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-15 16:39:58 -07:00
Tonis Tiigi 85cf3bace9
hclparser: avoid unnecessary allocations in init
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-15 13:37:33 -07:00
Tonis Tiigi b92bfb53d2
update error handling allocations and comparisons
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-15 13:37:23 -07:00
Bryce Lampe 6c929a45c7 build: don't force default configuration
Signed-off-by: Bryce Lampe <bryce@pulumi.com>
2024-07-15 12:32:23 -07:00
Sebastiaan van Stijn d296d5d46a
vendor: google.golang.org/appengine v1.6.8
full diff: https://github.com/golang/appengine/compare/v1.6.7...v1.6.8

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-15 12:47:27 +02:00
Sebastiaan van Stijn 6e433da23f
vendor: github.com/gorilla/mux v1.8.1
full diff: https://github.com/gorilla/mux/compare/v1.8.0...v1.8.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-15 12:46:33 +02:00
CrazyMax 3005743f7c
Merge pull request #2594 from tonistiigi/golangci-validate
fix issues in .golangci.yml and add validation check
2024-07-15 08:57:09 +02:00
CrazyMax d64d3a4caf
Merge pull request #2592 from tonistiigi/vendor-distribution-v2.8.3
vendor: update docker/distribution to v2.8.3
2024-07-15 08:55:59 +02:00
Tonis Tiigi 0d37d68efd
fix issues in .golangci.yml and add validation check
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-12 16:18:42 -07:00
Tonis Tiigi 03a691a0a5
vendor: update docker/distribution to v2.8.3
Gets rid of duplicate reference package

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-11 17:43:41 -07:00
Tõnis Tiigi fa392a2dca
Merge pull request #2589 from tonistiigi/vendor-buildkit-v0.15.0
vendor: update buildkit to v0.15.0
2024-07-11 11:35:41 -07:00
Tonis Tiigi 470e45e599
vendor: update buildkit to v0.15.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-11 11:23:39 -07:00
Tõnis Tiigi 2a2648b1db
Merge pull request #2588 from tonistiigi/vendor-buildkit-v0.15.0-rc2
vendor: update buildkit to v0.15.0-rc2
2024-07-10 15:35:25 -07:00
Tonis Tiigi ac930bda69
vendor: update buildkit to v0.15.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-10 15:03:14 -07:00
Tõnis Tiigi 6791ecb628
Merge pull request #2587 from tonistiigi/update-moby-27.0.3
Dockerfile: update moby for testing to v27.0.3
2024-07-10 13:06:16 -07:00
Tonis Tiigi d717237e4f
Dockerfile: update moby for testing to v27.0.3
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-10 11:16:38 -07:00
Tõnis Tiigi ee642ecc4c
Merge pull request #2578 from crazy-max/driver-client-opt
driver: allow arbitrary client opts
2024-07-10 09:55:15 -07:00
Tõnis Tiigi 06d96d665e
Merge pull request #2584 from tonistiigi/bake-test-fix
bake: fix testing json formatted output
2024-07-09 12:40:51 -07:00
Tõnis Tiigi dc83501a5b
Merge pull request #2583 from tonistiigi/bake-implicit-cacheonly
bake: use cacheonly exporter for implicit targets
2024-07-09 12:40:21 -07:00
Tonis Tiigi 0f74f9a794
bake: fix testing json formatted output
Because the test checked the combined output, it could contain
internal warning messages from stderr. JSON output is only
guaranteed on stdout.
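
In Go terms, the fix amounts to parsing only stdout, sketched here
with os/exec (the actual tests use buildx's own test helpers):

    import (
        "encoding/json"
        "os/exec"
    )

    func bakePrint(args ...string) (map[string]any, error) {
        cmd := exec.Command("buildx", append([]string{"bake", "--print"}, args...)...)
        out, err := cmd.Output() // stdout only; stderr warnings are excluded
        if err != nil {
            return nil, err
        }
        var def map[string]any
        if err := json.Unmarshal(out, &def); err != nil {
            return nil, err
        }
        return def, nil
    }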

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-08 18:32:37 -07:00
Tonis Tiigi 6d6adc11a1
bake: use cacheonly exporter for implicit targets
Clearing the exporter may result in default export
behavior from the driver.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-08 17:53:52 -07:00
Tõnis Tiigi 68076909b9
Merge pull request #2579 from crazy-max/fix-compose-project-name
bake: use compose project name from env if set
2024-07-08 11:20:42 -07:00
CrazyMax 7957b73a30
bake: use compose project name from env if set
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-04 16:37:07 +02:00
CrazyMax 1dceb49a27
driver: allow arbitrary client opts
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-04 16:36:55 +02:00
Tõnis Tiigi b96ad59f64
Merge pull request #2577 from tonistiigi/vendor-buildkit-v0.15.0-rc1
vendor: update buildkit to v0.15.0-rc1
2024-07-03 12:51:46 -07:00
Tonis Tiigi 50aa895477
vendor: update buildkit to v0.15.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 12:43:04 -07:00
Tõnis Tiigi 74374ea418
Merge pull request #2576 from crazy-max/bake-call-check-docs
docs: link to build ref page for --call and --check ref with bake
2024-07-03 11:35:52 -07:00
CrazyMax 6bbe59697a
docs: link to build ref page for --call and --check ref with bake
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-03 20:19:56 +02:00
Tõnis Tiigi c51004e2e4
Merge pull request #2556 from tonistiigi/bake-call
bake: add call methods support and printing
2024-07-03 10:40:32 -07:00
Tõnis Tiigi 8535c6b455
Merge pull request #2562 from dvdksn/docs-buildx-b-call
docs: reference description for --call and --check
2024-07-03 10:17:33 -07:00
CrazyMax 153e5ed274
mark list-targets and list-variables as hidden and experimental
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-03 09:54:09 -07:00
Tonis Tiigi cc097db675
bake: fix printer reset before metadata written
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:09 -07:00
Tonis Tiigi 35313e865f
bake: add tests for call and list
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:03 -07:00
Tonis Tiigi 233b869c63
bake: add list-variables option
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:03 -07:00
Tonis Tiigi 7460f049f2
bake: add list-targets options to list available targets/groups
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:03 -07:00
Tonis Tiigi 8f4c8b094a
bake: allow text descriptions for targets
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:03 -07:00
Tonis Tiigi 8da28574b0
bake: add call methods support and printing
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-03 09:54:02 -07:00
Tõnis Tiigi 7e49141c4e
Merge pull request #2574 from crazy-max/update-mod-outdated
dockerfile: update go-mod-outdated to v0.9.0
2024-07-03 09:42:04 -07:00
David Karlsson 5ec703ba10 docs: reference description for --call and --check
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 18:34:51 +02:00
Tõnis Tiigi 1ffc6f1d58
Merge pull request #2572 from crazy-max/build-ref-multi-nodes
build: set same ref when building on multiple nodes
2024-07-03 09:07:46 -07:00
CrazyMax f65631546d
dockerfile: update go-mod-outdated to v0.9.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-03 15:37:28 +02:00
CrazyMax 6fc19c4024
build: set same ref when building on multiple nodes
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-07-03 15:06:53 +02:00
CrazyMax 5656c98133
Merge pull request #2565 from dvdksn/cli-docs-tool-v0.8.0
cli docs tool v0.8.0
2024-07-03 11:55:18 +02:00
David Karlsson 263a9ddaee chore: regenerate docs
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 11:48:11 +02:00
David Karlsson 1774aa0cf0 vendor: github.com/docker/cli-docs-tool v0.8.0
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 11:47:04 +02:00
CrazyMax 7b80ad7069
Merge pull request #2569 from dvdksn/fix-alias
fix: buildx b alias
2024-07-03 10:14:45 +02:00
CrazyMax c0c4d7172b
Merge pull request #2567 from tonistiigi/update-buildkit-0702
vendor: update buildkit to f7bda278b7e2
2024-07-03 10:09:10 +02:00
CrazyMax e498ba9c27
Merge pull request #2568 from tonistiigi/go-1.22
update Go to 1.22
2024-07-03 10:08:25 +02:00
David Karlsson 2e7e7abe42 test: add test for building with alias "buildx b"
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 10:04:09 +02:00
David Karlsson 048ef1fbf8 fix: buildx b alias
the shorthand "b" alias was accidentally removed in 19d838a

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-03 10:03:01 +02:00
Tonis Tiigi cbe7901667
update Go to 1.22
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-02 22:27:43 -07:00
Tonis Tiigi f374f64d2f
vendor: update buildkit to f7bda278b7e2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-07-02 22:24:55 -07:00
Tõnis Tiigi 4be2259719
Merge pull request #2501 from tonistiigi/remote-client-cache
remote: ensure that client connection is not established twice
2024-07-02 09:30:32 -07:00
Tõnis Tiigi 6627f315cb
Merge pull request #2397 from dvdksn/buildx_build_canonical
docs: make buildx build the canonical doc
2024-07-02 09:17:01 -07:00
David Karlsson 19d838a3f4 docs: make buildx build the canonical doc
Move descriptions of flags common with the legacy build client to the buildx
build reference doc.

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-02 15:54:10 +02:00
CrazyMax 17878d641e
Merge pull request #2534 from tonistiigi/bake-warnings
bake: print warnings on progress
2024-07-01 18:16:52 +02:00
Tõnis Tiigi 63eb73d9cf
Merge pull request #2560 from crazy-max/fix-localstate-remote
build: fix localstate for remote context and stdin
2024-06-28 16:56:53 -07:00
Tõnis Tiigi 59a0ffcf83
Merge pull request #2546 from treuherz/multinode-annotations
Pass in index annotations from builds on multiple nodes
2024-06-28 16:46:20 -07:00
Tõnis Tiigi 2b17f277a1
Merge pull request #2549 from daghack/warning-free-msg
Add message when --check does not produce warnings or errors
2024-06-28 16:45:57 -07:00
Tõnis Tiigi ea7c8e83d2
Merge pull request #2559 from tonistiigi/update-buildkit-0627
use csvvalue package for parsing csv inputs
2024-06-28 16:42:10 -07:00
Tõnis Tiigi 9358c45b46
Merge pull request #2558 from tonistiigi/fix-sharedkey-for-context
build: fix sharedkey computation for local context
2024-06-28 16:41:30 -07:00
CrazyMax cfb7fc4fb5
build: fix localstate for remote context and stdin
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-28 14:56:45 +02:00
CrazyMax d4b112ab05
test: build remote
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-28 11:06:09 +02:00
Tonis Tiigi f7a32361ea
use csvvalue package for parsing csv inputs
This package is better suited for parsing single-line
CSV strings.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-27 21:31:11 -07:00
Tonis Tiigi af902caeaa
vendor: update buildkit to 8397d0b9
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-27 20:44:07 -07:00
Tõnis Tiigi 04000db8da
Merge pull request #2499 from thaJeztah/bump_buildkit
vendor: buildkit, docker/docker and docker/cli v27.0.1
2024-06-27 20:42:35 -07:00
Tonis Tiigi b8da14166c
build: fix sharedkey computation for local context
When LocalDirs were changed to LocalMounts, this broke the
sharedKey computation that was based on the context directory
path. SharedKey defines whether a directory is a valid candidate
for incremental context transfer; if it is not set properly,
different directories do metadata-based transfers to the same
destination.
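
A sketch of what a path-based shared key looks like; the hash choice
and inputs are illustrative, and buildkit's actual computation takes
more into account:

    import (
        "crypto/sha256"
        "encoding/hex"
    )

    // sharedKey is stable for a given context directory, so repeated
    // builds from that directory remain candidates for incremental
    // context transfer.
    func sharedKey(contextDir string) string {
        h := sha256.Sum256([]byte(contextDir))
        return hex.EncodeToString(h[:])
    }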

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-27 17:53:22 -07:00
Tonis Tiigi c1f680df14
bake: print warnings on progress
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-27 17:31:15 -07:00
Talon Bowler b6482ab6bb Add message when --check does not produce warnings or errors
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-27 08:24:00 -07:00
Eli Treuherz 6f45b0ea06 Get annotations from exports
Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-27 13:26:07 +01:00
Eli Treuherz 3971361ed2 Pass in index annotations from builds on multiple nodes
Fixes #2540

Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-27 13:26:07 +01:00
Tõnis Tiigi 818045482e
Merge pull request #2522 from treuherz/annotation-per-type
Make multi-type annotation settings match docs
2024-06-26 10:07:27 -07:00
CrazyMax f8e1746d0d
Merge pull request #2557 from thaJeztah/test_bump-buildkit
dockerfile, gha: update buildkit to 0.13.2, 0.14.1
2024-06-26 16:54:44 +02:00
Sebastiaan van Stijn 92a6799514
dockerfile, gha: update buildkit to 0.13.2, 0.14.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:52:28 +02:00
Sebastiaan van Stijn 9358f84668
vendor: buildkit, docker/docker and docker/cli v27.0.1
diffs:

- https://github.com/docker/cli/compare/v26.1.4..v27.0.1
- https://github.com/docker/docker/compare/v26.1.4..v27.0.1
- https://github.com/moby/buildkit/compare/v0.14.1...aaaf86e5470bffbb395f5c15ad4a1c152642ea30

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:31:47 +02:00
Sebastiaan van Stijn dbdd3601eb
vendor: github.com/containerd/ttrpc v1.2.5
full diff: https://github.com/containerd/ttrpc/compare/v1.2.4...v1.2.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:30:40 +02:00
Sebastiaan van Stijn a3c8a72b54
vendor: github.com/klauspost/compress v1.17.9
full diff: https://github.com/klauspost/compress/compare/v1.17.4...v1.17.9

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:30:40 +02:00
Sebastiaan van Stijn 4c3af9becf
vendor: golang.org/x/sys v0.20.0
full diff: https://github.com/golang/sys/compare/v0.18.0...v0.20.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 15:30:37 +02:00
CrazyMax d8c9ebde1f
Merge pull request #2551 from crazy-max/metadata-warnings-2
build: opt to set progress warnings in response
2024-06-26 08:16:29 +02:00
CrazyMax 01a50aac42
printer: dedup warnings
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-26 06:53:35 +02:00
CrazyMax f7bcafed21
build: opt to set progress warnings in response
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-26 06:53:35 +02:00
Tõnis Tiigi e5ded4b2de
Merge pull request #2521 from crazy-max/fix-buildinfo
fix assignment of buildinfo-attrs for exporter
2024-06-25 11:36:14 -07:00
Tõnis Tiigi 6ef443de41
Merge pull request #2550 from crazy-max/provenance-mode-commands
build: read provenance response mode in commands pkg
2024-06-25 11:34:23 -07:00
Tõnis Tiigi 076e19d0ce
Merge pull request #2555 from crazy-max/compose-test-cgroup
test: compose cgroup property
2024-06-25 11:25:59 -07:00
CrazyMax 5599699d29
test: compose cgroup property
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-25 18:22:50 +02:00
CrazyMax d155747029
build: read provenance response mode in commands pkg
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-24 14:32:26 +02:00
CrazyMax 9cebd0c80f
Merge pull request #2545 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.6
build(deps): bump softprops/action-gh-release from 2.0.5 to 2.0.6
2024-06-24 14:30:17 +02:00
CrazyMax 7b1ec7211d
Merge pull request #2547 from glours/bump-compose-go-v2.1.3
bump compose-go to version v2.1.3
2024-06-21 15:20:18 +02:00
Guillaume Lours 689fd74104
bump compose-go to version v2.1.3
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-06-21 15:04:10 +02:00
dependabot[bot] 0dfd315daa
build(deps): bump softprops/action-gh-release from 2.0.5 to 2.0.6
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.5 to 2.0.6.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](69320dbe05...a74c6b72af)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-20 18:46:57 +00:00
CrazyMax 9b100c2552
Merge pull request #2541 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.1.0
build(deps): bump peter-evans/create-pull-request from 6.0.5 to 6.1.0
2024-06-20 19:28:48 +02:00
Tõnis Tiigi 92aaaa8f67
Merge pull request #2524 from thaJeztah/test_engine_27.0
Dockerfile: update docker engine to 27.0.0-rc.2
2024-06-20 10:28:01 -07:00
Tõnis Tiigi 6111d9a00d
Merge pull request #2531 from dvdksn/docs-xref-exportermanuals
docs: link to exporter descriptions from reference docs
2024-06-20 10:26:22 -07:00
Tõnis Tiigi 310aaf1891
Merge pull request #2543 from thaJeztah/bump_testify
vendor: github.com/stretchr/testify v1.9.0
2024-06-20 10:25:37 -07:00
Tõnis Tiigi 6c7e65c789
Merge pull request #2544 from thaJeztah/bump_cobra
vendor: github.com/spf13/cobra v1.8.1
2024-06-20 10:25:16 -07:00
CrazyMax 66b0abf078
Merge pull request #2536 from thompson-shaun/pr-labeler
ci: add pr-labeler
2024-06-20 15:26:28 +02:00
Sebastiaan van Stijn 6efa26c2de
vendor: github.com/spf13/cobra v1.8.1
full diff: https://github.com/spf13/cobra/compare/v1.8.0...v1.8.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-20 15:07:27 +02:00
Sebastiaan van Stijn 5b726afa5e
vendor: github.com/stretchr/testify v1.9.0
full diff: https://github.com/stretchr/testify/compare/v1.8.4...v1.9.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-20 15:05:45 +02:00
dependabot[bot] 009f318bbd
build(deps): bump peter-evans/create-pull-request from 6.0.5 to 6.1.0
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.5 to 6.1.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](6d6857d369...c5a7806660)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 19:03:17 +00:00
Tõnis Tiigi 9f7c8ea3fb
Merge pull request #2538 from tonistiigi/lint-fallback-v1.8.1
build: update lint fallback image to dockerfile 1.8.1
2024-06-18 09:51:41 -07:00
Tonis Tiigi be12199eb9
build: update lint fallback image to dockerfile 1.8.1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-18 09:33:55 -07:00
Tõnis Tiigi 94355517c4
Merge pull request #2537 from tonistiigi/update-buildkit-v0.14.1
vendor: update buildkit v0.14.1
2024-06-18 09:20:28 -07:00
Tonis Tiigi cb1be7214a
vendor: update buildkit v0.14.1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-18 09:07:21 -07:00
CrazyMax f42a4a1e94
Merge pull request #2533 from docker/dependabot/github_actions/docker/bake-action-5
build(deps): bump docker/bake-action from 4 to 5
2024-06-18 17:58:21 +02:00
Shaun Thompson 4d7365018c
ci: add pr-labeler
Signed-off-by: Shaun Thompson <shaun.thompson@docker.com>
2024-06-18 09:10:01 -04:00
Eli Treuherz 3d0951b800 Reduce regex usage in annotation parser
Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-18 12:31:02 +01:00
Eli Treuherz bcd04d5a64 Style fixes to test
Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-18 12:31:02 +01:00
Eli Treuherz b00001d8ac Make multi-type annotation settings match docs
The Docker docs in multiple places describe passing an annotation at the
command line like "index,manifest:com.example.name=my-cool-image", and
say that this will result in the annotation being applied to both the
index and the manifest. It doesn't seem like this was actually
implemented, and instead it just results in an annotation key with
"index,manifest:" at the beginning being applied to the manifest.

This change splits the part of the key before the colon by comma, and
creates an annotation for each type/platform given, so the
implementation should now match the docs.
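
A simplified sketch of that parsing (error handling and platform
qualifiers omitted; names are illustrative):

    import "strings"

    // parseAnnotation expands "index,manifest:key=value" into one
    // (type, key, value) entry per listed type.
    func parseAnnotation(s string) [][3]string {
        typePart := "manifest" // default when no prefix is given
        kv := s
        if before, after, ok := strings.Cut(s, ":"); ok {
            typePart, kv = before, after
        }
        key, value, _ := strings.Cut(kv, "=")
        var out [][3]string
        for _, typ := range strings.Split(typePart, ",") {
            out = append(out, [3]string{typ, key, value})
        }
        return out
    }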

Signed-off-by: Eli Treuherz <et@arenko.group>
2024-06-18 12:31:02 +01:00
Tõnis Tiigi 31187735de
Merge pull request #2535 from thaJeztah/bump_credshelpers
vendor: github.com/docker/docker-credential-helpers v0.8.2
2024-06-17 16:39:23 -07:00
Sebastiaan van Stijn 3373a27f1f
vendor: github.com/docker/docker-credential-helpers v0.8.2
full diff: https://github.com/docker/docker-credential-helpers/compare/v0.8.0...v0.8.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-17 23:58:24 +02:00
Sebastiaan van Stijn 56698805a9
Dockerfile: update docker engine to 27.0.0-rc.2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-17 23:55:05 +02:00
dependabot[bot] 4c2e0c4307
build(deps): bump docker/bake-action from 4 to 5
Bumps [docker/bake-action](https://github.com/docker/bake-action) from 4 to 5.
- [Release notes](https://github.com/docker/bake-action/releases)
- [Commits](https://github.com/docker/bake-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/bake-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-17 18:12:10 +00:00
David Karlsson fb6a3178c9 docs: link to exporter descriptions from reference docs
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-06-17 17:51:10 +02:00
Tõnis Tiigi 8ca18dee2d
Merge pull request #2518 from daghack/handle-build-err-during-lint-request
update the lint subrequest call to error
2024-06-14 19:44:13 -07:00
Tõnis Tiigi 917d2f4a0a
Merge pull request #2523 from thaJeztah/test_engine_26.1
Dockerfile: update docker engine to 26.1.4
2024-06-14 19:32:47 -07:00
Talon Bowler 366328ba6a Add comment to document the purpose behind the non-standard handling of the error
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-13 16:11:35 -07:00
Sebastiaan van Stijn 5f822b36d3
Dockerfile: update docker engine to 26.1.4
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-13 19:22:19 +02:00
Tõnis Tiigi e423d096a6
Merge pull request #2508 from crazy-max/integration-tests-coverage
test: setup integration tests coverage
2024-06-13 10:10:32 -07:00
Talon Bowler 927fb6731c update the lint subrequest call to error when a build error was encountered during linting
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-13 09:47:05 -07:00
CrazyMax 314ca32446
fix assignment of buildinfo-attrs for exporter
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-13 10:56:32 +02:00
CrazyMax 3b25e3fa5c
Merge pull request #2516 from thaJeztah/remove_c8d_errdefs
remove use of deprecated containerd/containerd/errdefs
2024-06-12 09:35:17 +02:00
CrazyMax 41d369120b
ci: enable disable_file_fixes in codecov action
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-12 08:47:48 +02:00
CrazyMax 56ffe55f81
codecov: exclude generated files
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-12 08:47:47 +02:00
CrazyMax 6d5823beb1
test: setup integration tests coverage
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-12 08:46:49 +02:00
Sebastiaan van Stijn c116af7b82
remove use of deprecated containerd/containerd/errdefs
This package has moved to a separate module. Also added linting
rules to prevent accidental reintroduction.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-12 01:12:59 +02:00
CrazyMax fb130243f8
Merge pull request #2515 from crazy-max/bump-buildkit
testing: update buildkit to 0.14.0
2024-06-11 23:48:33 +02:00
Tõnis Tiigi 29c8107b85
Merge pull request #2514 from crazy-max/align-build-checks-tests
test: align build call tests
2024-06-11 13:51:46 -07:00
CrazyMax ee3baa54f7
dockerfile: update buildkit to 0.14.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-11 20:38:39 +02:00
CrazyMax 9de95d81eb
test: align build call tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-11 20:07:23 +02:00
Tõnis Tiigi d3a53189f7
Merge pull request #2513 from tonistiigi/lint-fallback-1.8.0
build: update lint fallback image to dockerfile 1.8.0
2024-06-11 10:26:20 -07:00
Tonis Tiigi 0496dae9d5
build: update lint fallback image to dockerfile 1.8.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-11 10:12:52 -07:00
Tõnis Tiigi 40fcf992b1
Merge pull request #2512 from tonistiigi/0611-update-buildkit
vendor: update buildkit to v0.14.0
2024-06-11 10:11:34 -07:00
Tonis Tiigi 85c25f719c
vendor: update buildkit to v0.14.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-11 09:56:59 -07:00
CrazyMax 875e4cd52e
Merge pull request #2510 from crazy-max/ci-ubuntu24.04
ci: switch to ubuntu-24.04 runner
2024-06-11 15:36:04 +02:00
CrazyMax 24cedc6c0f
ci: switch to ubuntu-24.04 runner
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-11 14:32:54 +02:00
Tõnis Tiigi 59f52c9505
Merge pull request #2507 from daghack/update-lint-metric-regex
Update the lint metrics to match against the rule URL
2024-06-10 13:33:25 -07:00
Talon Bowler 1e916ae6c6 add length check for lint message regex result
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-10 12:26:32 -07:00
Talon Bowler d342cb9d03 vendor golang.org/x/text dependency
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-10 12:17:48 -07:00
Talon Bowler 9fdc99dc76 Update the lint metrics to match agains the rule URL rather than a prefix on the lint rule
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-10 12:11:50 -07:00
Akihiro Suda ab835fd904
Merge pull request #2504 from thaJeztah/bump_pty
vendor: github.com/creack/pty v1.1.21
2024-06-10 06:43:36 +09:00
Sebastiaan van Stijn 87efbd43b5
vendor: github.com/creack/pty v1.1.21
full diff: https://github.com/creack/pty/compare/v1.1.18...v1.1.21

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-08 17:47:33 +02:00
Tõnis Tiigi 39db6159f9
Merge pull request #2503 from tonistiigi/20240606-update-buildkit
update buildkit to v0.14.0-rc2
2024-06-06 16:30:31 -07:00
Tonis Tiigi 922328cbaf
build: update lint fallback image to v1.8.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-06 16:18:43 -07:00
Tonis Tiigi aa0f90fdd6
vendor: update buildkit to v0.14.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-06 16:17:17 -07:00
CrazyMax 82b6826cd7
Merge pull request #2500 from dvdksn/doc-rawjson
docs: mention rawjson progress output mode
2024-06-06 17:26:14 +02:00
David Karlsson 1e3aec1ae2
docs: mention rawjson progress output mode
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-06-06 16:51:42 +02:00
CrazyMax cfef22ddf0
Merge pull request #2502 from thaJeztah/bump_compose_go
vendor: github.com/compose-spec/compose-go/v2 v2.1.2
2024-06-06 15:31:51 +02:00
Sebastiaan van Stijn 9e5ba66553
vendor: github.com/compose-spec/compose-go/v2 v2.1.2
Replaces uses of the github.com/mitchellh/mapstructure module, which
was deprecated by the owner and moved to new maintainership at
github.com/go-viper/mapstructure.

The old module is still referenced as indirect dependency (through
docker/cli and theupdateframework/notary), but not used in code, and
should eventually go away.

full diff: https://github.com/compose-spec/compose-go/compare/v2.1.1...v2.1.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-06 12:44:51 +02:00
Tonis Tiigi 9ceda78057
remote: ensure that client connection is not established twice
Because the remote driver implements Info() by calling
Client() internally, two instances of Client were created,
backed by separate TCP connections. This hack avoids that
and improves performance.
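
The shape of the fix, sketched with sync.Once over a hypothetical
dial function (the real driver caches buildkit's client):

    import "sync"

    type Conn struct{} // stand-in for the buildkit client

    type Driver struct {
        once   sync.Once
        client *Conn
        err    error
    }

    // Client dials at most once, so Info() can call Client() freely
    // without opening a second TCP connection.
    func (d *Driver) Client() (*Conn, error) {
        d.once.Do(func() {
            d.client, d.err = dial()
        })
        return d.client, d.err
    }

    func dial() (*Conn, error) { return &Conn{}, nil } // hypothetical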

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-05 22:04:59 -07:00
CrazyMax 747b75a217
Merge pull request #2497 from crazy-max/fix-k8s-kubeconfig
k8s: fix concurrent kubeconfig access when loading nodes
2024-06-04 12:10:44 +02:00
Tõnis Tiigi d8de5bb345
Merge pull request #2498 from tonistiigi/0603-lint-fallback-update
build: update lint fallback image
2024-06-03 13:28:21 -07:00
Tonis Tiigi eff1850d53
build: update lint fallback image
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-03 13:17:52 -07:00
Tõnis Tiigi a24043e9f1
Merge pull request #2487 from daghack/call-lint
Rename --print to --call and make previous name hidden
2024-06-03 12:39:43 -07:00
Tonis Tiigi 0902294e1a
ensure call aliases also work with formatting parameters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-03 12:09:47 -07:00
Jonathan A. Sternberg ef4a165e48
commands: add an alias for --check to be the same as --call=check
This adds an alias for `--check` that causes it to behave the same as
`--call=check`. This is done using `BoolFunc` to call a function when
the option is seen and to set it to the correct value. This should allow
command line flags like `--check --call=targets` to work correctly (even
though they conflict) by making it so the first invocation sets the
print function to `check` and the second overwrites the first. This is
the expected behavior for these types of boolean flags.

`BoolFunc` itself is part of the standard library flag package, but
never seems to have made it into pflag, possibly because it was only
added in Go 1.21.

https://pkg.go.dev/flag#FlagSet.BoolFunc
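
Sketched with the standard library's FlagSet; with pflag the same
effect needs a small custom Value implementation:

    import "flag"

    func setupFlags(fs *flag.FlagSet) *string {
        call := fs.String("call", "build", "set method for evaluating build")
        // --check behaves exactly like --call=check; the last flag on the
        // command line wins.
        fs.BoolFunc("check", "shorthand for --call=check", func(string) error {
            *call = "check"
            return nil
        })
        return call
    }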

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-06-03 13:25:21 -05:00
Tonis Tiigi 89810dc998
build: set default call method name to build
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-06-03 10:42:35 -07:00
Talon Bowler 250cd44d70
Adds a --call flag as an alias to the --print flag and hides the latter.
Signed-off-by: Talon Bowler <talon.bowler@docker.com>
2024-06-03 10:30:30 -07:00
Tõnis Tiigi 5afb210d43
Merge pull request #2491 from jsternberg/update-buildkit
vendor: update buildkit to v0.14.0-rc1
2024-06-03 10:23:26 -07:00
Tõnis Tiigi 03f84d2e83
Merge pull request #2496 from crazy-max/dial-cmd-flags
dial-stdio: remove extra cmd.flags()
2024-06-03 09:08:32 -07:00
CrazyMax 945e774a02
k8s: fix concurrent kubeconfig access when loading nodes
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-03 16:16:24 +02:00
CrazyMax 947d6023e4
Merge pull request #2492 from crazy-max/k8s-timeout
k8s: opt to customize timeout during deployment
2024-06-03 16:10:05 +02:00
CrazyMax c58599ca50
dial-stdio: remove extra cmd.flags()
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-03 14:14:55 +02:00
CrazyMax f30e143428
k8s: rename timeout opt and move it out of deployment manifest
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-06-03 10:30:06 +02:00
Arnold Sobanski 53b7cbc5cb
Add parameter provisioningTimeout to Kubernetes driver options.
Signed-off-by: Arnold Sobanski <arnold@l4g.dev>
2024-06-03 10:08:03 +02:00
Tonis Tiigi 9a30215886
tests: avoid early shutdown of sandbox
Because the sandbox is closed down when the main test that
created it returns, it can't have subtests that mark themselves
as parallel, as they would continue to run in a different
lifecycle.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-31 17:38:32 -07:00
Jonathan A. Sternberg b1cb658a31
vendor: update buildkit to v0.14.0-rc1
Update buildkit dependency to v0.14.0-rc1. Update the tracing
infrastructure to use the new detect API which updates how the delegated
exporter is configured.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-05-31 16:23:41 -05:00
Tõnis Tiigi bc83ecb538
Merge pull request #2490 from tonistiigi/lint-fallback-update
build: update --print fallback image to 1.8.0-rc1
2024-05-31 14:08:08 -07:00
Tonis Tiigi ceaa4534f9
build: update --print fallback image to 1.8.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-31 13:57:56 -07:00
Tõnis Tiigi 9b6c4103af
Merge pull request #2488 from tonistiigi/add-jonathan
add Jonathan to buildx maintainers
2024-05-31 10:07:21 -07:00
Tõnis Tiigi 4549283f44
Merge pull request #2482 from rvoh-tismith/fix/single_source_create
Add `--prefer-index` flag for `imagetools create` on a single source
2024-05-31 09:44:43 -07:00
Tonis Tiigi b2e907d5c2
add Jonathan to buildx maintainers
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-31 08:36:15 -07:00
Tõnis Tiigi 7427adb9b0
Merge pull request #2484 from jsternberg/lint-metrics
metrics: record the number of times lint rules are triggered during a build
2024-05-30 15:15:03 -07:00
Jonathan A. Sternberg 1a93bbd3a5
metrics: record the number of times lint rules are triggered during a build
This metric records the number of times a lint warning is seen in the
progress stream and categorizes the number of times each rule has been
triggered. It only records that a lint warning was triggered, not
whether the linter was even used or which rules were present.
That information isn't presently part of the stream.

With this change, we might be reaching some of the limitations that
spying on the progress stream gives us for metrics and may want to
consider another way for the build to communicate metrics back to the
client.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-05-30 15:05:00 -05:00
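The aggregation itself is simple once warnings have been extracted from the progress stream; a rough sketch (the `lintWarning` type and rule names are hypothetical stand-ins for what buildx actually reads from the stream):

```go
package main

import "fmt"

// lintWarning is a hypothetical stand-in for a lint warning observed in
// the progress stream; only the rule name matters for this metric.
type lintWarning struct{ RuleName string }

// countLintRules aggregates how many times each rule was triggered.
func countLintRules(warnings []lintWarning) map[string]int {
	counts := make(map[string]int)
	for _, w := range warnings {
		counts[w.RuleName]++
	}
	return counts
}

func main() {
	warnings := []lintWarning{
		{"StageNameCasing"}, {"NoEmptyContinuation"}, {"StageNameCasing"},
	}
	for rule, n := range countLintRules(warnings) {
		fmt.Printf("lint rule %s triggered %d time(s)\n", rule, n)
	}
}
```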
thompson-shaun 1f28985d20
Merge pull request #2425 from glours/bump-compose-go-2.1.0
bump compose-go to v2.1.1
2024-05-30 12:16:33 -04:00
CrazyMax 33a5528003
bump compose-go to v2.1.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-05-30 15:37:13 +02:00
Tim Smith 7bfae2b809 Updated the docs.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 22:54:09 -04:00
Tim Smith 117c9016e1 Updated tests further to make sure the new flag doesn't affect copying an index, regardless of what value you specify.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 21:56:22 -04:00
Tim Smith 388af3576a Updated tests to test new --prefer-index flag
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 21:39:14 -04:00
Tim Smith 2061550bc1 Slightly refactored the mediaType check on single source so that we now return the original bytes without filtering on mediaType, based on the preferIndex preference.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 14:20:53 -04:00
Tim Smith abf6c77d91 Add a --prefer-index flag that allows you to specify the preferred behavior when deciding how to create an image/manifest from a single source.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-29 14:07:28 -04:00
Justin Chadwell 9ad116aa8e
Merge pull request #2478 from thaJeztah/extract_resolve_digest
build: loadInputs: extract resolving digest to a separate function
2024-05-29 11:00:54 +01:00
Tõnis Tiigi e3d5e64ec9
Merge pull request #2475 from thaJeztah/remove_urlutil
remove uses of github.com/docker/docker/builder/remotecontext package
2024-05-28 22:51:36 -07:00
Tim Smith 0808747add Added the application/vnd.docker.distribution.manifest.v2+json mediatype to the list of mediatypes for which we return the original bytes when calling *Resolver.Combine, rather than adding it to a newly created manifest list.
Signed-off-by: Tim Smith <tismith@rvohealth.com>
2024-05-28 23:01:14 -04:00
Tõnis Tiigi 2e7da01560
Merge pull request #2473 from tonistiigi/prune-negative-filter
prune: allow negative and prefix filters
2024-05-28 13:53:06 -07:00
Sebastiaan van Stijn 38d7d36f0a
build: loadInputs: extract resolving digest to a separate function
This makes the code slightly more idiomatic, but the errors produced will
change slightly to avoid having to pass NamedContext as an argument.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-27 19:31:32 +02:00
CrazyMax 55c86543ca
Merge pull request #2477 from thaJeztah/remove_redundant_checks
build: loadInputs: remove redundant checks for hasTag, hasDigest
2024-05-27 16:04:07 +02:00
CrazyMax f98ef00ec7
Merge pull request #2454 from kariya-mitsuru/fix-k8s-driver
Fix k8s driver with certs cannot boot
2024-05-27 12:32:38 +02:00
Sebastiaan van Stijn b948b07e2d
remove uses of github.com/docker/docker/builder/remotecontext package
This package is part of the classic builder, and was only used
for the IsURL utility, which is a very rudimentary check for a string
having an "https://" or "http://" scheme.

This patch copies the code as non-exported functions where they're used to
remove the dependency.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-26 11:06:02 +02:00
Sebastiaan van Stijn 17c0a3794b
build: loadInputs: remove redundant check for hasDigest
hasDigest would always be true when reaching this code, because the function
would return with an error when failing to resolve the digest;

    if !hasDigest {
        return nil, errors.Errorf("oci-layout reference %q could not be resolved", v.Path)
    }

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-25 16:36:52 +02:00
Sebastiaan van Stijn c0a986b43b
build: loadInputs: remove redundant check for hasTag
hasTag was always true as it was set to "true" when missing, in which case
the default (`:latest`) tag was applied;

    localPath, tag, hasTag := strings.Cut(localPath, ":")
    if !hasTag {
        tag = "latest"
        hasTag = true
    }

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-25 16:32:37 +02:00
Tonis Tiigi 781dcbd196
prune: allow negative and prefix filters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-24 16:57:25 -07:00
Tõnis Tiigi 37c4ff0944
Merge pull request #2467 from tonistiigi/fix-resolvednode-cache-panic
build: fix resolvedNode cache and panic protection
2024-05-22 07:07:36 -07:00
thompson-shaun 6211f56b8d
Merge pull request #2461 from jsternberg/v0.14.1-picks
[v0.14] cherry-picks for v0.14.1
2024-05-21 14:12:17 -04:00
Tõnis Tiigi cc9ea87142
Merge pull request #2460 from jsternberg/vendor-update
vendor: update buildx to latest docker/cli
2024-05-21 09:20:09 -07:00
Tonis Tiigi 035236a5ed
driver: handle nil logger for bootstrap
resolveNode methods can be called with a nil logger. Although
the results should now already be cached in the resolver,
this makes the protection more explicit.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-20 16:38:01 -07:00
Tonis Tiigi 99777eaf34
build: add cache to resolvedNode
Currently it is possible for boot() to be called
multiple times, resulting in multiple slow requests to
establish a connection (e.g. multiple container inspects
for the container driver).

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-20 16:31:42 -07:00
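The caching described here follows a common Go pattern; a minimal sketch using sync.Once (illustrative only, not the actual resolvedNode code):

```go
package main

import (
	"fmt"
	"sync"
)

// node caches the result of an expensive boot so repeated or concurrent
// callers don't each establish their own connection.
type node struct {
	once sync.Once
	conn string // stand-in for an established client connection
	err  error
}

func (n *node) boot() (string, error) {
	n.once.Do(func() {
		// Imagine a slow container inspect / dial happening here;
		// with sync.Once it runs at most one time.
		n.conn, n.err = "connected", nil
	})
	return n.conn, n.err
}

func main() {
	n := &node{}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c, _ := n.boot() // only the first call pays the cost
			fmt.Println(c)
		}()
	}
	wg.Wait()
}
```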
Jonathan A. Sternberg cf68b5b878
vendor: update buildx to latest docker/cli
This version of docker/cli has changes to remove compose-cli wrapper and
move all CLI metrics to OTEL.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
(cherry picked from commit 4fc4bc07ae)
2024-05-16 12:14:08 -05:00
Tonis Tiigi 3f1aaa68d5
build: fix multiple named contexts pointing to same bake target
Contexts using the target: scheme are replaced by input: pointing
to the previous build result before the build request is sent.
Previously, this replacement did not work if multiple contexts
pointed to the same target name.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit f8c6a97edc)
2024-05-16 12:14:08 -05:00
jaihwan104 f6830f3b86
build: exit 1 when manifest merge failed
Signed-off-by: jaihwan104 <42341126+jaihwan104@users.noreply.github.com>
(cherry picked from commit f2823515db)
2024-05-16 12:13:59 -05:00
Jonathan A. Sternberg 4fc4bc07ae
vendor: update buildx to latest docker/cli
This version of docker/cli has changes to remove compose-cli wrapper and
move all CLI metrics to OTEL.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-05-16 12:07:13 -05:00
CrazyMax f6e57cf5b5
build: don't generate metadata file when print flag is used
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit ba264138d6)
2024-05-16 11:31:58 -05:00
Tonis Tiigi b77648d5f8
build: avoid default load with --print
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit fbb0f9b424)
2024-05-16 11:29:59 -05:00
Akihiro Suda afcb609966
Merge pull request #2456 from thaJeztah/rm_k8s_apiserver
driver/kubernetes/util: remove k8s.io/apiserver dependency
2024-05-14 21:30:50 +09:00
Sebastiaan van Stijn 946e0a5d74
driver/kubernetes/util: remove k8s.io/apiserver dependency
Use a simplified local implementation that follows the same semantics,
so that we don't need k8s.io/apiserver as a dependency.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-05-14 13:58:56 +02:00
CrazyMax c4db5b252a
Merge pull request #2445 from sumnerwarren/bake-compose-ssh
Bake: support compose ssh config
2024-05-14 10:15:48 +02:00
CrazyMax 8afeb56a3b
Merge pull request #2455 from dvdksn/docs-bakefile-reference-frontmatter
docs: move Bake file reference title to front matter
2024-05-13 18:37:55 +02:00
David Karlsson fd801a12c1 docs: move Bake file reference title to front matter
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-05-13 18:15:17 +02:00
Tõnis Tiigi 2f98e6f3ac
Merge pull request #2444 from tonistiigi/fix-base-duplicate-target-ref
build: fix multiple named contexts pointing to same bake target
2024-05-13 08:48:55 -07:00
Sumner Warren 224c6a59bf
Bake: support compose ssh config
Signed-off-by: Sumner Warren <sumner.warren@gmail.com>
2024-05-13 08:46:17 -04:00
Mitsuru Kariya cbb75bbfd5
Fix k8s driver with certs cannot boot
Signed-off-by: Mitsuru Kariya <mitsuru.kariya@nttdata.com>
2024-05-13 10:33:15 +09:00
CrazyMax 72085dbdf0
Merge pull request #2449 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.5
build(deps): bump softprops/action-gh-release from 2.0.4 to 2.0.5
2024-05-10 11:32:35 +02:00
dependabot[bot] 480b53f529
build(deps): bump softprops/action-gh-release from 2.0.4 to 2.0.5
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.4 to 2.0.5.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](9d7c94cfd0...69320dbe05)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-07 18:47:50 +00:00
Tonis Tiigi f8c6a97edc
build: fix multiple named contexts pointing to same bake target
Contexts using the target: scheme are replaced by input: pointing
to the previous build result before the build request is sent.
Previously, this replacement did not work if multiple contexts
pointed to the same target name.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-05-02 18:08:45 -07:00
CrazyMax d4f088e689
Merge pull request #2442 from crazy-max/ci-fix-validate-matrix
ci(validate): fix GOLANGCI_LINT_MULTIPLATFORM type for multiplatform lint
2024-05-02 14:58:55 +02:00
CrazyMax db3a8ad7ca
ci(validate): fix GOLANGCI_LINT_MULTIPLATFORM type for multiplatform lint
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-05-02 11:17:05 +02:00
Tõnis Tiigi 1d88c4b169
Merge pull request #2439 from crazy-max/ci-split-validate
ci(validate): split lint
2024-05-01 13:58:20 -07:00
CrazyMax 6d95fb586e
ci(validate): split lint
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-30 10:19:06 +02:00
Tõnis Tiigi 1fb5d2a9ee
Merge pull request #2422 from crazy-max/skip-provenance-internal
build: don't generate metadata file when print flag is used
2024-04-29 17:12:20 -07:00
CrazyMax ba264138d6
build: don't generate metadata file when print flag is used
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-26 10:53:46 +02:00
CrazyMax 6375dc7230
Merge pull request #2432 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.5
build(deps): bump peter-evans/create-pull-request from 6.0.4 to 6.0.5
2024-04-26 09:06:56 +02:00
CrazyMax 9cc6c7df70
Merge pull request #2431 from tonistiigi/make-bake-tidy
make: tidy redirects to bake
2024-04-26 09:06:00 +02:00
Tonis Tiigi 7ea5cffb98
make: tidy redirects to bake
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-25 17:08:21 -07:00
dependabot[bot] d2d21577fb
build(deps): bump peter-evans/create-pull-request from 6.0.4 to 6.0.5
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.4 to 6.0.5.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](9153d834b6...6d6857d369)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-25 18:36:26 +00:00
CrazyMax e344e2251b
Merge pull request #2430 from tonistiigi/linter-updates
linter updates and gopls linting
2024-04-25 09:16:56 +02:00
CrazyMax 833fe3b04f
Merge pull request #2427 from dvdksn/remove-doc-stubs
docs: remove stub files and update links
2024-04-25 09:15:11 +02:00
Tonis Tiigi d0cc9ed0cb
hack: add gopls based linters
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 18:11:30 -07:00
Tonis Tiigi b30566438b
lint: gopls fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 17:58:17 -07:00
Tonis Tiigi ec98985b4e
hack: linter updates
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 17:20:27 -07:00
Tonis Tiigi 9428447cd2
lint: unusedwrite fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 17:19:52 -07:00
Tonis Tiigi 6112c41637
lint: nilness fixes
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-24 17:19:32 -07:00
Tõnis Tiigi a727de7d5f
Merge pull request #2421 from tonistiigi/print-default-load
build: avoid default load with --print
2024-04-24 16:38:21 -07:00
Tõnis Tiigi 4a8fcb7aa0
Merge pull request #2423 from crazy-max/test-build-print
test: build print
2024-04-24 16:38:03 -07:00
Tõnis Tiigi 771e66bf7a
Merge pull request #2424 from jaihwan104/exit-1-when-manifest-merge-failed
fix exit code when manifest merge failed
2024-04-24 16:37:18 -07:00
David Karlsson 7e0ab1a003 docs: remove stub files and update links
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-04-23 13:39:56 +02:00
Guillaume Lours e3e16ad088
bump compose-go to v2.1.0
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
2024-04-23 10:28:28 +02:00
jaihwan104 f2823515db build: exit 1 when manifest merge failed
Signed-off-by: jaihwan104 <42341126+jaihwan104@users.noreply.github.com>
2024-04-22 23:56:10 +09:00
CrazyMax 5ac9b78384
test: build print
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-19 10:51:27 +02:00
Tonis Tiigi fbb0f9b424
build: avoid default load with --print
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-18 18:12:39 -07:00
CrazyMax 699fa43f7f
Merge pull request #2419 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.4
build(deps): bump peter-evans/create-pull-request from 6.0.3 to 6.0.4
2024-04-18 16:57:16 +02:00
dependabot[bot] bdf27ee797
build(deps): bump peter-evans/create-pull-request from 6.0.3 to 6.0.4
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.3 to 6.0.4.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](c55203cfde...9153d834b6)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-17 18:53:16 +00:00
Tõnis Tiigi 171fcbeb69
Merge pull request #2417 from tonistiigi/update-buildkit-240417
vendor: update buildkit to 71f99c52a669
2024-04-17 10:02:29 -07:00
Tonis Tiigi 370a5aa127
update lint fallback image
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-17 09:18:52 -07:00
Tonis Tiigi 13653fb84d
vendor: update buildkit to 71f99c52a669
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-17 08:21:11 -07:00
Tõnis Tiigi 1b16594f4a
Merge pull request #2415 from igaskin/scheduler-name
feat: adding option to add scheduler name to kubernetes driver
2024-04-17 08:18:23 -07:00
Tõnis Tiigi 3905e8cf06
Merge pull request #2416 from crazy-max/print-internal
build: mark information requests as internal
2024-04-17 08:15:55 -07:00
CrazyMax 177b95c972
build: mark information requests as internal
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-17 16:56:43 +02:00
Isaac Gaskin 74fdbb5e7f
feat: adding option to add scheduler name to kubernetes driver
this allows for custom scheduling of deployments

Signed-off-by: Isaac Gaskin <isaac.gaskin@circle.com>
2024-04-16 14:51:59 -07:00
Tõnis Tiigi ac331d3569
Merge pull request #2401 from crazy-max/ci-k3s-update
ci: switch to reusable workflow to install k3s
2024-04-15 16:00:55 -07:00
Tõnis Tiigi 07c9b45bae
Merge pull request #2408 from tonistiigi/print-statuscode
build: support statuscode response for print requests
2024-04-15 15:58:52 -07:00
Tõnis Tiigi b91957444b
Merge pull request #2406 from tonistiigi/print-lint-fallback
build: add fallback image for --print=lint
2024-04-15 15:58:34 -07:00
Tonis Tiigi 46c44c58ae
build: support statuscode response for print requests
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-15 10:38:54 -07:00
CrazyMax 6aed54c35a
Merge pull request #2405 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.3
build(deps): bump peter-evans/create-pull-request from 6.0.2 to 6.0.3
2024-04-13 14:54:34 +02:00
Tonis Tiigi 126fe653c7
build: refactor print fallbacks to own function
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-12 17:09:43 -07:00
Tonis Tiigi f0cbc95eaf
build: add fallback image for --print=lint
Fall back to a known supporting image if lint is called
on an old frontend.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-12 17:09:38 -07:00
dependabot[bot] 1a0f9fa96c
build(deps): bump peter-evans/create-pull-request from 6.0.2 to 6.0.3
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.2 to 6.0.3.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](70a41aba78...c55203cfde)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-12 18:33:29 +00:00
CrazyMax df7a3db947
Merge pull request #2384 from Usual-Coder/feature-hcl-index
bake: add `indexof` hcl func
2024-04-11 17:27:21 +02:00
Tõnis Tiigi d294232cb5
Merge pull request #2404 from tonistiigi/buildkit-vendor-lint-update
vendor: update buildkit v0.14-dev version 549891b
2024-04-11 08:24:21 -07:00
CrazyMax 0a7f5c4d94
bake: test indexof hcl func and make it private
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 17:11:44 +02:00
Usual Coder 5777d980b5
bake: add indexof hcl func
Signed-off-by: Usual Coder <34403413+Usual-Coder@users.noreply.github.com>
2024-04-11 17:01:53 +02:00
Tonis Tiigi 46cf94092c
commands: use vendored formatter for lint responses
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-11 07:52:07 -07:00
Tonis Tiigi da3435ed3a
vendor: update buildkit v0.14-dev version 549891b
Brings in formatter for lint requests.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-11 07:49:31 -07:00
Tõnis Tiigi 3e90cc4b84
Merge pull request #2280 from crazy-max/provenance-metadata
build: set record provenance in response
2024-04-11 07:31:12 -07:00
CrazyMax 6418669e75
Merge pull request #2402 from crazy-max/bump-docker
vendor: github.com/docker/cli b6c552212837 (v26.1.0-dev)
2024-04-11 15:14:05 +02:00
CrazyMax 188495aa93
vendor: github.com/docker/cli b6c552212837 (v26.1.0-dev)
full diff: 155dc5e4e4...b6c5522128

Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 14:57:31 +02:00
CrazyMax 54a5c1ff93
ci: switch to reusable workflow to install k3s
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 10:15:37 +02:00
CrazyMax 2e2f9f571f
build: set record provenance in response
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 10:11:27 +02:00
CrazyMax d2ac1f2d6e
Merge pull request #2322 from crazy-max/test-buildkit-multi-ver
tests: matrix with buildkit versions
2024-04-11 10:10:21 +02:00
CrazyMax 7e3acad9f4
ci: remove buildkit-edge job
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:55:00 +02:00
CrazyMax e04637cf34
ci: use string type for experimental so it can appear on actions page
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:55:00 +02:00
CrazyMax b9c5f9f1ee
ci: run docker worker in dedicated matrix
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:48:32 +02:00
CrazyMax 92ab188781
dockerfile: update buildkit to 0.13.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:43:14 +02:00
CrazyMax dd4d52407f
tests: skip according to buildkit version constraint
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:43:14 +02:00
CrazyMax 7432b483ce
dockerfile: add undock for integration tests
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:42:19 +02:00
CrazyMax 6e3164dc6f
tests: matrix with buildkit versions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-11 09:42:19 +02:00
CrazyMax 2fdb1682f8
Merge pull request #2399 from thaJeztah/bump_x_net
vendor: golang.org/x/sys v0.18.0, golang.org/x/term v0.18.0, golang.org/x/crypto v0.21.0, golang.org/x/net v0.23.0
2024-04-10 19:20:40 +02:00
Sebastiaan van Stijn 7f1eaa2a8a
vendor: golang.org/x/net v0.23.0
full diff: https://github.com/golang/net/compare/v0.22.0...v0.23.0

Includes a fix for CVE-2023-45288, which is also addressed in go1.22.2
and go1.21.9;

> http2: close connections when receiving too many headers
>
> Maintaining HPACK state requires that we parse and process
> all HEADERS and CONTINUATION frames on a connection.
> When a request's headers exceed MaxHeaderBytes, we don't
> allocate memory to store the excess headers but we do
> parse them. This permits an attacker to cause an HTTP/2
> endpoint to read arbitrary amounts of data, all associated
> with a request which is going to be rejected.
>
> Set a limit on the amount of excess header frames we
> will process before closing a connection.
>
> Thanks to Bartek Nowotarski for reporting this issue.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-10 17:22:06 +02:00
Sebastiaan van Stijn fbddc9ebea
vendor: golang.org/x/net v0.22.0, golang.org/x/crypto v0.21.0
full diffs (changes relevant to vendored code):

- https://github.com/golang/net/compare/v0.20.0...v0.22.0
    - http2: remove suspicious uint32->v conversion in frame code
    - http2: send an error of FLOW_CONTROL_ERROR when exceed the maximum octets
- https://github.com/golang/crypto/compare/v0.18.0...v0.21.0
    - x/crypto/internal/poly1305: improve sum_ppc64le.s

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-10 17:14:09 +02:00
Sebastiaan van Stijn d347499112
vendor: golang.org/x/term v0.18.0
no changes in vendored code

full diff: https://github.com/golang/term/compare/v0.16.0...v0.18.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-10 17:02:36 +02:00
Sebastiaan van Stijn b1fb67f44a
vendor: golang.org/x/sys v0.18.0
full diff: https://github.com/golang/sys/compare/v0.16.0...v0.18.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-04-10 17:01:00 +02:00
CrazyMax a9575a872a
Merge pull request #2392 from crazy-max/update-hcl
vendor: update hcl dependencies
2024-04-10 08:48:10 +02:00
Tõnis Tiigi 60f48059a7
Merge pull request #2394 from crazy-max/fix-stdin-controller
build: fix stdin handling when building with controller
2024-04-09 09:57:31 -07:00
CrazyMax ffff87be03
build: fix stdin handling when building with controller
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 14:49:30 +02:00
CrazyMax 0a3e5e5257
Merge pull request #2393 from crazy-max/fix-go-mod
go.mod: move indirect deps to the right require block
2024-04-09 10:17:10 +02:00
CrazyMax 151b0de8f2
go.mod: move indirect deps to the right require block
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 10:01:07 +02:00
CrazyMax e40c630758
Merge pull request #2391 from crazy-max/update-compose
vendor: update compose-go to v2.0.2
2024-04-09 09:58:30 +02:00
CrazyMax ea3338c3f3
vendor: update github.com/zclconf/go-cty to v1.14.4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 09:41:03 +02:00
CrazyMax 744c055560
vendor: update github.com/hashicorp/hcl/v2 to v2.20.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 09:39:15 +02:00
CrazyMax ca0b583f5a
vendor: update compose-go to v2.0.2
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 09:20:12 +02:00
CrazyMax e7f2da9c4f
Merge pull request #2385 from davix/patch-1
Fix typo in buildx_build.md
2024-04-09 09:14:30 +02:00
CrazyMax d805c784f2
Merge pull request #2378 from dvdksn/docs-crossref-secrets
docs: add cross-reference about build secrets
2024-04-09 08:52:42 +02:00
Wei a2866b79e3
Fix typo in buildx_build.md
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-09 08:49:25 +02:00
Akihiro Suda 12e1f65eb3
Merge pull request #2370 from Moleus/feat-ephemeral-storage-opts
driver: add ephemeral-storage options to kubernetes driver
2024-04-09 09:04:25 +09:00
Tõnis Tiigi 0d6b3a9d1d
Merge pull request #2336 from crazy-max/bake-load-override
bake: load override
2024-04-08 16:12:22 -07:00
CrazyMax 4b3c3c8401
Merge pull request #2259 from namespacelabs/master
Implement ability to load images by default in non-Docker build drivers.
2024-04-05 16:13:14 +02:00
Niklas Gehlen ccc314a823
Implement new driver-opt: default-load
This eases build driver migrations, as it allows aligning the default behavior.
See also https://docs.docker.com/build/drivers/

Signed-off-by: Niklas Gehlen <niklas@namespacelabs.com>
2024-04-05 15:30:33 +02:00
CrazyMax dc4b4c36bd
bake: load override
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-05 13:03:15 +02:00
CrazyMax 5c29e6e26e
Merge pull request #2374 from tonistiigi/print-json-format
handle json formatting for print
2024-04-05 09:08:27 +02:00
CrazyMax 6a0d5b771f
Merge pull request #2376 from crazy-max/ci-test-experimental
tests: test with buildx experimental
2024-04-04 19:51:10 +02:00
CrazyMax 59cc10767e
Merge pull request #2363 from crazy-max/bake-remote-token
bake: git auth support for remote definitions
2024-04-04 19:37:16 +02:00
CrazyMax b61b29f603
tests: test with buildx experimental
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-04 19:32:20 +02:00
CrazyMax 7cfef05661
Merge pull request #2381 from crazy-max/test-secret
tests: build secret
2024-04-04 19:23:03 +02:00
CrazyMax 4d39259f8e
bake: git auth support for remote definitions
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-04 14:12:48 +02:00
CrazyMax 15fd39ebec
tests: build secret
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-04 13:09:42 +02:00
CrazyMax a7d59ae332
Merge pull request #2373 from jsternberg/docker-cli-meter-provider
metricutil: switch to using the cli meter provider
2024-04-04 11:10:46 +02:00
David Karlsson e18a2f6e58 docs: add cross-reference about build secrets
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-04-03 10:37:17 +02:00
Tõnis Tiigi 38fbd9a85c
Merge pull request #2377 from crazy-max/test-stdin
tests: build from stdin
2024-04-02 09:54:45 -07:00
CrazyMax 84ddbc2b3b
Merge pull request #2375 from crazy-max/bump-docker-26
vendor: github.com/docker/docker v26.0.0
2024-04-02 16:40:14 +02:00
Jonathan A. Sternberg b4799f9d16
metricutil: switch to using the cli meter provider
The meter provider initialization that was located here has now been
moved to a common area in the docker cli. This upgrades our CLI version
and then uses this common code instead of our own version.

As a piece of additional functionality, the docker OTEL endpoint can now
be overwritten with `DOCKER_CLI_OTEL_EXPORTER_OTLP_ENDPOINT` for
testing.

This removes the OTLP exporter from the CLI that was previously locked
behind `BUILDX_EXPERIMENTAL`. I do plan for this to return, but as a
proper part of the `docker/cli` implementation rather than something
special with `buildx`.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-04-02 09:36:55 -05:00
CrazyMax 7cded6b33b
tests: build from stdin
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-02 15:10:18 +02:00
CrazyMax 1b36bd0c4a
vendor: github.com/docker/docker v26.0.0
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-04-02 11:29:15 +02:00
CrazyMax 7dc5639216
Merge pull request #2372 from jsternberg/bump-docker
vendor: github.com/docker/docker and github.com/docker/cli v26.0.0
2024-04-02 11:20:38 +02:00
Tonis Tiigi 858e347306
handle json formatting for print
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-04-01 16:46:04 -07:00
Jonathan A. Sternberg adb9bc86e5
vendor: github.com/docker/docker and github.com/docker/cli v26.0.0
Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-04-01 13:05:55 -05:00
Moleus ef2e30deba
driver: add ephemeral-storage options to kubernetes driver
Signed-off-by: Moleus <fafufuburr@gmail.com>
2024-04-01 13:10:44 +03:00
Tõnis Tiigi c690d460e8
Merge pull request #2362 from jsternberg/single-tracer-delegate-client
driver: initialize tracer delegate in driver handle instead of individual plugins
2024-03-29 11:47:41 -07:00
Tõnis Tiigi 35781a6c78
Merge pull request #2366 from crazy-max/update-buildkit
vendor: github.com/moby/buildkit 25bec7145b39 (v0.14.0-dev)
2024-03-29 10:59:43 -07:00
CrazyMax de5efcb03b
vendor: github.com/moby/buildkit 25bec7145b39 (v0.14.0-dev)
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-28 17:51:45 +01:00
Jonathan A. Sternberg 5c89004bb6
driver: initialize tracer delegate in driver handle instead of individual plugins
This refactors the driver handle to initialize the tracer delegate
inside of the driver handle instead of the individual plugins.

This provides more uniformity to how the tracer delegate is created by
allowing the driver handle to pass additional client options to the
drivers when they create the client. It also avoids creating the tracer
delegate client multiple times because the driver handle will only
initialize the client once. This prevents some drivers, like the remote
driver, from accidentally registering multiple clients as tracer
delegates.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-27 15:13:43 -05:00
Tõnis Tiigi 8abef59087
Merge pull request #2344 from jsternberg/progress-metrics-non-experimental
progress: remove the experimental label from progress metrics
2024-03-22 09:23:39 -07:00
Jonathan A. Sternberg 4999908fbc
progress: remove the experimental label from progress metrics
Removes the experimental label from progress metrics. User-metrics
themselves are still experimental so this is still blocked behind the
experimental flag, but this will allow the docker otlp endpoint to
receive these metrics.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-19 08:23:32 -05:00
Tõnis Tiigi 4af0ed5159
Merge pull request #2323 from jsternberg/build-idle-time-metric
metrics: measure idle time during builds
2024-03-18 15:15:29 -07:00
Jonathan A. Sternberg a4a8846e46
metrics: measure idle time during builds
This measures the amount of time spent idle during the build. This is
done by collecting the set of task times, determining which sections
contain gaps where no task is running, and aggregating that duration
into a metric.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-18 08:43:15 -05:00
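The gap computation described above amounts to sweeping over task intervals sorted by start time and summing the uncovered time; a compact sketch (the interval type is illustrative, not buildx's actual data structure):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

type interval struct{ start, end time.Time }

// idleTime returns the total time within [buildStart, buildEnd] during
// which no task interval was running.
func idleTime(tasks []interval, buildStart, buildEnd time.Time) time.Duration {
	sort.Slice(tasks, func(i, j int) bool { return tasks[i].start.Before(tasks[j].start) })
	var idle time.Duration
	cursor := buildStart
	for _, t := range tasks {
		if t.start.After(cursor) {
			idle += t.start.Sub(cursor) // a gap with no running task
		}
		if t.end.After(cursor) {
			cursor = t.end
		}
	}
	if buildEnd.After(cursor) {
		idle += buildEnd.Sub(cursor)
	}
	return idle
}

func main() {
	base := time.Now()
	tasks := []interval{
		{base, base.Add(2 * time.Second)},
		{base.Add(5 * time.Second), base.Add(6 * time.Second)},
	}
	// Gaps: 2s-5s and 6s-8s, so 5s of idle time in an 8s build.
	fmt.Println(idleTime(tasks, base, base.Add(8*time.Second)))
}
```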
Tõnis Tiigi 520dc5968a
Merge pull request #2298 from LaurentGoderre/imagetools-inspect-tests
Add tests for imagetools inspect
2024-03-15 13:04:06 -07:00
Tõnis Tiigi 324afe60ad
Merge pull request #2341 from crazy-max/tests-refactor-worker-handling
tests: refactor worker handling in sandbox
2024-03-15 12:53:27 -07:00
CrazyMax c0c3a55fca
Merge pull request #2343 from crazy-max/experimental-ref
chore: check experimental from confutil
2024-03-15 19:24:44 +01:00
CrazyMax 2a30229916
chore: check experimental from confutil
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-15 11:52:41 +01:00
Tõnis Tiigi ed76661b0d
Merge pull request #2317 from jsternberg/build-export-image-metric
metrics: measure export image operation
2024-03-14 14:59:35 -07:00
Jonathan A. Sternberg a0cce9b31e
metrics: measure export image operation
This measures the amount of time it takes to export to a specific
format.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-14 16:08:19 -05:00
Tõnis Tiigi d410597f5a
Merge pull request #2316 from jsternberg/build-exec-command-time
metrics: measure run operations for exec operations
2024-03-14 13:13:51 -07:00
Jonathan A. Sternberg 9016d85718
metrics: measure run operations for exec operations
This measures the duration of exec operations. It does not factor in
whether the operation was cached or not, so this should include the
time taken to determine whether an operation was cached.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-14 14:51:27 -05:00
Tõnis Tiigi 2565c74a89
Merge pull request #2254 from crazy-max/rm-local-dirs
chore: switch to LocalMounts implementation
2024-03-14 11:34:12 -07:00
Tõnis Tiigi eab5cccbb4
Merge pull request #2271 from jsternberg/build-image-transfer-metric
metrics: measure image transfers for image source operations
2024-03-14 10:28:50 -07:00
CrazyMax e2be765e7b
tests: refactor worker handling in sandbox
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-14 13:42:37 +01:00
CrazyMax 276dd5150f
Merge pull request #2339 from crazy-max/ci-lint-multi
ci: enable multi-platform lint only for upstream repo
2024-03-14 10:59:34 +01:00
CrazyMax 5c69fa267f
ci: enable multi-platform lint only for upstream repo
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-14 10:39:50 +01:00
CrazyMax b240a00def
chore: switch to LocalMounts implementation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-13 18:59:14 +01:00
Tõnis Tiigi a8af6fa013
Merge pull request #2332 from crazy-max/build-move-opts
build: move funcs related to solve opts handling
2024-03-13 10:58:26 -07:00
CrazyMax 7eb3dfbd22
Merge pull request #2335 from docker/dependabot/github_actions/softprops/action-gh-release-2.0.4
build(deps): bump softprops/action-gh-release from 2.0.3 to 2.0.4
2024-03-13 10:12:48 +01:00
CrazyMax 4b24f66a10
Merge pull request #2334 from docker/dependabot/github_actions/peter-evans/create-pull-request-6.0.2
build(deps): bump peter-evans/create-pull-request from 6.0.1 to 6.0.2
2024-03-13 10:12:33 +01:00
CrazyMax 8d5b967f2d
ci: set comment version for peter-evans/create-pull-request
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-13 09:44:40 +01:00
CrazyMax 8842e19869
ci: update comment version for softprops/action-gh-release update
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-13 09:43:39 +01:00
dependabot[bot] a0ce8bec97
build(deps): bump softprops/action-gh-release from 2.0.3 to 2.0.4
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.3 to 2.0.4.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](3198ee18f8...9d7c94cfd0)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-12 18:19:57 +00:00
dependabot[bot] 84d79df93b
build(deps): bump peter-evans/create-pull-request from 6.0.1 to 6.0.2
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6.0.1 to 6.0.2.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](a4f52f8033...70a41aba78)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-12 18:19:54 +00:00
Tõnis Tiigi df4b13320d
Merge pull request #2330 from crazy-max/fix-bake-load-push
bake: fix output handling for push
2024-03-12 09:34:07 -07:00
Tõnis Tiigi bb511110d6
Merge pull request #2327 from tonistiigi/remote-connhelper-fix
remote: fix connhelpers with custom dialer
2024-03-12 09:01:23 -07:00
CrazyMax 47cf4a5dbe
bake: fix output handling for push
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 13:13:13 +01:00
CrazyMax cfbed42fa7
Merge pull request #2331 from docker/dependabot/github_actions/softprops/action-gh-release-2
build(deps): bump softprops/action-gh-release from 1 to 2
2024-03-12 10:38:23 +01:00
CrazyMax ff27ab7e86
ci: update comment version for softprops/action-gh-release update
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 09:24:28 +01:00
CrazyMax 5655e5e2b6
build: don't export LoadInputs
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 08:48:45 +01:00
CrazyMax 4b516af1f6
build: move funcs related to solve opts handling
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 08:48:45 +01:00
CrazyMax b1490ed5ce
tests: create remote with container helper
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-12 08:44:36 +01:00
dependabot[bot] ea830c9758
build(deps): bump softprops/action-gh-release from 1 to 2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 1 to 2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](de2c0eb89a...3198ee18f8)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-11 18:14:17 +00:00
Tonis Tiigi 8f576e5790
remote: fix connhelpers with custom dialer
With the new dial-stdio command, the dialer is split
from the `Client` function in order to access it directly.
This broke the custom connhelpers functionality,
as support for connhelpers is a feature of the default
dialer. If a client defines a custom dialer, then only
it is used, without extra modifications. This means
that the remote driver's dialer needs to detect the
connhelpers on its own.
connhelpers on its own.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-03-08 18:35:53 -08:00
CrazyMax 4327ee73b1
Merge pull request #2321 from crazy-max/docker-use-bin-images
dockerfile: use moby-bin and cli-bin images for docker binaries
2024-03-07 13:46:01 +01:00
CrazyMax 70a28fed12
dockerfile: use moby-bin and cli-bin images for docker binaries
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-07 13:10:01 +01:00
CrazyMax fc22d39d6d
Merge pull request #2319 from dvdksn/doc-securitysandbox-link
docs: fix link to new target in dockerfile reference
2024-03-07 10:36:03 +01:00
David Karlsson 1cc5e39cb8 docs: fix link to new target in dockerfile reference
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-03-07 10:07:43 +01:00
CrazyMax 1815e4d9b2
Merge pull request #2314 from dvdksn/docs-vendor
ci: use make target for vendoring docs release
2024-03-06 14:42:03 +01:00
David Karlsson 2ec1dbd1b6 ci: use make target for vendoring docs release
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-03-06 14:25:49 +01:00
CrazyMax a6163470b7
Merge pull request #2312 from crazy-max/ci-docs-no-provenance
ci: disable provenance for docs generation
2024-03-06 09:29:31 +01:00
CrazyMax 3dfb102f82
ci: disable provenance for docs generation
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-06 09:09:43 +01:00
CrazyMax 253cbee5c7
Merge pull request #2310 from crazy-max/fix-docs-release
ci(docs-release): fix vendoring step
2024-03-06 08:59:11 +01:00
CrazyMax c1dfa74b98
ci(docs-release): manual trigger support
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-06 08:40:44 +01:00
CrazyMax 647491dd99
ci(docs-release): fix vendoring step
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-06 08:40:43 +01:00
Jonathan A. Sternberg 9a71895a48
metrics: measure image transfers for image source operations
This measures the transfer size and duration for image pulls along with
the time spent extracting the image contents.

Signed-off-by: Jonathan A. Sternberg <jonathan.sternberg@docker.com>
2024-03-05 16:33:20 -06:00
Laurent Goderre abff444562 Added test for imagetools inspect load
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-03-05 13:56:46 -05:00
Laurent Goderre 1d0b542b1b Add unit test for SBOM and Provenance scanning
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-03-05 13:15:21 -05:00
Laurent Goderre 6c485a98be Add tests for imagetools inspect
Signed-off-by: Laurent Goderre <laurent.goderre@docker.com>
2024-03-05 13:13:23 -05:00
Tõnis Tiigi 9ebfde4897
Merge pull request #2302 from crazy-max/multi-load-push
build: handle push/load shorthands for multi exporters
2024-03-05 09:09:30 -08:00
Tõnis Tiigi e4ee2ca1fd
Merge pull request #2308 from tonistiigi/vendor-buildkit-240305
vendor: update to buildkit v0.13.0
2024-03-05 09:09:07 -08:00
Tonis Tiigi 849456c198
vendor: update to buildkit v0.13.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2024-03-05 08:53:44 -08:00
CrazyMax 9a2536dd0d
test: multi exporters
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-05 17:05:59 +01:00
CrazyMax a03263acf8
build: handle push/load shorthands for multi exporters
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-03-05 17:05:59 +01:00
CrazyMax 0c0dcb7c8c
Merge pull request #2299 from vvoland/vendor-moby-v26
vendor: github.com/docker/docker v26.0.0-rc1
2024-03-05 08:58:41 +01:00
Paweł Gronowski 9bce433154
vendor: github.com/docker/docker v26.0.0-rc1
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-01 12:29:55 +01:00
Paweł Gronowski 04f0fc5871
Replace deprecated docker types usage
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-01 12:29:54 +01:00
CrazyMax e7da2b0686
Merge pull request #2296 from dvdksn/docs-release-fix-dirnames
ci(fix): remove underscore in docs data dir
2024-02-29 12:02:09 +01:00
David Karlsson eab565afe7 ci(fix): remove underscore in docs data dir
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-02-29 11:29:28 +01:00
CrazyMax 7d952441ea
Merge pull request #2295 from dvdksn/fix-docs-release-workflow
ci: fix docs-release workflow
2024-02-29 11:26:58 +01:00
David Karlsson 835a6b1096 ci: fix docs-release workflow
Automatically create PR for updating docs on release

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-02-29 10:43:57 +01:00
5095 changed files with 467406 additions and 307247 deletions


@ -188,6 +188,89 @@ To generate new vendored files with go modules run:
$ make vendor
```
### Generate profiling data
You can configure Buildx to generate [`pprof`](https://github.com/google/pprof)
memory and CPU profiles to analyze and optimize your builds. These profiles are
useful for identifying performance bottlenecks, detecting memory
inefficiencies, and ensuring the program (Buildx) runs efficiently.
The following environment variables control whether Buildx generates profiling
data for builds:
```console
$ export BUILDX_CPU_PROFILE=buildx_cpu.prof
$ export BUILDX_MEM_PROFILE=buildx_mem.prof
```
When set, Buildx emits profiling samples for the builds to the locations
specified by the environment variables.
To analyze and visualize profiling samples, you need `pprof` from the Go
toolchain, and (optionally) GraphViz for visualization in a graphical format.
To inspect profiling data with `pprof`:
1. Build a local binary of Buildx from source.
```console
$ docker buildx bake
```
The binary gets exported to `./bin/build/buildx`.
2. Run a build with the environment variables set to generate profiling data.
```console
$ export BUILDX_CPU_PROFILE=buildx_cpu.prof
$ export BUILDX_MEM_PROFILE=buildx_mem.prof
$ ./bin/build/buildx bake
```
This creates `buildx_cpu.prof` and `buildx_mem.prof` for the build.
3. Start `pprof` and specify the filename of the profile that you want to
analyze.
```console
$ go tool pprof buildx_cpu.prof
```
This opens the `pprof` interactive console. From here, you can inspect the
profiling sample using various commands. For example, use the `top 10` command
to view the top 10 most time-consuming entries.
```plaintext
(pprof) top 10
Showing nodes accounting for 3.04s, 91.02% of 3.34s total
Dropped 123 nodes (cum <= 0.02s)
Showing top 10 nodes out of 159
flat flat% sum% cum cum%
1.14s 34.13% 34.13% 1.14s 34.13% syscall.syscall
0.91s 27.25% 61.38% 0.91s 27.25% runtime.kevent
0.35s 10.48% 71.86% 0.35s 10.48% runtime.pthread_cond_wait
0.22s 6.59% 78.44% 0.22s 6.59% runtime.pthread_cond_signal
0.15s 4.49% 82.93% 0.15s 4.49% runtime.usleep
0.10s 2.99% 85.93% 0.10s 2.99% runtime.memclrNoHeapPointers
0.10s 2.99% 88.92% 0.10s 2.99% runtime.memmove
0.03s 0.9% 89.82% 0.03s 0.9% runtime.madvise
0.02s 0.6% 90.42% 0.02s 0.6% runtime.(*mspan).typePointersOfUnchecked
0.02s 0.6% 91.02% 0.02s 0.6% runtime.pcvalue
```
To view the call graph in a GUI, run `go tool pprof -http=:8081 <sample>`.
> [!NOTE]
> Requires [GraphViz](https://www.graphviz.org/) to be installed.
```console
$ go tool pprof -http=:8081 buildx_cpu.prof
Serving web UI on http://127.0.0.1:8081
http://127.0.0.1:8081
```
For more information about using `pprof` and how to interpret the call graph,
refer to the [`pprof` README](https://github.com/google/pprof/blob/main/doc/README.md).
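For a sense of what the profiling hooks behind these environment variables typically look like, here is a minimal sketch using Go's `runtime/pprof`; it is illustrative only, not Buildx's actual implementation:

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

// startProfiling wires CPU and memory profiling to environment variables,
// in the spirit of BUILDX_CPU_PROFILE / BUILDX_MEM_PROFILE.
func startProfiling() (stop func()) {
	var cpuFile *os.File
	if path := os.Getenv("BUILDX_CPU_PROFILE"); path != "" {
		f, err := os.Create(path)
		if err != nil {
			log.Fatal(err)
		}
		cpuFile = f
		if err := pprof.StartCPUProfile(f); err != nil {
			log.Fatal(err)
		}
	}
	return func() {
		if cpuFile != nil {
			pprof.StopCPUProfile()
			cpuFile.Close()
		}
		if path := os.Getenv("BUILDX_MEM_PROFILE"); path != "" {
			f, err := os.Create(path)
			if err != nil {
				log.Fatal(err)
			}
			defer f.Close()
			runtime.GC() // force a GC so heap statistics are up to date
			if err := pprof.WriteHeapProfile(f); err != nil {
				log.Fatal(err)
			}
		}
	}
}

func main() {
	stop := startProfiling()
	defer stop()
	// ... run the build ...
}
```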
### Conventions
@ -343,4 +426,4 @@ The rules:
If you are having trouble getting into the mood of idiomatic Go, we recommend
reading through [Effective Go](https://golang.org/doc/effective_go.html). The
[Go Blog](https://blog.golang.org) is also a great resource.

50
.github/SECURITY.md vendored

@ -1,12 +1,44 @@
# Security Policy
The maintainers of Docker Buildx take security seriously. If you discover
a security issue, please bring it to their attention right away!
## Reporting a Vulnerability
Please **DO NOT** file a public issue, instead send your report privately
to [security@docker.com](mailto:security@docker.com).
Reporter(s) can expect a response within 72 hours, acknowledging the issue was
received.
## Review Process
After receiving the report, an initial triage and technical analysis is
performed to confirm the report and determine its scope. We may request
additional information in this stage of the process.
Once a reviewer has confirmed the relevance of the report, a draft security
advisory will be created on GitHub. The draft advisory will be used to discuss
the issue with maintainers, the reporter(s), and where applicable, other
affected parties under embargo.
If the vulnerability is accepted, a timeline for developing a patch, public
disclosure, and patch release will be determined. If there is an embargo period
on public disclosure before the patch release, the reporter(s) are expected to
participate in the discussion of the timeline and abide by agreed upon dates
for public disclosure.
## Accreditation
Security reports are greatly appreciated and we will publicly thank you,
although we will keep your name confidential if you request it. We also like to
send gifts - if you're into swag, make sure to let us know. We do not
currently offer a paid security bounty program.
## Supported Versions
Once a new feature release is cut, support for the previous feature release is
discontinued. An exception may be made for urgent security releases that occur
shortly after a new feature release. Buildx does not offer LTS (Long-Term Support)
releases. Refer to the [Support Policy](https://github.com/docker/buildx/blob/master/PROJECT.md#support-policy)
for further details.


@ -11,5 +11,5 @@ updates:
# trigger a new version: https://github.com/docker/buildx/pull/2222#issuecomment-1919092153
- dependency-name: "docker/docs"
labels:
- "dependencies"
- "area/dependencies"
- "bot"

104
.github/labeler.yml vendored Normal file

@ -0,0 +1,104 @@
# Add 'area/project' label to changes in basic project documentation and .github folder, excluding .github/workflows
area/project:
- all:
- changed-files:
- any-glob-to-any-file:
- .github/**
- LICENSE
- AUTHORS
- MAINTAINERS
- PROJECT.md
- README.md
- .gitignore
- codecov.yml
- all-globs-to-all-files: '!.github/workflows/*'
# Add 'area/github-actions' label to changes in the .github/workflows folder
area/ci:
- changed-files:
- any-glob-to-any-file: '.github/workflows/**'
# Add 'area/bake' label to changes in the bake
area/bake:
- changed-files:
- any-glob-to-any-file: 'bake/**'
# Add 'area/bake/compose' label to changes in the bake+compose
area/bake/compose:
- changed-files:
- any-glob-to-any-file:
- bake/compose.go
- bake/compose_test.go
# Add 'area/build' label to changes in build files
area/build:
- changed-files:
- any-glob-to-any-file: 'build/**'
# Add 'area/builder' label to changes in builder files
area/builder:
- changed-files:
- any-glob-to-any-file: 'builder/**'
# Add 'area/cli' label to changes in the CLI
area/cli:
- changed-files:
- any-glob-to-any-file:
- cmd/**
- commands/**
# Add 'area/docs' label to markdown files in the docs folder
area/docs:
- changed-files:
- any-glob-to-any-file: 'docs/**/*.md'
# Add 'area/dependencies' label to changes in go dependency files
area/dependencies:
- changed-files:
- any-glob-to-any-file:
- go.mod
- go.sum
- vendor/**
# Add 'area/driver' label to changes in the driver folder
area/driver:
- changed-files:
- any-glob-to-any-file: 'driver/**'
# Add 'area/driver/docker' label to changes in the docker driver
area/driver/docker:
- changed-files:
- any-glob-to-any-file: 'driver/docker/**'
# Add 'area/driver/docker-container' label to changes in the docker-container driver
area/driver/docker-container:
- changed-files:
- any-glob-to-any-file: 'driver/docker-container/**'
# Add 'area/driver/kubernetes' label to changes in the kubernetes driver
area/driver/kubernetes:
- changed-files:
- any-glob-to-any-file: 'driver/kubernetes/**'
# Add 'area/driver/remote' label to changes in the remote driver
area/driver/remote:
- changed-files:
- any-glob-to-any-file: 'driver/remote/**'
# Add 'area/hack' label to changes in the hack folder
area/hack:
- changed-files:
- any-glob-to-any-file: 'hack/**'
# Add 'area/history' label to changes in history command
area/history:
- changed-files:
- any-glob-to-any-file: 'commands/history/**'
# Add 'area/tests' label to changes in test files
area/tests:
- changed-files:
- any-glob-to-any-file:
- tests/**
- '**/*_test.go'


@ -1,5 +1,14 @@
name: build
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@ -19,68 +28,100 @@ on:
- 'docs/**'
env:
BUILDX_VERSION: "latest"
BUILDKIT_IMAGE: "moby/buildkit:latest"
SETUP_BUILDX_VERSION: "edge"
SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
SCOUT_VERSION: "1.11.0"
REPO_SLUG: "docker/buildx-bin"
DESTDIR: "./bin"
TEST_CACHE_SCOPE: "test"
TESTFLAGS: "-v --parallel=6 --timeout=30m"
GOTESTSUM_FORMAT: "standard-verbose"
GO_VERSION: "1.21"
GOTESTSUM_VERSION: "v1.9.0" # same as one in Dockerfile
GO_VERSION: "1.24"
GOTESTSUM_VERSION: "v1.12.0" # same as one in Dockerfile
jobs:
prepare-test-integration:
runs-on: ubuntu-22.04
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v4
with:
targets: integration-test-base
set: |
*.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
*.cache-to=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
test-integration:
runs-on: ubuntu-22.04
needs:
- prepare-test-integration
runs-on: ubuntu-24.04
env:
TESTFLAGS_DOCKER: "-v --parallel=1 --timeout=30m"
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildx-tests"
TEST_COVERAGE: "1"
strategy:
fail-fast: false
matrix:
buildkit:
- master
- latest
- buildx-stable-1
- v0.23.2
- v0.22.0
- v0.21.1
worker:
- docker
- docker\+containerd # same as docker, but with containerd snapshotter
- docker-container
- remote
pkg:
- ./tests
mode:
- ""
- experimental
include:
- worker: docker
pkg: ./tests
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
- worker: docker
pkg: ./tests
mode: experimental
- worker: docker+containerd # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
- worker: "docker@27.5"
pkg: ./tests
- worker: "docker+containerd@27.5" # same as docker, but with containerd snapshotter
pkg: ./tests
- worker: "docker@27.5"
pkg: ./tests
mode: experimental
- worker: "docker+containerd@27.5" # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
- worker: "docker@26.1"
pkg: ./tests
- worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
pkg: ./tests
- worker: "docker@26.1"
pkg: ./tests
mode: experimental
- worker: "docker+containerd@26.1" # same as docker, but with containerd snapshotter
pkg: ./tests
mode: experimental
steps:
-
name: Prepare
run: |
echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
echo "TESTREPORTS_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.buildkit }}-${{ matrix.worker }}-${{ matrix.mode }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
if [ -n "${{ matrix.buildkit }}" ]; then
echo "TEST_BUILDKIT_TAG=${{ matrix.buildkit }}" >> $GITHUB_ENV
fi
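# Escape '+' in the worker name so it matches literally in the go test --run regexp below.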
testFlags="--run=//worker=$(echo "${{ matrix.worker }}" | sed 's/\+/\\+/g')$"
case "${{ matrix.worker }}" in
docker | docker+containerd | docker@* | docker+containerd@*)
echo "TESTFLAGS=${{ env.TESTFLAGS_DOCKER }} $testFlags" >> $GITHUB_ENV
;;
*)
echo "TESTFLAGS=${{ env.TESTFLAGS }} $testFlags" >> $GITHUB_ENV
;;
esac
if [[ "${{ matrix.worker }}" == "docker"* ]]; then
echo "TEST_DOCKERD=1" >> $GITHUB_ENV
fi
if [ "${{ matrix.mode }}" = "experimental" ]; then
echo "TEST_BUILDX_EXPERIMENTAL=1" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
-
@ -90,16 +131,16 @@ jobs:
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build test image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
source: .
targets: integration-test
set: |
*.cache-from=type=gha,scope=${{ env.TEST_CACHE_SCOPE }}
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
-
name: Test
@ -107,17 +148,16 @@ jobs:
./hack/test
env:
TEST_REPORT_SUFFIX: "-${{ env.TESTREPORTS_NAME }}"
TEST_DOCKERD: "${{ startsWith(matrix.worker, 'docker') && '1' || '0' }}"
TESTFLAGS: "${{ (matrix.worker == 'docker' || matrix.worker == 'docker\\+containerd') && env.TESTFLAGS_DOCKER || env.TESTFLAGS }} --run=//worker=${{ matrix.worker }}$"
TESTPKGS: "${{ matrix.pkg }}"
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v5
with:
directory: ./bin/testreports
flags: integration
token: ${{ secrets.CODECOV_TOKEN }}
disable_file_fixes: true
-
name: Generate annotations
if: always()
@ -138,18 +178,23 @@ jobs:
fail-fast: false
matrix:
os:
- ubuntu-22.04
- macos-12
- ubuntu-24.04
- macos-14
- windows-2022
env:
SKIP_INTEGRATION_TESTS: 1
steps:
-
name: Setup Git config
run: |
git config --global core.autocrlf false
git config --global core.eol lf
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: "${{ env.GO_VERSION }}"
-
@ -184,12 +229,13 @@ jobs:
-
name: Send to Codecov
if: always()
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v5
with:
directory: ${{ env.TESTREPORTS_DIR }}
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }}
disable_file_fixes: true
-
name: Generate annotations
if: always()
@ -204,14 +250,110 @@ jobs:
name: test-reports-${{ env.TESTREPORTS_NAME }}
path: ${{ env.TESTREPORTS_BASEDIR }}
prepare-binaries:
test-bsd-unit:
runs-on: ubuntu-22.04
continue-on-error: true
strategy:
fail-fast: false
matrix:
os:
- freebsd
- netbsd
- openbsd
env:
# https://github.com/hashicorp/vagrant/issues/13652
VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT: 1
steps:
-
name: Prepare
run: |
echo "VAGRANT_FILE=hack/Vagrantfile.${{ matrix.os }}" >> $GITHUB_ENV
# Resolve the full semver Go version so the tarball can be downloaded during vagrant setup
goVersion=$(curl --silent "https://go.dev/dl/?mode=json&include=all" | jq -r '.[].files[].version' | uniq | sed -e 's/go//' | sort -V | grep $GO_VERSION | tail -1)
echo "GO_VERSION=$goVersion" >> $GITHUB_ENV
-
name: Checkout
uses: actions/checkout@v5
-
name: Cache Vagrant boxes
uses: actions/cache@v4
with:
path: ~/.vagrant.d/boxes
key: ${{ runner.os }}-vagrant-${{ matrix.os }}-${{ hashFiles(env.VAGRANT_FILE) }}
restore-keys: |
${{ runner.os }}-vagrant-${{ matrix.os }}-
-
name: Install vagrant
run: |
set -x
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update
sudo apt-get install -y libvirt-dev libvirt-daemon libvirt-daemon-system vagrant vagrant-libvirt ruby-libvirt
sudo systemctl enable --now libvirtd
sudo chmod a+rw /var/run/libvirt/libvirt-sock
vagrant plugin install vagrant-libvirt
vagrant --version
-
name: Set up vagrant
run: |
ln -sf ${{ env.VAGRANT_FILE }} Vagrantfile
vagrant up --no-tty
-
name: Test
run: |
vagrant ssh -- "cd /vagrant; SKIP_INTEGRATION_TESTS=1 go test -mod=vendor -coverprofile=coverage.txt -covermode=atomic ${{ env.TESTFLAGS }} ./..."
vagrant ssh -c "sudo cat /vagrant/coverage.txt" > coverage.txt
-
name: Upload coverage
if: always()
uses: codecov/codecov-action@v5
with:
files: ./coverage.txt
env_vars: RUNNER_OS
flags: unit,${{ matrix.os }}
token: ${{ secrets.CODECOV_TOKEN }}
env:
RUNNER_OS: ${{ matrix.os }}
govulncheck:
runs-on: ubuntu-24.04
permissions:
# same as global permission
contents: read
# required to write sarif report
security-events: write
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Run
uses: docker/bake-action@v6
with:
targets: govulncheck
env:
GOVULNCHECK_FORMAT: sarif
-
name: Upload SARIF report
if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ env.DESTDIR }}/govulncheck.out
prepare-binaries:
runs-on: ubuntu-24.04
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Create matrix
id: platforms
@ -223,7 +365,7 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
binaries:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
needs:
- prepare-binaries
strategy:
@ -238,7 +380,7 @@ jobs:
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
@ -246,8 +388,8 @@ jobs:
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
@ -266,15 +408,24 @@ jobs:
if-no-files-found: error
bin-image:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
needs:
- test-integration
- test-unit
if: ${{ github.event_name != 'pull_request' && github.repository == 'docker/buildx' }}
steps:
-
name: Free disk space
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
with:
android: true
dotnet: true
haskell: true
large-packages: true
swap-storage: true
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
@ -282,8 +433,8 @@ jobs:
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Docker meta
@ -306,8 +457,9 @@ jobs:
password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
-
name: Build and push image
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
source: .
files: |
./docker-bake.hcl
${{ steps.meta.outputs.bake-file }}
@ -318,8 +470,42 @@ jobs:
*.cache-from=type=gha,scope=bin-image
*.cache-to=type=gha,scope=bin-image,mode=max
scout:
runs-on: ubuntu-24.04
if: ${{ github.ref == 'refs/heads/master' && github.repository == 'docker/buildx' }}
permissions:
# same as global permission
contents: read
# required to write sarif report
security-events: write
needs:
- bin-image
steps:
-
name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERPUBLICBOT_USERNAME }}
password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}
-
name: Scout
id: scout
uses: crazy-max/.github/.github/actions/docker-scout@ccae1c98f1237b5c19e4ef77ace44fa68b3bc7e4
with:
version: ${{ env.SCOUT_VERSION }}
format: sarif
image: registry://${{ env.REPO_SLUG }}:master
-
name: Upload SARIF report
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ steps.scout.outputs.result-file }}
release:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
permissions:
# required to create GitHub release
contents: write
needs:
- test-integration
- test-unit
@ -327,10 +513,10 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Download binaries
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
path: ${{ env.DESTDIR }}
pattern: buildx-*
@ -349,33 +535,9 @@ jobs:
-
name: GitHub Release
if: startsWith(github.ref, 'refs/tags/v')
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8 # v2.3.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
draft: true
files: ${{ env.DESTDIR }}/*
buildkit-edge:
runs-on: ubuntu-22.04
continue-on-error: true
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.BUILDX_VERSION }}
driver-opts: image=moby/buildkit:master
buildkitd-flags: --debug
-
# Just run a bake target to check everything runs fine
name: Build
uses: docker/bake-action@v4
with:
targets: binaries

View File

@ -1,5 +1,14 @@
name: codeql
# Default to 'contents: read', which grants actions read access to commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
push:
branches:
@ -7,24 +16,23 @@ on:
- 'v[0-9]*'
pull_request:
permissions:
actions: read
contents: read
security-events: write
env:
GO_VERSION: "1.21"
GO_VERSION: "1.24"
jobs:
codeql:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
permissions:
contents: read
actions: read
security-events: write
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: ${{ env.GO_VERSION }}
-

View File

@ -1,18 +1,39 @@
name: docs-release
# Default to 'contents: read', which grants actions read access to commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_dispatch:
inputs:
tag:
description: 'Git tag'
required: true
release:
types:
- released
env:
SETUP_BUILDX_VERSION: "edge"
SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
jobs:
open-pr:
runs-on: ubuntu-22.04
if: ${{ github.event.release.prerelease != true && github.repository == 'docker/buildx' }}
runs-on: ubuntu-24.04
if: ${{ (github.event.release.prerelease != true || github.event.inputs.tag != '') && github.repository == 'docker/buildx' }}
permissions:
contents: write
pull-requests: write
steps:
-
name: Checkout docs repo
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
repository: docker/docs
@ -20,39 +41,51 @@ jobs:
-
name: Prepare
run: |
rm -rf ./_data/buildx/*
rm -rf ./data/buildx/*
if [ -n "${{ github.event.inputs.tag }}" ]; then
echo "RELEASE_NAME=${{ github.event.inputs.tag }}" >> $GITHUB_ENV
else
echo "RELEASE_NAME=${{ github.event.release.name }}" >> $GITHUB_ENV
fi
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build docs
uses: docker/bake-action@v4
with:
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.event.release.name }}
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Generate yaml
uses: docker/bake-action@v6
with:
source: ${{ github.server_url }}/${{ github.repository }}.git#${{ env.RELEASE_NAME }}
targets: update-docs
provenance: false
set: |
*.output=/tmp/buildx-docs
env:
DOCS_FORMATS: yaml
-
name: Copy files
name: Copy yaml
run: |
cp /tmp/buildx-docs/out/reference/*.yaml ./_data/buildx/
cp /tmp/buildx-docs/out/reference/*.yaml ./data/buildx/
-
name: Commit changes
name: Update vendor
run: |
git add -A .
make vendor
env:
VENDOR_MODULE: github.com/docker/buildx@${{ env.RELEASE_NAME }}
-
name: Create PR on docs repo
uses: peter-evans/create-pull-request@a4f52f8033a6168103c2538976c07b467e8163bc
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
token: ${{ secrets.GHPAT_DOCS_DISPATCH }}
push-to-fork: docker-tools-robot/docker.github.io
commit-message: "build: update buildx reference to ${{ github.event.release.name }}"
commit-message: "vendor: github.com/docker/buildx ${{ env.RELEASE_NAME }}"
signoff: true
branch: dispatch/buildx-ref-${{ github.event.release.name }}
branch: dispatch/buildx-ref-${{ env.RELEASE_NAME }}
delete-branch: true
title: Update buildx reference to ${{ github.event.release.name }}
title: Update buildx reference to ${{ env.RELEASE_NAME }}
body: |
Update the buildx reference documentation to keep in sync with the latest release `${{ github.event.release.name }}`
Update the buildx reference documentation to keep in sync with the latest release `${{ env.RELEASE_NAME }}`
draft: false

View File

@ -3,6 +3,15 @@
# https://github.com/docker/docker.github.io/blob/98c7c9535063ae4cd2cd0a31478a21d16d2f07a3/docker-bake.hcl#L34-L36
name: docs-upstream
# Default to 'contents: read', which grants actions read access to commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@ -20,23 +29,27 @@ on:
- '.github/workflows/docs-upstream.yml'
- 'docs/**'
env:
SETUP_BUILDX_VERSION: "edge"
SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
jobs:
docs-yaml:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build reference YAML docs
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: update-docs
provenance: false
set: |
*.output=/tmp/buildx-docs
*.cache-from=type=gha,scope=docs-yaml
@ -52,7 +65,7 @@ jobs:
retention-days: 1
validate:
uses: docker/docs/.github/workflows/validate-upstream.yml@6b73b05acb21edf7995cc5b3c6672d8e314cee7a # pin for artifact v4 support: https://github.com/docker/docs/pull/19220
uses: docker/docs/.github/workflows/validate-upstream.yml@main
needs:
- docs-yaml
with:

View File

@ -1,5 +1,14 @@
name: e2e
# Default to 'contents: read', which grants actions read access to commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@ -17,23 +26,25 @@ on:
- 'docs/**'
env:
SETUP_BUILDX_VERSION: "edge"
SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
DESTDIR: "./bin"
K3S_VERSION: "v1.21.2-k3s1"
K3S_VERSION: "v1.32.2+k3s1"
jobs:
build:
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
targets: binaries
set: |
@ -54,7 +65,7 @@ jobs:
retention-days: 7
driver:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
needs:
- build
strategy:
@ -82,6 +93,10 @@ jobs:
driver-opt: qemu.install=true
- driver: remote
endpoint: tcp://localhost:1234
- driver: docker-container
metadata-provenance: max
- driver: docker-container
metadata-warnings: true
exclude:
- driver: docker
multi-node: mnode-true
@ -96,14 +111,14 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
if: matrix.driver == 'docker' || matrix.driver == 'docker-container'
-
name: Install buildx
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
name: binary
path: /home/runner/.docker/cli-plugins
@ -129,70 +144,18 @@ jobs:
else
echo "MULTI_NODE=0" >> $GITHUB_ENV
fi
if [ -n "${{ matrix.metadata-provenance }}" ]; then
echo "BUILDX_METADATA_PROVENANCE=${{ matrix.metadata-provenance }}" >> $GITHUB_ENV
fi
if [ -n "${{ matrix.metadata-warnings }}" ]; then
echo "BUILDX_METADATA_WARNINGS=${{ matrix.metadata-warnings }}" >> $GITHUB_ENV
fi
-
name: Install k3s
if: matrix.driver == 'kubernetes'
uses: actions/github-script@v7
uses: crazy-max/.github/.github/actions/install-k3s@7730d1434364d4b9aded32735b078a7ace5ea79a
with:
script: |
const fs = require('fs');
let wait = function(milliseconds) {
return new Promise((resolve, reject) => {
if (typeof(milliseconds) !== 'number') {
throw new Error('milleseconds not a number');
}
setTimeout(() => resolve("done!"), milliseconds)
});
}
try {
const kubeconfig="/tmp/buildkit-k3s/kubeconfig.yaml";
core.info(`storing kubeconfig in ${kubeconfig}`);
await exec.exec('docker', ["run", "-d",
"--privileged",
"--name=buildkit-k3s",
"-e", "K3S_KUBECONFIG_OUTPUT="+kubeconfig,
"-e", "K3S_KUBECONFIG_MODE=666",
"-v", "/tmp/buildkit-k3s:/tmp/buildkit-k3s",
"-p", "6443:6443",
"-p", "80:80",
"-p", "443:443",
"-p", "8080:8080",
"rancher/k3s:${{ env.K3S_VERSION }}", "server"
]);
await wait(10000);
core.exportVariable('KUBECONFIG', kubeconfig);
let nodeName;
for (let count = 1; count <= 5; count++) {
try {
const nodeNameOutput = await exec.getExecOutput("kubectl get nodes --no-headers -oname");
nodeName = nodeNameOutput.stdout
} catch (error) {
core.info(`Unable to resolve node name (${error.message}). Attempt ${count} of 5.`)
} finally {
if (nodeName) {
break;
}
await wait(5000);
}
}
if (!nodeName) {
throw new Error(`Unable to resolve node name after 5 attempts.`);
}
await exec.exec(`kubectl wait --for=condition=Ready ${nodeName}`);
} catch (error) {
core.setFailed(error.message);
}
-
name: Print KUBECONFIG
if: matrix.driver == 'kubernetes'
run: |
yq ${{ env.KUBECONFIG }}
version: ${{ env.K3S_VERSION }}
-
name: Launch remote buildkitd
if: matrix.driver == 'remote'
@ -214,3 +177,78 @@ jobs:
DRIVER_OPT: ${{ matrix.driver-opt }}
ENDPOINT: ${{ matrix.endpoint }}
PLATFORMS: ${{ matrix.platforms }}
bake:
runs-on: ubuntu-24.04
needs:
- build
env:
DOCKER_BUILD_CHECKS_ANNOTATIONS: false
DOCKER_BUILD_SUMMARY: false
strategy:
fail-fast: false
matrix:
include:
-
# https://github.com/docker/bake-action/blob/v5.11.0/.github/workflows/ci.yml#L227-L237
source: "https://github.com/docker/bake-action.git#v5.11.0:test/go"
overrides: |
*.output=/tmp/bake-build
-
# https://github.com/tonistiigi/xx/blob/2fc85604e7280bfb3f626569bd4c5413c43eb4af/.github/workflows/ld.yml#L90-L98
source: "https://github.com/tonistiigi/xx.git#2fc85604e7280bfb3f626569bd4c5413c43eb4af"
targets: |
ld64-static-tgz
overrides: |
ld64-static-tgz.output=type=local,dest=./dist
ld64-static-tgz.platform=linux/amd64
ld64-static-tgz.cache-from=type=gha,scope=xx-ld64-static-tgz
ld64-static-tgz.cache-to=type=gha,scope=xx-ld64-static-tgz
-
# https://github.com/moby/buildkit-bench/blob/54c194011c4fc99a94aa75d4b3d4f3ffd4c4ce27/docker-bake.hcl#L154-L160
source: "https://github.com/moby/buildkit-bench.git#54c194011c4fc99a94aa75d4b3d4f3ffd4c4ce27"
targets: |
tests-buildkit
envs: |
BUILDKIT_REFS=v0.18.2
steps:
-
name: Checkout
uses: actions/checkout@v5
-
name: Expose GitHub Runtime
uses: crazy-max/ghaction-github-runtime@v3
-
name: Environment variables
if: matrix.envs != ''
run: |
for l in "${{ matrix.envs }}"; do
echo "${l?}" >> $GITHUB_ENV
done
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Install buildx
uses: actions/download-artifact@v5
with:
name: binary
path: /home/runner/.docker/cli-plugins
-
name: Fix perms and check
run: |
chmod +x /home/runner/.docker/cli-plugins/docker-buildx
docker buildx version
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
with:
source: ${{ matrix.source }}
targets: ${{ matrix.targets }}
set: ${{ matrix.overrides }}

32
.github/workflows/labeler.yml vendored Normal file
View File

@ -0,0 +1,32 @@
name: labeler
# Default to 'contents: read', which grants actions read access to commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request_target:
jobs:
labeler:
runs-on: ubuntu-latest
permissions:
# same as global permission
contents: read
# required for writing labels
pull-requests: write
steps:
-
name: Run
uses: actions/labeler@v6
with:
sync-labels: true

17
.github/workflows/pr-assign-author.yml vendored Normal file
View File

@ -0,0 +1,17 @@
name: pr-assign-author
permissions:
contents: read
on:
pull_request_target:
types:
- opened
- reopened
jobs:
run:
uses: crazy-max/.github/.github/workflows/pr-assign-author.yml@c27924b5b93ccfe6dcc0d7b22e779ef3c05f9a92
permissions:
contents: read
pull-requests: write

View File

@ -1,5 +1,14 @@
name: validate
# Default to 'contents: read', which grants actions read access to commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@ -16,29 +25,49 @@ on:
paths-ignore:
- '.github/releases.json'
env:
SETUP_BUILDX_VERSION: "edge"
SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"
jobs:
validate:
runs-on: ubuntu-22.04
env:
GOLANGCI_LINT_MULTIPLATFORM: 1
strategy:
fail-fast: false
matrix:
target:
- lint
- validate-vendor
- validate-docs
- validate-generated-files
prepare:
runs-on: ubuntu-24.04
outputs:
includes: ${{ steps.generate.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
-
name: Generate matrix
id: generate
uses: docker/bake-action/subaction/matrix@v6
with:
target: validate
fields: platforms
env:
GOLANGCI_LINT_MULTIPLATFORM: ${{ github.repository == 'docker/buildx' && '1' || '' }}
validate:
runs-on: ubuntu-24.04
needs:
- prepare
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.prepare.outputs.includes) }}
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: latest
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Run
run: |
make ${{ matrix.target }}
name: Validate
uses: docker/bake-action@v6
with:
targets: ${{ matrix.target }}
set: |
*.platform=${{ matrix.platforms }}

View File

@ -1,69 +1,119 @@
run:
timeout: 30m
skip-files:
- ".*\\.pb\\.go$"
version: "2"
run:
modules-download-mode: vendor
build-tags:
linters:
default: none
enable:
- gofmt
- govet
- bodyclose
- depguard
- goimports
- forbidigo
- gocritic
- gosec
- govet
- ineffassign
- makezero
- misspell
- unused
- noctx
- nolintlint
- revive
- staticcheck
- typecheck
- nolintlint
- gosec
- forbidigo
disable-all: true
linters-settings:
depguard:
- testifylint
- unused
- whitespace
settings:
depguard:
rules:
main:
deny:
- pkg: "github.com/containerd/containerd/errdefs"
desc: The containerd errdefs package was migrated to a separate module. Use github.com/containerd/errdefs instead.
- pkg: "github.com/containerd/containerd/log"
desc: The containerd log package was migrated to a separate module. Use github.com/containerd/log instead.
- pkg: "github.com/containerd/containerd/platforms"
desc: The containerd platforms package was migrated to a separate module. Use github.com/containerd/platforms instead.
- pkg: "io/ioutil"
desc: The io/ioutil package has been deprecated.
forbidigo:
forbid:
- pattern: ^context\.WithCancel(# use context\.WithCancelCause instead)?$
- pattern: ^context\.WithDeadline(# use context\.WithDeadline instead)?$
- pattern: ^context\.WithTimeout(# use context\.WithTimeoutCause instead)?$
- pattern: ^ctx\.Err(# use context\.Cause instead)?$
- pattern: ^fmt\.Errorf(# use errors\.Errorf instead)?$
- pattern: ^platforms\.DefaultString(# use platforms\.Format(platforms\.DefaultSpec()) instead\.)?$
gocritic:
disabled-checks:
- "ifElseChain"
- "assignOp"
- "appendAssign"
- "singleCaseSwitch"
gosec:
excludes:
- G204
- G402
- G115
config:
G306: "0644"
govet:
enable:
- nilness
- unusedwrite
importas:
alias:
- pkg: "github.com/containerd/errdefs"
alias: "cerrdefs"
- pkg: "github.com/docker/docker/client"
alias: "dockerclient"
- pkg: "github.com/opencontainers/image-spec/specs-go/v1"
alias: "ocispecs"
- pkg: "github.com/opencontainers/go-digest"
alias: "digest"
testifylint:
disable:
- empty
- bool-compare
- len
- negative-positive
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
main:
deny:
# The io/ioutil package has been deprecated.
# https://go.dev/doc/go1.16#ioutil
- pkg: "io/ioutil"
desc: The io/ioutil package has been deprecated.
forbidigo:
forbid:
- '^fmt\.Errorf(# use errors\.Errorf instead)?$'
gosec:
excludes:
- G204 # Audit use of command execution
- G402 # TLS MinVersion too low
config:
G306: "0644"
- linters:
- revive
text: stutters
- linters:
- revive
text: empty-block
- linters:
- revive
text: superfluous-else
- linters:
- revive
text: unused-parameter
- linters:
- revive
text: redefines-builtin-id
- linters:
- revive
text: if-return
paths:
- .*\.pb\.go$
formatters:
enable:
- gofmt
- goimports
exclusions:
generated: lax
paths:
- .*\.pb\.go$
issues:
exclude-rules:
- linters:
- revive
text: "stutters"
- linters:
- revive
text: "empty-block"
- linters:
- revive
text: "superfluous-else"
- linters:
- revive
text: "unused-parameter"
- linters:
- revive
text: "redefines-builtin-id"
- linters:
- revive
text: "if-return"
# show all
max-issues-per-linter: 0
max-same-issues: 0
max-issues-per-linter: 0
max-same-issues: 0

View File

@ -1,11 +1,25 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see hack/dockerfiles/authors.Dockerfile.
Batuhan Apaydın <batuhan.apaydin@trendyol.com>
Batuhan Apaydın <batuhan.apaydin@trendyol.com> <developerguy2@gmail.com>
CrazyMax <github@crazymax.dev>
CrazyMax <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
CrazyMax <github@crazymax.dev> <crazy-max@users.noreply.github.com>
David Karlsson <david.karlsson@docker.com>
David Karlsson <david.karlsson@docker.com> <35727626+dvdksn@users.noreply.github.com>
jaihwan104 <jaihwan104@woowahan.com>
jaihwan104 <jaihwan104@woowahan.com> <42341126+jaihwan104@users.noreply.github.com>
Kenyon Ralph <kenyon@kenyonralph.com>
Kenyon Ralph <kenyon@kenyonralph.com> <quic_kralph@quicinc.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Shaun Thompson <shaun.thompson@docker.com>
Shaun Thompson <shaun.thompson@docker.com> <shaun.b.thompson@gmail.com>
Silvin Lubecki <silvin.lubecki@docker.com>
Silvin Lubecki <silvin.lubecki@docker.com> <31478878+silvin-lubecki@users.noreply.github.com>
Talon Bowler <talon.bowler@docker.com>
Talon Bowler <talon.bowler@docker.com> <nolat301@gmail.com>
Tibor Vass <tibor@docker.com>
Tibor Vass <tibor@docker.com> <tiborvass@users.noreply.github.com>
Tõnis Tiigi <tonistiigi@gmail.com>

69
AUTHORS
View File

@ -1,45 +1,112 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see hack/dockerfiles/authors.Dockerfile.
accetto <34798830+accetto@users.noreply.github.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Aleksa Sarai <cyphar@cyphar.com>
Alex Couture-Beil <alex@earthly.dev>
Andrew Haines <andrew.haines@zencargo.com>
Andy Caldwell <andrew.caldwell@metaswitch.com>
Andy MacKinlay <admackin@users.noreply.github.com>
Anthony Poschen <zanven42@gmail.com>
Arnold Sobanski <arnold@l4g.dev>
Artur Klauser <Artur.Klauser@computer.org>
Batuhan Apaydın <developerguy2@gmail.com>
Avi Deitcher <avi@deitcher.net>
Batuhan Apaydın <batuhan.apaydin@trendyol.com>
Ben Peachey <potherca@gmail.com>
Bertrand Paquet <bertrand.paquet@gmail.com>
Bin Du <bindu@microsoft.com>
Brandon Philips <brandon@ifup.org>
Brian Goff <cpuguy83@gmail.com>
Bryce Lampe <bryce@pulumi.com>
Cameron Adams <pnzreba@gmail.com>
Christian Dupuis <cd@atomist.com>
Cory Snider <csnider@mirantis.com>
CrazyMax <github@crazymax.dev>
David Gageot <david.gageot@docker.com>
David Karlsson <david.karlsson@docker.com>
David Scott <dave@recoil.org>
dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Devin Bayer <dev@doubly.so>
Djordje Lukic <djordje.lukic@docker.com>
Dmitry Makovey <dmakovey@gitlab.com>
Dmytro Makovey <dmytro.makovey@docker.com>
Donghui Wang <977675308@qq.com>
Doug Borg <dougborg@apple.com>
Edgar Lee <edgarl@netflix.com>
Eli Treuherz <et@arenko.group>
Eliott Wiener <eliottwiener@gmail.com>
Elran Shefer <elran.shefer@velocity.tech>
faust <faustin@fala.red>
Felipe Santos <felipecassiors@gmail.com>
Felix de Souza <fdesouza@palantir.com>
Fernando Miguel <github@FernandoMiguel.net>
gfrancesco <gfrancesco@users.noreply.github.com>
gracenoah <gracenoahgh@gmail.com>
Guillaume Lours <705411+glours@users.noreply.github.com>
guoguangwu <guoguangwu@magic-shield.com>
Hollow Man <hollowman@hollowman.ml>
Ian King'ori <kingorim.ian@gmail.com>
idnandre <andre@idntimes.com>
Ilya Dmitrichenko <errordeveloper@gmail.com>
Isaac Gaskin <isaac.gaskin@circle.com>
Jack Laxson <jackjrabbit@gmail.com>
jaihwan104 <jaihwan104@woowahan.com>
Jean-Yves Gastaud <jygastaud@gmail.com>
Jhan S. Álvarez <51450231+yastanotheruser@users.noreply.github.com>
Jonathan A. Sternberg <jonathan.sternberg@docker.com>
Jonathan Piché <jpiche@coveo.com>
Justin Chadwell <me@jedevc.com>
Kenyon Ralph <kenyon@kenyonralph.com>
khs1994 <khs1994@khs1994.com>
Kijima Daigo <norimaking777@gmail.com>
Kohei Tokunaga <ktokunaga.mail@gmail.com>
Kotaro Adachi <k33asby@gmail.com>
Kushagra Mansingh <12158241+kushmansingh@users.noreply.github.com>
l00397676 <lujingxiao@huawei.com>
Laura Brehm <laurabrehm@hey.com>
Laurent Goderre <laurent.goderre@docker.com>
Mark Hildreth <113933455+markhildreth-gravity@users.noreply.github.com>
Mayeul Blanzat <mayeul.blanzat@datadoghq.com>
Michal Augustyn <michal.augustyn@mail.com>
Milas Bowman <milas.bowman@docker.com>
Mitsuru Kariya <mitsuru.kariya@nttdata.com>
Moleus <fafufuburr@gmail.com>
Nick Santos <nick.santos@docker.com>
Nick Sieger <nick@nicksieger.com>
Nicolas De Loof <nicolas.deloof@gmail.com>
Niklas Gehlen <niklas@namespacelabs.com>
Patrick Van Stee <patrick@vanstee.me>
Paweł Gronowski <pawel.gronowski@docker.com>
Phong Tran <tran.pho@northeastern.edu>
Qasim Sarfraz <qasimsarfraz@microsoft.com>
Rob Murray <rob.murray@docker.com>
robertlestak <robert.lestak@umusic.com>
Saul Shanabrook <s.shanabrook@gmail.com>
Sean P. Kane <spkane00@gmail.com>
Sebastiaan van Stijn <github@gone.nl>
Shaun Thompson <shaun.thompson@docker.com>
SHIMA Tatsuya <ts1s1andn@gmail.com>
Silvin Lubecki <silvin.lubecki@docker.com>
Simon A. Eugster <simon.eu@gmail.com>
Solomon Hykes <sh.github.6811@hykes.org>
Sumner Warren <sumner.warren@gmail.com>
Sune Keller <absukl@almbrand.dk>
Talon Bowler <talon.bowler@docker.com>
Tianon Gravi <admwiggin@gmail.com>
Tibor Vass <tibor@docker.com>
Tim Smith <tismith@rvohealth.com>
Timofey Kirillov <timofey.kirillov@flant.com>
Tyler Smith <tylerlwsmith@gmail.com>
Tõnis Tiigi <tonistiigi@gmail.com>
Ulysses Souza <ulyssessouza@gmail.com>
Usual Coder <34403413+Usual-Coder@users.noreply.github.com>
Wang Jinglei <morlay.null@gmail.com>
Wei <daviseago@gmail.com>
Wojciech M <wmiedzybrodzki@outlook.com>
Xiang Dai <764524258@qq.com>
Zachary Povey <zachary.povey@autotrader.co.uk>
zelahi <elahi.zuhayr@gmail.com>
Zero <tobewhatwewant@gmail.com>
zhyon404 <zhyong4@gmail.com>
Zsolt <zsolt.szeberenyi@figured.com>

View File

@ -1,17 +1,32 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.21
ARG XX_VERSION=1.4.0
ARG GO_VERSION=1.24
ARG ALPINE_VERSION=3.22
ARG XX_VERSION=1.6.1
ARG DOCKER_VERSION=25.0.2
ARG GOTESTSUM_VERSION=v1.9.0
ARG REGISTRY_VERSION=2.8.0
ARG BUILDKIT_VERSION=v0.12.5
# for testing
ARG DOCKER_VERSION=28.4
ARG DOCKER_VERSION_ALT_27=27.5.1
ARG DOCKER_VERSION_ALT_26=26.1.3
ARG DOCKER_CLI_VERSION=${DOCKER_VERSION}
ARG GOTESTSUM_VERSION=v1.12.0
ARG REGISTRY_VERSION=3.0.0
ARG BUILDKIT_VERSION=v0.23.2
ARG COMPOSE_VERSION=v2.39.1
ARG UNDOCK_VERSION=0.9.0
# xx is a helper for cross-compilation
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS golatest
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS golatest
FROM moby/moby-bin:$DOCKER_VERSION AS docker-engine
FROM dockereng/cli-bin:$DOCKER_CLI_VERSION AS docker-cli
FROM moby/moby-bin:$DOCKER_VERSION_ALT_27 AS docker-engine-alt27
FROM moby/moby-bin:$DOCKER_VERSION_ALT_26 AS docker-engine-alt26
FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_27 AS docker-cli-alt27
FROM dockereng/cli-bin:$DOCKER_VERSION_ALT_26 AS docker-cli-alt26
FROM registry:$REGISTRY_VERSION AS registry
FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
FROM docker/compose-bin:$COMPOSE_VERSION AS compose
FROM crazymax/undock:$UNDOCK_VERSION AS undock
FROM golatest AS gobase
COPY --from=xx / /
@ -20,32 +35,38 @@ ENV GOFLAGS=-mod=vendor
ENV CGO_ENABLED=0
WORKDIR /src
FROM registry:$REGISTRY_VERSION AS registry
FROM moby/buildkit:$BUILDKIT_VERSION AS buildkit
FROM gobase AS docker
ARG TARGETPLATFORM
ARG DOCKER_VERSION
WORKDIR /opt/docker
RUN DOCKER_ARCH=$(case ${TARGETPLATFORM:-linux/amd64} in \
"linux/amd64") echo "x86_64" ;; \
"linux/arm/v6") echo "armel" ;; \
"linux/arm/v7") echo "armhf" ;; \
"linux/arm64") echo "aarch64" ;; \
"linux/ppc64le") echo "ppc64le" ;; \
"linux/s390x") echo "s390x" ;; \
*) echo "" ;; esac) \
&& echo "DOCKER_ARCH=$DOCKER_ARCH" \
&& wget -qO- "https://download.docker.com/linux/static/stable/${DOCKER_ARCH}/docker-${DOCKER_VERSION}.tgz" | tar xvz --strip 1
RUN ./dockerd --version && ./containerd --version && ./ctr --version && ./runc --version
FROM gobase AS gotestsum
ARG GOTESTSUM_VERSION
ENV GOFLAGS=
RUN --mount=target=/root/.cache,type=cache \
GOBIN=/out/ go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" && \
/out/gotestsum --version
ENV GOFLAGS=""
RUN --mount=target=/root/.cache,type=cache <<EOT
set -ex
go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}"
go install "github.com/wadey/gocovmerge@latest"
mkdir /out
/go/bin/gotestsum --version
mv /go/bin/gotestsum /out
mv /go/bin/gocovmerge /out
EOT
COPY --chmod=755 <<"EOF" /out/gotestsumandcover
#!/bin/sh
set -x
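# No coverage profile requested: hand off to gotestsum directly.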
if [ -z "$GO_TEST_COVERPROFILE" ]; then
exec gotestsum "$@"
fi
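# Run the tests with coverage, then merge the main profile with any helper
# coverage data written under "$coverdir/helpers" before cleaning up.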
coverdir="$(dirname "$GO_TEST_COVERPROFILE")"
mkdir -p "$coverdir/helpers"
gotestsum "$@" "-coverprofile=$GO_TEST_COVERPROFILE"
ecode=$?
go tool covdata textfmt -i=$coverdir/helpers -o=$coverdir/helpers-report.txt
gocovmerge "$coverdir/helpers-report.txt" "$GO_TEST_COVERPROFILE" > "$coverdir/merged-report.txt"
mv "$coverdir/merged-report.txt" "$GO_TEST_COVERPROFILE"
rm "$coverdir/helpers-report.txt"
for f in "$coverdir/helpers"/*; do
rm "$f"
done
rmdir "$coverdir/helpers"
exit $ecode
EOF
FROM gobase AS buildx-version
RUN --mount=type=bind,target=. <<EOT
@ -57,6 +78,7 @@ EOT
FROM gobase AS buildx-build
ARG TARGETPLATFORM
ARG GO_EXTRA_FLAGS
RUN --mount=type=bind,target=. \
--mount=type=cache,target=/root/.cache \
--mount=type=cache,target=/go/pkg/mod \
@ -64,6 +86,7 @@ RUN --mount=type=bind,target=. \
set -e
xx-go --wrap
DESTDIR=/usr/bin VERSION=$(cat /buildx-version/version) REVISION=$(cat /buildx-version/revision) GO_EXTRA_LDFLAGS="-s -w" ./hack/build
file /usr/bin/docker-buildx
xx-verify --static /usr/bin/docker-buildx
EOT
@ -82,7 +105,10 @@ FROM scratch AS binaries-unix
COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx
FROM binaries-unix AS binaries-darwin
FROM binaries-unix AS binaries-freebsd
FROM binaries-unix AS binaries-linux
FROM binaries-unix AS binaries-netbsd
FROM binaries-unix AS binaries-openbsd
FROM scratch AS binaries-windows
COPY --link --from=buildx-build /usr/bin/docker-buildx /buildx.exe
@ -103,18 +129,27 @@ RUN apk add --no-cache \
shadow-uidmap \
xfsprogs \
xz
COPY --link --from=gotestsum /out/gotestsum /usr/bin/
COPY --link --from=gotestsum /out /usr/bin/
COPY --link --from=registry /bin/registry /usr/bin/
COPY --link --from=docker /opt/docker/* /usr/bin/
COPY --link --from=docker-engine / /usr/bin/
COPY --link --from=docker-cli / /usr/bin/
COPY --link --from=docker-engine-alt27 / /opt/docker-alt-27/
COPY --link --from=docker-engine-alt26 / /opt/docker-alt-26/
COPY --link --from=docker-cli-alt27 / /opt/docker-alt-27/
COPY --link --from=docker-cli-alt26 / /opt/docker-alt-26/
COPY --link --from=buildkit /usr/bin/buildkitd /usr/bin/
COPY --link --from=buildkit /usr/bin/buildctl /usr/bin/
COPY --link --from=compose /docker-compose /usr/bin/compose
COPY --link --from=undock /usr/local/bin/undock /usr/bin/
COPY --link --from=binaries /buildx /usr/bin/
RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx
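# Extra Docker engine/CLI versions exposed to the integration tests as name=path pairs.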
ENV TEST_DOCKER_EXTRA="docker@27.5=/opt/docker-alt-27,docker@26.1=/opt/docker-alt-26"
FROM integration-test-base AS integration-test
COPY . .
# Release
FROM --platform=$BUILDPLATFORM alpine AS releaser
FROM --platform=$BUILDPLATFORM alpine:${ALPINE_VERSION} AS releaser
WORKDIR /work
ARG TARGETPLATFORM
RUN --mount=from=binaries \
@ -129,7 +164,7 @@ COPY --from=releaser /out/ /
# Shell
FROM docker:$DOCKER_VERSION AS dockerd-release
FROM alpine AS shell
FROM alpine:${ALPINE_VERSION} AS shell
RUN apk add --no-cache iptables tmux git vim less openssh
RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/local/bin/buildx /usr/local/lib/docker/cli-plugins/docker-buildx
COPY ./hack/demo-env/entrypoint.sh /usr/local/bin

View File

@ -153,6 +153,7 @@ made through a pull request.
"akihirosuda",
"crazy-max",
"jedevc",
"jsternberg",
"tiborvass",
"tonistiigi",
]
@ -194,6 +195,11 @@ made through a pull request.
Email = "me@jedevc.com"
GitHub = "jedevc"
[people.jsternberg]
Name = "Jonathan Sternberg"
Email = "jonathan.sternberg@docker.com"
GitHub = "jsternberg"
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"

View File

@ -8,6 +8,8 @@ endif
export BUILDX_CMD ?= docker buildx
BAKE_TARGETS := binaries binaries-cross lint lint-gopls validate-vendor validate-docs validate-authors
.PHONY: all
all: binaries
@ -19,13 +21,9 @@ build:
shell:
./hack/shell
.PHONY: binaries
binaries:
$(BUILDX_CMD) bake binaries
.PHONY: binaries-cross
binaries-cross:
$(BUILDX_CMD) bake binaries-cross
.PHONY: $(BAKE_TARGETS)
$(BAKE_TARGETS):
$(BUILDX_CMD) bake $@
.PHONY: install
install: binaries
@ -37,11 +35,7 @@ release:
./hack/release
.PHONY: validate-all
validate-all: lint test validate-vendor validate-docs validate-generated-files
.PHONY: lint
lint:
$(BUILDX_CMD) bake lint
validate-all: lint test validate-vendor validate-docs
.PHONY: test
test:
@ -55,22 +49,6 @@ test-unit:
test-integration:
TESTPKGS=./tests ./hack/test
.PHONY: validate-vendor
validate-vendor:
$(BUILDX_CMD) bake validate-vendor
.PHONY: validate-docs
validate-docs:
$(BUILDX_CMD) bake validate-docs
.PHONY: validate-authors
validate-authors:
$(BUILDX_CMD) bake validate-authors
.PHONY: validate-generated-files
validate-generated-files:
$(BUILDX_CMD) bake validate-generated-files
.PHONY: test-driver
test-driver:
./hack/test-driver
@ -90,7 +68,3 @@ authors:
.PHONY: mod-outdated
mod-outdated:
$(BUILDX_CMD) bake mod-outdated
.PHONY: generated-files
generated-files:
$(BUILDX_CMD) bake update-generated-files

452
PROJECT.md Normal file
View File

@ -0,0 +1,452 @@
# Project processing guide <!-- omit from toc -->
- [Project scope](#project-scope)
- [Labels](#labels)
- [Global](#global)
- [`area/`](#area)
- [`exp/`](#exp)
- [`impact/`](#impact)
- [`kind/`](#kind)
- [`needs/`](#needs)
- [`priority/`](#priority)
- [`status/`](#status)
- [Types of releases](#types-of-releases)
- [Feature releases](#feature-releases)
- [Release Candidates](#release-candidates)
- [Support Policy](#support-policy)
- [Contributing to Releases](#contributing-to-releases)
- [Patch releases](#patch-releases)
- [Milestones](#milestones)
- [Triage process](#triage-process)
- [Verify essential information](#verify-essential-information)
- [Classify the issue](#classify-the-issue)
- [Prioritization guidelines for `kind/bug`](#prioritization-guidelines-for-kindbug)
- [Issue lifecycle](#issue-lifecycle)
- [Examples](#examples)
- [Submitting a bug](#submitting-a-bug)
- [Pull request review process](#pull-request-review-process)
- [Handling stalled issues and pull requests](#handling-stalled-issues-and-pull-requests)
- [Moving to a discussion](#moving-to-a-discussion)
- [Workflow automation](#workflow-automation)
- [Exempting an issue/PR from stale bot processing](#exempting-an-issuepr-from-stale-bot-processing)
- [Updating dependencies](#updating-dependencies)
---
## Project scope
**Docker Buildx** is a Docker CLI plugin designed to extend build capabilities using BuildKit. It provides advanced features for building container images, supporting multiple builder instances, multi-node builds, and high-level build constructs. Buildx enhances the Docker build process, making it more efficient and flexible, and is compatible with both Docker and Kubernetes environments. Key features include:
- **Familiar user experience:** Buildx offers a user experience similar to legacy `docker build`, ensuring a smooth transition from legacy commands
- **Full BuildKit capabilities:** Leverage the full feature set of [`moby/buildkit`](https://github.com/moby/buildkit) when using the container driver
- **Multiple builder instances:** Supports multiple builder instances, allowing concurrent builds and effective management and monitoring of these builders
- **Multi-node builds:** Use multiple nodes to build cross-platform images
- **Compose integration:** Build complex, multi-service files as defined in compose
- **High-level build constructs via `bake`:** Introduces high-level build constructs for more complex build workflows
- **In-container driver support:** In-container drivers for both Docker and Kubernetes environments, providing isolation and security
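For a concrete sense of how these capabilities are exercised in CI, here is a minimal sketch using the same actions that appear throughout this repository's workflows; the target name and platform list are illustrative assumptions, not part of this repository's bake definition:

```yaml
# Illustrative only: a docker-container builder driving a multi-platform bake build.
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
  with:
    driver: docker-container
- name: Build
  uses: docker/bake-action@v6
  with:
    targets: binaries # assumed bake target
    set: |
      *.platform=linux/amd64,linux/arm64
```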
## Labels
Below are common groups, labels, and their intended usage to support issues, pull requests, and discussion processing.
### Global
General attributes that can apply to nearly any issue or pull request.
| Label | Applies to | Description |
| ------------------- | ----------- | ------------------------------------------------------------------------- |
| `bot` | Issues, PRs | Created by a bot |
| `good first issue` | Issues | Suitable for first-time contributors |
| `help wanted` | Issues, PRs | Assistance requested |
| `lgtm` | PRs | “Looks good to me” approval |
| `stale` | Issues, PRs | The issue/PR has not had activity for a while |
| `rotten` | Issues, PRs | The issue/PR has not had activity since being marked stale and was closed |
| `frozen` | Issues, PRs | The issue/PR should be skipped by the stale-bot |
| `dco/no` | PRs | The PR is missing a developer certificate of origin sign-off |
### `area/`
Area or component of the project affected. Please note that the table below may not be inclusive of all current options.
| Label | Applies to | Description |
| ------------------------------ | ---------- | -------------------------- |
| `area/bake` | Any | `bake` |
| `area/bake/compose` | Any | `bake/compose` |
| `area/build` | Any | `build` |
| `area/builder` | Any | `builder` |
| `area/buildkit` | Any | Relates to `moby/buildkit` |
| `area/cache` | Any | `cache` |
| `area/checks` | Any | `checks` |
| `area/ci` | Any | Project CI |
| `area/cli` | Any | `cli` |
| `area/debug` | Any | `debug` |
| `area/dependencies` | Any | Project dependencies |
| `area/dockerfile` | Any | `dockerfile` |
| `area/docs` | Any | `docs` |
| `area/driver` | Any | `driver` |
| `area/driver/docker` | Any | `driver/docker` |
| `area/driver/docker-container` | Any | `driver/docker-container` |
| `area/driver/kubernetes` | Any | `driver/kubernetes` |
| `area/driver/remote` | Any | `driver/remote` |
| `area/feature-parity` | Any | `feature-parity` |
| `area/github-actions` | Any | `github-actions` |
| `area/hack` | Any | Project hack/support |
| `area/imagetools` | Any | `imagetools` |
| `area/metrics` | Any | `metrics` |
| `area/moby` | Any | Relates to `moby/moby` |
| `area/project` | Any | Project support |
| `area/qemu` | Any | `qemu` |
| `area/tests` | Any | Project testing |
| `area/windows` | Any | `windows` |
### `exp/`
Estimated experience level required to complete the item.
| Label | Applies to | Description |
| ------------------ | ---------- | ------------------------------------------------------------------------------- |
| `exp/beginner` | Issue | Suitable for contributors new to the project or technology stack |
| `exp/intermediate` | Issue | Requires some familiarity with the project and technology |
| `exp/expert` | Issue | Requires deep understanding and advanced skills with the project and technology |
### `impact/`
Potential impact areas of the issue or pull request.
| Label | Applies to | Description |
| -------------------- | ---------- | -------------------------------------------------- |
| `impact/breaking` | PR | Change is API-breaking |
| `impact/changelog` | PR | When complete, the item should be in the changelog |
| `impact/deprecation` | PR | Change is a deprecation of a feature |
### `kind/`
The type of issue, pull request, or discussion.
| Label | Applies to | Description |
| ------------------ | ----------------- | ------------------------------------------------------- |
| `kind/bug` | Issue, PR | Confirmed bug |
| `kind/chore` | Issue, PR | Project support tasks |
| `kind/docs` | Issue, PR | Additions or modifications to the documentation |
| `kind/duplicate` | Any | Duplicate of another item |
| `kind/enhancement` | Any | Enhancement of an existing feature |
| `kind/feature` | Any | A brand new feature |
| `kind/maybe-bug` | Issue, PR | Unconfirmed bug, turns into kind/bug when confirmed |
| `kind/proposal` | Issue, Discussion | A proposed major change |
| `kind/refactor` | Issue, PR | Refactor of existing code |
| `kind/support` | Any | A question, discussion, or other user support item |
| `kind/tests` | Issue, PR | Additions or modifications to the project testing suite |
### `needs/`
Actions or missing requirements needed by the issue or pull request.
| Label | Applies to | Description |
| --------------------------- | ---------- | ----------------------------------------------------- |
| `needs/assignee` | Issue, PR | Needs an assignee |
| `needs/code-review` | PR | Needs review of code |
| `needs/design-review` | Issue, PR | Needs review of design |
| `needs/docs-review` | Issue, PR | Needs review by the documentation team |
| `needs/docs-update` | Issue, PR | Needs an update to the docs |
| `needs/follow-on-work` | Issue, PR | Needs follow-on work/PR |
| `needs/issue` | PR | Needs an issue |
| `needs/maintainer-decision` | Issue, PR | Needs maintainer discussion/decision before advancing |
| `needs/milestone` | Issue, PR | Needs milestone assignment |
| `needs/more-info` | Any | Needs more information from the author |
| `needs/more-investigation` | Issue, PR | Needs further investigation |
| `needs/priority` | Issue, PR | Needs priority assignment |
| `needs/pull-request` | Issue | Needs a pull request |
| `needs/rebase` | PR | Needs rebase to target branch |
| `needs/reproduction` | Issue, PR | Needs reproduction steps |
### `priority/`
Level of urgency of a `kind/bug` issue or pull request.
| Label | Applies to | Description |
| ------------- | ---------- | ----------------------------------------------------------------------- |
| `priority/P0` | Issue, PR | Urgent: Security, critical bugs, blocking issues. |
| `priority/P1` | Issue, PR | Important: This is a top priority and a must-have for the next release. |
| `priority/P2` | Issue, PR | Normal: Default priority |
### `status/`
Current lifecycle state of the issue or pull request.
| Label | Applies to | Description |
| --------------------- | ---------- | ---------------------------------------------------------------------- |
| `status/accepted` | Issue, PR | The issue has been reviewed and accepted for implementation |
| `status/active` | PR | The PR is actively being worked on by a maintainer or community member |
| `status/blocked` | Issue, PR | The issue/PR is blocked from advancing to another status |
| `status/do-not-merge` | PR | Should not be merged pending further review or changes |
| `status/transfer` | Any | Transferred to another project |
| `status/triage` | Any | The item needs to be sorted by maintainers |
| `status/wontfix` | Issue, PR | The issue/PR will not be fixed or addressed as described |
## Types of releases
This project has feature releases, patch releases, and security releases.
### Feature releases
Feature releases are made from the development branch, after which a release branch is cut for future patch releases; this may also happen during the code freeze period.
#### Release Candidates
Users can expect 2-3 release candidate (RC) test releases prior to a feature release. The first RC is typically released about one to two weeks before the final release.
#### Support Policy
Once a new feature release is cut, support for the previous feature release is discontinued. An exception may be made for urgent security releases that occur shortly after a new feature release. Buildx does not offer LTS (Long-Term Support) releases.
#### Contributing to Releases
Anyone can request that an issue or PR be included in the next feature or patch release milestone, provided it meets the necessary requirements.
### Patch releases
Patch releases should only include the most critical patches. Stability is vital, so everyone should always use the latest patch release.
If a fix is needed but does not qualify for a patch release because of its code size or other criteria that make it too unpredictable, we will prioritize cutting a new feature release sooner rather than making an exception for backporting.
The following PRs are included in patch releases:
- `priority/P0` fixes
- `priority/P1` fixes, assuming maintainers don't object because of the patch size
- `priority/P2` fixes, only if both of the following hold:
- proposed by a maintainer
- the patch is trivial and self-contained
- Documentation-only patches
- Vendored dependency updates, only if:
- fixing a (qualifying) bug or security issue in Buildx
- the patch is small; otherwise, a forked version of the dependency with only the required patches is used
New features do not qualify for a patch release.
## Milestones
Milestones are used to help identify what releases a contribution will be in.
- The `v0.next` milestone collects unblocked items planned for the next 2-3 feature releases but not yet assigned to a specific version milestone.
- The `v0.backlog` milestone gathers all triaged items considered for the long-term (beyond the next 3 feature releases) or currently unfit for a future release due to certain conditions. These items may be blocked and need to be unblocked before progressing.
## Triage process
Triage provides an important way to contribute to an open-source project. This process applies to pull requests as well when they are submitted without a linked issue. Triage helps ensure work items are resolved quickly by:
- Ensuring the issue's intent and purpose are described precisely. This is necessary because it can be difficult for an issue to explain how an end user experiences a problem and what actions they took to arrive at the problem.
- Giving a contributor the information they need before they commit to resolving an issue.
- Lowering the issue count by preventing duplicate issues.
- Streamlining the development process by preventing duplicate discussions.
If you don't have time to code, consider helping with triage. The community will thank you for saving them time by spending some of yours. The same basic process should be applied upon receipt of a new issue.
1. Verify essential information
2. Classify the issue
3. Prioritizing the issue
### Verify essential information
Before advancing the triage process, ensure the issue contains all necessary information to be properly understood and assessed. The required information may vary by issue type, but typically includes the system environment, version numbers, reproduction steps, expected outcomes, and actual results.
- **Exercising Judgment**: Use your best judgment to assess the completeness of the issue description.
- **Communicating Needs**: If the information provided is insufficient, kindly request additional details from the author. Explain that this information is crucial for clarity and resolution of the issue, and apply the `needs/more-info` label to indicate that a response from the author is required.
### Classify the issue
An issue will typically have multiple labels. These are used to help communicate key information about context, requirements, and status. At a minimum, a properly classified issue should have:
- (Required) One or more [`area/*`](#area) labels
- (Required) One [`kind/*`](#kind) label to indicate the type of issue
- (Required if `kind/bug`) A [`priority/*`](#priority) label
When assigning a decision the following labels should be present:
- (Required) One [`status/*`](#status) label to indicate lifecycle status
Additional labels can provide more clarity; a combined example follows this list:
- Zero or more [`needs/*`](#needs) labels to indicate missing items
- Zero or more [`impact/*`](#impact) labels
- One [`exp/*`](#exp) label
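Putting the requirements above together, a confirmed and accepted bug might end up with a label set like this (a hypothetical combination, for illustration only):

```yaml
# Hypothetical label set for a fully classified, accepted bug
labels:
  - area/build
  - kind/bug
  - priority/P1
  - status/accepted
  - impact/changelog
  - exp/intermediate
```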
## Prioritization guidelines for `kind/bug`
When an issue or pull request of `kind/bug` is correctly categorized and attached to a milestone, the labels indicate the urgency with which it should be completed.
**priority/P0**
Fixing this item is the highest priority. A patch release will follow as soon as a patch is available and verified. This level is used exclusively for bugs.
Examples:
- Regression in a critical code path
- Panic in a critical code path
- Corruption in a critical code path or in the rest of the system
- A leaked zero-day critical security vulnerability
**priority/P1**
Items with this label should be fixed with high priority and almost always included in a patch release. Unless waiting for another issue, patch releases should happen within a week. This level is not used for features or enhancements.
Examples:
- Any regression, panic
- Measurable performance regression
- A major bug in a new feature in the latest release
- Incompatibility with upgraded external dependency
**priority/P2**
This is the default priority and is implied in the absence of a `priority/` label. Bugs with this priority should be included in the next feature release but may land in a patch release if they are ready and unlikely to impact other functionality adversely. Non-bug issues with this priority should also be included in the next feature release if they are available and ready.
Examples:
- Confirmed bugs
- Bugs in non-default configurations
- Most enhancements
## Issue lifecycle
```mermaid
flowchart LR
create([New issue]) --> triage
subgraph triage[Triage Loop]
review[Review]
end
subgraph decision[Decision]
accept[Accept]
close[Close]
end
triage -- if accepted --> accept[Assign status, milestone]
triage -- if rejected --> close[Assign status, close issue]
```
### Examples
#### Submitting a bug
To help illustrate the issue lifecycle, let's walk through submitting a potential CI bug that enters a feedback loop and is eventually accepted at P2 priority and placed on the backlog.
```mermaid
flowchart LR
new([New issue])
subgraph triage[Triage]
direction LR
create["Action: Submit issue via Bug form\nLabels: kind/maybe-bug, status/triage"]
style create text-align:left
subgraph review[Review]
direction TB
classify["Action: Maintainer reviews issue, requests more info\nLabels: kind/maybe-bug, status/triage, needs/more-info, area/*"]
style classify text-align:left
update["Action: Author updates issue\nLabels: kind/maybe-bug, status/triage, needs/more-info, area/*"]
style update text-align:left
classify --> update
update --> classify
end
create --> review
end
subgraph decision[Decision]
accept["Action: Maintainer reviews updates, accepts, assigns milestone\nLabels: kind/bug, priority/P2, status/accepted, area/*, impact/*"]
style accept text-align: left
end
new --> triage
triage --> decision
```
## Pull request review process
A thorough and timely review process for pull requests (PRs) is crucial for maintaining the integrity and quality of the project while fostering a collaborative environment.
- **Labeling**: Most labels should be inherited from a linked issue. If no issue is linked, an extended review process may be required.
- **Continuous Integration**: With few exceptions, it is crucial that all Continuous Integration (CI) workflows pass successfully.
- **Draft Status**: Incomplete or long-running PRs should be placed in "Draft" status. They may revert to "Draft" status upon initial review if significant rework is required.
```mermaid
flowchart LR
triage([Triage])
draft[Draft PR]
review[PR Review]
closed{{Close PR}}
merge{{Merge PR}}
subgraph feedback1[Feedback Loop]
draft
end
subgraph feedback2[Feedback Loop]
review
end
triage --> draft
draft --> review
review --> closed
review --> draft
review --> merge
```
## Handling stalled issues and pull requests
Unfortunately, some issues or pull requests can remain inactive for extended periods. To mitigate this, automation is employed to prompt both the author and maintainers, ensuring that all contributions receive appropriate attention.
**For Authors:**
- **Closure of Inactive Items**: If your issue or PR becomes irrelevant or is no longer needed, please close it to help keep the project clean.
- **Prompt Responses**: If additional information is requested, please respond promptly to facilitate progress.
**For Maintainers:**
- **Timely Responses**: Endeavor to address issues and PRs within a reasonable timeframe to keep the community actively engaged.
- **Engagement with Stale Issues**: If an issue becomes stale due to maintainer inaction, re-engage with the author to reassess and revitalize the discussion.
**Stale and Rotten Policy:**
- An issue or PR will be labeled as **`stale`** after 14 calendar days of inactivity. If it remains inactive for another 30 days, it will be labeled as **`rotten`** and closed.
- Authors whose issues or PRs have been closed are welcome to re-open them or create new ones and link to the original.
**Skipping Stale Processing:**
- To prevent an issue or PR from being marked as stale, label it as **`frozen`**.
**Exceptions to Stale Processing:**
- Issues or PRs marked as **`frozen`**.
- Issues or PRs assigned to a milestone.
## Moving to a discussion
Sometimes, an issue or pull request may not be the appropriate medium for what is essentially a discussion. In such cases, the issue or PR will either be converted to a discussion or a new discussion will be created. The original item will then be labeled appropriately (**`kind/discussion`** or **`kind/question`**) and closed.
If you believe this conversion was made in error, please express your concerns in the new discussion thread. If necessary, a reversal to the original issue or PR format can be facilitated.
## Workflow automation
To help expedite common operations, avoid errors, and reduce toil, the project uses some workflow automation. This can include:
- Stale issue or pull request processing
- Auto-labeling actions
- Auto-response actions
- Label carry over from issue to pull request
### Exempting an issue/PR from stale bot processing
The stale item handling is configured in the [repository](link-to-config-file). To exempt an issue or PR from stale processing you can:
- Add the item to a milestone
- Add the `frozen` label to the item
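For example, with the GitHub CLI (the issue number and milestone name are illustrative):

```console
$ gh issue edit 1234 --add-label frozen
$ gh issue edit 1234 --milestone v0.99.0
```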
## Updating dependencies
- **Runtime Dependencies**: Use the latest stable release available when the first Release Candidate (RC) of a new feature release is cut. For patch releases, update to the latest corresponding patch release of the dependency.
- **Other Dependencies**: Always permitted to update to the latest patch release in the development branch. Updates to a new feature release require justification, unless the dependency is outdated. Prefer tagged versions of dependencies unless a specific untagged commit is needed. Go modules should specify the lowest compatible version; there is no requirement to update all dependencies to their latest versions before cutting a new Buildx feature release.
- **Patch Releases**: Vendored dependency updates are considered for patch releases, except in the rare cases specified previously.
- **Security Considerations**: A security scanner report indicating a non-exploitable issue via Buildx does not justify backports.

README.md

@ -1,4 +1,4 @@
# buildx
# Buildx
[![GitHub release](https://img.shields.io/github/release/docker/buildx.svg?style=flat-square)](https://github.com/docker/buildx/releases/latest)
[![PkgGoDev](https://img.shields.io/badge/go.dev-docs-007d9c?style=flat-square&logo=go&logoColor=white)](https://pkg.go.dev/github.com/docker/buildx)
@ -6,80 +6,60 @@
[![Go Report Card](https://goreportcard.com/badge/github.com/docker/buildx?style=flat-square)](https://goreportcard.com/report/github.com/docker/buildx)
[![codecov](https://img.shields.io/codecov/c/github/docker/buildx?logo=codecov&style=flat-square)](https://codecov.io/gh/docker/buildx)
`buildx` is a Docker CLI plugin for extended build capabilities with
Buildx is a Docker CLI plugin for extended build capabilities with
[BuildKit](https://github.com/moby/buildkit).
Key features:
> [!TIP]
> **Key features**
> - Familiar UI from `docker build`
> - Full BuildKit capabilities with container driver
> - Multiple builder instance support
> - Multi-node builds for cross-platform images
> - Compose build support
> - High-level builds with [Bake](https://docs.docker.com/build/bake/)
> - In-container driver support (both Docker and Kubernetes)
- Familiar UI from `docker build`
- Full BuildKit capabilities with container driver
- Multiple builder instance support
- Multi-node builds for cross-platform images
- Compose build support
- High-level build constructs (`bake`)
- In-container driver support (both Docker and Kubernetes)
# Table of Contents
___
- [Installing](#installing)
- [Windows and macOS](#windows-and-macos)
- [Linux packages](#linux-packages)
- [Manual download](#manual-download)
- [Dockerfile](#dockerfile)
- [Set buildx as the default builder](#set-buildx-as-the-default-builder)
- [Building](#building)
- [Getting started](#getting-started)
- [Building with buildx](#building-with-buildx)
- [Building with Buildx](#building-with-buildx)
- [Working with builder instances](#working-with-builder-instances)
- [Building multi-platform images](#building-multi-platform-images)
- [Reference](docs/reference/buildx.md)
- [`buildx bake`](docs/reference/buildx_bake.md)
- [`buildx build`](docs/reference/buildx_build.md)
- [`buildx create`](docs/reference/buildx_create.md)
- [`buildx du`](docs/reference/buildx_du.md)
- [`buildx imagetools`](docs/reference/buildx_imagetools.md)
- [`buildx imagetools create`](docs/reference/buildx_imagetools_create.md)
- [`buildx imagetools inspect`](docs/reference/buildx_imagetools_inspect.md)
- [`buildx inspect`](docs/reference/buildx_inspect.md)
- [`buildx ls`](docs/reference/buildx_ls.md)
- [`buildx prune`](docs/reference/buildx_prune.md)
- [`buildx rm`](docs/reference/buildx_rm.md)
- [`buildx stop`](docs/reference/buildx_stop.md)
- [`buildx use`](docs/reference/buildx_use.md)
- [`buildx version`](docs/reference/buildx_version.md)
- [Contributing](#contributing)
For more information on how to use Buildx, see
[Docker Build docs](https://docs.docker.com/build/).
## Installing
# Installing
Using Buildx with Docker requires Docker engine 19.03 or newer.
Using `buildx` with Docker requires Docker engine 19.03 or newer.
> **Warning**
>
> [!WARNING]
> Using an incompatible version of Docker may result in unexpected behavior,
> and will likely cause issues, especially when using Buildx builders with more
> recent versions of BuildKit.
## Windows and macOS
### Windows and macOS
Docker Buildx is included in [Docker Desktop](https://docs.docker.com/desktop/)
for Windows and macOS.
## Linux packages
### Linux packages
Docker Engine package repositories contain Docker Buildx packages when installed according to the
[Docker Engine install documentation](https://docs.docker.com/engine/install/). Install the
`docker-buildx-plugin` package to install the Buildx plugin.
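For example, on a Debian or Ubuntu system where the Docker repositories are already configured (a minimal sketch; package manager commands vary by distribution):

```console
$ sudo apt-get update
$ sudo apt-get install docker-buildx-plugin
```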
## Manual download
### Manual download
> **Important**
>
> This section is for unattended installation of the buildx component. These
> [!IMPORTANT]
> This section is for unattended installation of the Buildx component. These
> instructions are mostly suitable for testing purposes. We do not recommend
> installing buildx using manual download in production environments as they
> installing Buildx using manual download in production environments as they
> will not be updated automatically with security updates.
>
> On Windows and macOS, we recommend that you install [Docker Desktop](https://docs.docker.com/desktop/)
@ -89,11 +69,11 @@ You can also download the latest binary from the [GitHub releases page](https://
Rename the relevant binary and copy it to the destination matching your OS:
| OS | Binary name | Destination folder |
| -------- | -------------------- | -----------------------------------------|
| Linux | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| macOS | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| Windows | `docker-buildx.exe` | `%USERPROFILE%\.docker\cli-plugins` |
| OS | Binary name | Destination folder |
|---------|---------------------|-------------------------------------|
| Linux | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| macOS | `docker-buildx` | `$HOME/.docker/cli-plugins` |
| Windows | `docker-buildx.exe` | `%USERPROFILE%\.docker\cli-plugins` |
Or copy it into one of these folders for installing it system-wide.
@ -107,14 +87,13 @@ On Windows:
* `C:\ProgramData\Docker\cli-plugins`
* `C:\Program Files\Docker\cli-plugins`
> **Note**
>
> [!NOTE]
> On Unix environments, it may also be necessary to make it executable with `chmod +x`:
> ```shell
> $ chmod +x ~/.docker/cli-plugins/docker-buildx
> ```
## Dockerfile
### Dockerfile
Here is how to install and use Buildx inside a Dockerfile through the
[`docker/buildx-bin`](https://hub.docker.com/r/docker/buildx-bin) image:
@ -126,15 +105,7 @@ COPY --from=docker/buildx-bin /buildx /usr/libexec/docker/cli-plugins/docker-bui
RUN docker buildx version
```
# Set buildx as the default builder
Running the command [`docker buildx install`](docs/reference/buildx_install.md)
sets up docker builder command as an alias to `docker buildx build`. This
results in the ability to have `docker build` use the current buildx builder.
To remove this alias, run [`docker buildx uninstall`](docs/reference/buildx_uninstall.md).
# Building
## Building
```console
# Buildx 0.6+
@ -152,19 +123,19 @@ $ git clone https://github.com/docker/buildx.git && cd buildx
$ make install
```
# Getting started
## Getting started
## Building with buildx
### Building with Buildx
Buildx is a Docker CLI plugin that extends the `docker build` command with the
full support of the features provided by [Moby BuildKit](https://github.com/moby/buildkit)
full support of the features provided by [Moby BuildKit](https://docs.docker.com/build/buildkit/)
builder toolkit. It provides the same user experience as `docker build` with
many new features like creating scoped builder instances and building against
multiple nodes concurrently.
After installation, buildx can be accessed through the `docker buildx` command
with Docker 19.03. `docker buildx build` is the command for starting a new
build. With Docker versions older than 19.03 buildx binary can be called
After installation, Buildx can be accessed through the `docker buildx` command
with Docker 19.03. `docker buildx build` is the command for starting a new
build. With Docker versions older than 19.03, the Buildx binary can be called
directly to access the `docker buildx` subcommands.
```console
@ -183,20 +154,25 @@ are not yet available for regular `docker build` like building manifest lists,
distributed caching, and exporting build results to OCI image tarballs.
Buildx is flexible and can be run in different configurations that are exposed
through various "drivers". Each driver defines how and where a build should
run, and have different feature sets.
through various [drivers](https://docs.docker.com/build/builders/drivers/).
Each driver defines how and where a build should run, and each has a different
feature set.
We currently support the following drivers:
- The `docker` driver ([guide](docs/manuals/drivers/docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver ([guide](docs/manuals/drivers/docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver ([guide](docs/manuals/drivers/kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver ([guide](docs/manuals/drivers/remote.md))
- The `docker` driver ([manual](https://docs.docker.com/build/builders/drivers/docker/))
- The `docker-container` driver ([manual](https://docs.docker.com/build/builders/drivers/docker-container/))
- The `kubernetes` driver ([manual](https://docs.docker.com/build/drivers/kubernetes/))
- The `remote` driver ([manual](https://docs.docker.com/build/builders/drivers/remote/))
For more information on drivers, see the [drivers guide](docs/manuals/drivers/index.md).
For more information, see the [builders](https://docs.docker.com/build/builders/)
and [drivers](https://docs.docker.com/build/builders/drivers/) guide.
## Working with builder instances
> [!NOTE]
> For more information, see [Docker Build docs](https://docs.docker.com/build/concepts/overview/).
By default, buildx will initially use the `docker` driver if it is supported,
### Working with builder instances
By default, Buildx will initially use the `docker` driver if it is supported,
providing a very similar user experience to the native `docker build`. Note that
you must use a local shared daemon to build your applications.
@ -215,7 +191,7 @@ while creating the new builder. After creating a new instance, you can manage it
lifecycle using the [`docker buildx inspect`](docs/reference/buildx_inspect.md),
[`docker buildx stop`](docs/reference/buildx_stop.md), and
[`docker buildx rm`](docs/reference/buildx_rm.md) commands. To list all
available builders, use [`buildx ls`](docs/reference/buildx_ls.md). After
available builders, use [`docker buildx ls`](docs/reference/buildx_ls.md). After
creating a new builder you can also append new nodes to it.
To switch between different builders, use [`docker buildx use <name>`](docs/reference/buildx_use.md).
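For example, a typical builder lifecycle might look like this (the builder name is illustrative):

```console
$ docker buildx create --name mybuilder --driver docker-container
$ docker buildx inspect mybuilder --bootstrap
$ docker buildx use mybuilder
$ docker buildx ls
```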
@ -226,10 +202,13 @@ Docker also features a [`docker context`](https://docs.docker.com/engine/referen
command that can be used for giving names for remote Docker API endpoints.
Buildx integrates with `docker context` so that all of your contexts
automatically get a default builder instance. While creating a new builder
instance or when adding a node to it you can also set the context name as the
instance or when adding a node to it, you can also set the context name as the
target.
## Building multi-platform images
> [!NOTE]
> For more information, see [Builders docs](https://docs.docker.com/build/builders/).
### Building multi-platform images
BuildKit is designed to work well for building for multiple platforms and not
only for the architecture and operating system that the user invoking the build
@ -242,8 +221,8 @@ platform for the build output, (for example, `linux/amd64`, `linux/arm64`, or
When the current builder instance is backed by the `docker-container` or
`kubernetes` driver, you can specify multiple platforms together. In this case,
it builds a manifest list which contains images for all specified architectures.
When you use this image in [`docker run`](https://docs.docker.com/engine/reference/commandline/run/)
or [`docker service`](https://docs.docker.com/engine/reference/commandline/service/),
When you use this image in [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/)
or [`docker service`](https://docs.docker.com/reference/cli/docker/service/),
Docker picks the correct image based on the node's platform.
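For example, the following sketch builds and pushes a manifest list for two platforms (the registry and image name are placeholders):

```console
$ docker buildx build --platform linux/amd64,linux/arm64 -t <registry>/<image>:latest --push .
```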
You can build multi-platform images using three different strategies that are
@ -307,11 +286,10 @@ COPY --from=build /log /log
You can also use [`tonistiigi/xx`](https://github.com/tonistiigi/xx) Dockerfile
cross-compilation helpers for more advanced use-cases.
## High-level build options
> [!NOTE]
> For more information, see [Multi-platform builds docs](https://docs.docker.com/build/building/multi-platform/).
See [High-level builds with Bake](https://docs.docker.com/build/bake/) for more details.
# Contributing
## Contributing
Want to contribute to Buildx? Awesome! You can find information about
contributing to this project in the [CONTRIBUTING.md](/.github/CONTRIBUTING.md)

File diff suppressed because it is too large

File diff suppressed because it is too large

bake/compose.go

@ -5,15 +5,19 @@ import (
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"github.com/compose-spec/compose-go/v2/consts"
"github.com/compose-spec/compose-go/v2/dotenv"
"github.com/compose-spec/compose-go/v2/loader"
composeschema "github.com/compose-spec/compose-go/v2/schema"
composetypes "github.com/compose-spec/compose-go/v2/types"
"github.com/docker/buildx/util/buildflags"
dockeropts "github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/pkg/errors"
"gopkg.in/yaml.v3"
"go.yaml.in/yaml/v3"
)
func ParseComposeFiles(fs []File) (*Config, error) {
@ -32,17 +36,7 @@ func ParseComposeFiles(fs []File) (*Config, error) {
}
func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Config, error) {
if envs == nil {
envs = make(map[string]string)
}
cfg, err := loader.LoadWithContext(context.Background(), composetypes.ConfigDetails{
ConfigFiles: cfgs,
Environment: envs,
}, func(options *loader.Options) {
options.SetProjectName("bake", false)
options.SkipNormalization = true
options.Profiles = []string{"*"}
})
cfg, err := loadComposeFiles(cfgs, envs)
if err != nil {
return nil, err
}
@ -55,7 +49,6 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
g := &Group{Name: "default"}
for _, s := range cfg.Services {
s := s
if s.Build == nil {
continue
}
@ -83,10 +76,7 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
var additionalContexts map[string]string
if s.Build.AdditionalContexts != nil {
additionalContexts = map[string]string{}
for k, v := range s.Build.AdditionalContexts {
additionalContexts[k] = v
}
additionalContexts = composeToBuildkitNamedContexts(s.Build.AdditionalContexts)
}
var shmSize *string
@ -96,6 +86,12 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
shmSize = &shmSizeStr
}
var networkModeP *string
if s.Build.Network != "" {
networkMode := s.Build.Network
networkModeP = &networkMode
}
var ulimits []string
if s.Build.Ulimits != nil {
for n, u := range s.Build.Ulimits {
@ -107,7 +103,24 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
}
}
var secrets []string
extraHosts := map[string]*string{}
if s.Build.ExtraHosts != nil {
for k, v := range s.Build.ExtraHosts {
vv := strings.Join(v, ",")
extraHosts[k] = &vv
}
}
var ssh []*buildflags.SSH
for _, bkey := range s.Build.SSH {
sshkey := composeToBuildkitSSH(bkey)
ssh = append(ssh, sshkey)
}
slices.SortFunc(ssh, func(a, b *buildflags.SSH) int {
return a.Less(b)
})
var secrets []*buildflags.Secret
for _, bs := range s.Build.Secrets {
secret, err := composeToBuildkitSecret(bs, cfg.Secrets[bs.Source])
if err != nil {
@ -119,10 +132,41 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
// compose does not support nil values for labels
labels := map[string]*string{}
for k, v := range s.Build.Labels {
v := v
labels[k] = &v
}
cacheFrom, err := buildflags.ParseCacheEntry(s.Build.CacheFrom)
if err != nil {
return nil, err
}
cacheTo, err := buildflags.ParseCacheEntry(s.Build.CacheTo)
if err != nil {
return nil, err
}
var inAttests []string
if s.Build.SBOM != "" {
inAttests = append(inAttests, buildflags.CanonicalizeAttest("sbom", s.Build.SBOM))
}
if s.Build.Provenance != "" {
inAttests = append(inAttests, buildflags.CanonicalizeAttest("provenance", s.Build.Provenance))
}
attests, err := buildflags.ParseAttests(inAttests)
if err != nil {
return nil, err
}
var noCache *bool
if s.Build.NoCache {
noCache = &s.Build.NoCache
}
var pull *bool
if s.Build.Pull {
pull = &s.Build.Pull
}
g.Targets = append(g.Targets, targetName)
t := &Target{
Name: targetName,
@ -139,12 +183,18 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
val, ok := cfg.Environment[val]
return val, ok
})),
CacheFrom: s.Build.CacheFrom,
CacheTo: s.Build.CacheTo,
NetworkMode: &s.Build.Network,
CacheFrom: cacheFrom,
CacheTo: cacheTo,
NetworkMode: networkModeP,
Platforms: s.Build.Platforms,
SSH: ssh,
Secrets: secrets,
ShmSize: shmSize,
Ulimits: ulimits,
ExtraHosts: extraHosts,
Attest: attests,
NoCache: noCache,
Pull: pull,
}
if err = t.composeExtTarget(s.Build.Extensions); err != nil {
return nil, err
@ -159,16 +209,79 @@ func ParseCompose(cfgs []composetypes.ConfigFile, envs map[string]string) (*Conf
c.Targets = append(c.Targets, t)
}
c.Groups = append(c.Groups, g)
}
return &c, nil
}
func loadComposeFiles(cfgs []composetypes.ConfigFile, envs map[string]string, options ...func(*loader.Options)) (*composetypes.Project, error) {
if envs == nil {
envs = make(map[string]string)
}
cfgDetails := composetypes.ConfigDetails{
ConfigFiles: cfgs,
Environment: envs,
}
raw, err := loader.LoadModelWithContext(context.Background(), cfgDetails, append([]func(*loader.Options){func(opts *loader.Options) {
projectName := "bake"
if v, ok := envs[consts.ComposeProjectName]; ok && v != "" {
projectName = v
}
opts.SetProjectName(projectName, false)
opts.SkipNormalization = true
opts.SkipValidation = true
}}, options...)...)
if err != nil {
return nil, err
}
filtered := make(map[string]any)
for _, key := range []string{"services", "secrets"} {
if key == "services" {
if services, ok := raw["services"].(map[string]any); ok {
filteredServices := make(map[string]any)
for svcName, svc := range services {
if svc == nil {
filteredServices[svcName] = map[string]any{}
} else if svcMap, ok := svc.(map[string]any); ok {
filteredService := make(map[string]any)
for _, svcField := range []string{"image", "build", "environment", "env_file"} {
if val, ok := svcMap[svcField]; ok {
filteredService[svcField] = val
}
}
filteredServices[svcName] = filteredService
}
}
filtered["services"] = filteredServices
}
} else if v, ok := raw[key]; ok {
filtered[key] = v
}
}
if len(filtered) == 0 {
return nil, errors.New("empty compose file")
}
if err := composeschema.Validate(filtered); err != nil {
return nil, err
}
return loader.ModelToProject(filtered, loader.ToOptions(&cfgDetails, append([]func(*loader.Options){func(options *loader.Options) {
options.SkipNormalization = true
options.Profiles = []string{"*"}
}}, options...)), composetypes.ConfigDetails{
ConfigFiles: cfgs,
Environment: envs,
})
}
func validateComposeFile(dt []byte, fn string) (bool, error) {
envs, err := composeEnv()
if err != nil {
return true, err
return false, err
}
fnl := strings.ToLower(fn)
if strings.HasSuffix(fnl, ".yml") || strings.HasSuffix(fnl, ".yaml") {
@ -182,16 +295,7 @@ func validateComposeFile(dt []byte, fn string) (bool, error) {
}
func validateCompose(dt []byte, envs map[string]string) error {
_, err := loader.Load(composetypes.ConfigDetails{
ConfigFiles: []composetypes.ConfigFile{
{
Content: dt,
},
},
Environment: envs,
}, func(options *loader.Options) {
options.SetProjectName("bake", false)
options.SkipNormalization = true
_, err := loadComposeFiles([]composetypes.ConfigFile{{Content: dt}}, envs, func(options *loader.Options) {
// consistency is checked later in ParseCompose to ensure multiple
// compose files can be merged together
options.SkipConsistencyCheck = true
@ -220,10 +324,13 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
return nil, err
}
if _, err = os.Stat(ef); os.IsNotExist(err) {
return curenv, nil
} else if err != nil {
if st, err := os.Stat(ef); err != nil {
if os.IsNotExist(err) {
return curenv, nil
}
return nil, err
} else if st.IsDir() {
return curenv, nil
}
dt, err := os.ReadFile(ef)
@ -231,7 +338,10 @@ func loadDotEnv(curenv map[string]string, workingDir string) (map[string]string,
return nil, err
}
envs, err := dotenv.UnmarshalBytesWithLookup(dt, nil)
envs, err := dotenv.UnmarshalBytesWithLookup(dt, func(k string) (string, bool) {
v, ok := curenv[k]
return v, ok
})
if err != nil {
return nil, err
}
@ -275,13 +385,15 @@ type xbake struct {
NoCacheFilter stringArray `yaml:"no-cache-filter,omitempty"`
Contexts stringMap `yaml:"contexts,omitempty"`
// don't forget to update documentation if you add a new field:
// docs/manuals/bake/compose-file.md#extension-field-with-x-bake
// https://github.com/docker/docs/blob/main/content/build/bake/compose-file.md#extension-field-with-x-bake
}
type stringMap map[string]string
type stringArray []string
type (
stringMap map[string]string
stringArray []string
)
func (sa *stringArray) UnmarshalYAML(unmarshal func(interface{}) error) error {
func (sa *stringArray) UnmarshalYAML(unmarshal func(any) error) error {
var multi []string
err := unmarshal(&multi)
if err != nil {
@ -298,7 +410,7 @@ func (sa *stringArray) UnmarshalYAML(unmarshal func(interface{}) error) error {
// composeExtTarget converts Compose build extension x-bake to bake Target
// https://github.com/compose-spec/compose-spec/blob/master/spec.md#extension
func (t *Target) composeExtTarget(exts map[string]interface{}) error {
func (t *Target) composeExtTarget(exts map[string]any) error {
var xb xbake
ext, ok := exts["x-bake"]
@ -315,22 +427,45 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
t.Tags = dedupSlice(append(t.Tags, xb.Tags...))
}
if len(xb.CacheFrom) > 0 {
t.CacheFrom = dedupSlice(append(t.CacheFrom, xb.CacheFrom...))
cacheFrom, err := buildflags.ParseCacheEntry(xb.CacheFrom)
if err != nil {
return err
}
t.CacheFrom = t.CacheFrom.Merge(cacheFrom)
}
if len(xb.CacheTo) > 0 {
t.CacheTo = dedupSlice(append(t.CacheTo, xb.CacheTo...))
cacheTo, err := buildflags.ParseCacheEntry(xb.CacheTo)
if err != nil {
return err
}
t.CacheTo = t.CacheTo.Merge(cacheTo)
}
if len(xb.Secrets) > 0 {
t.Secrets = dedupSlice(append(t.Secrets, xb.Secrets...))
secrets, err := parseArrValue[buildflags.Secret](xb.Secrets)
if err != nil {
return err
}
t.Secrets = t.Secrets.Merge(secrets)
}
if len(xb.SSH) > 0 {
t.SSH = dedupSlice(append(t.SSH, xb.SSH...))
ssh, err := parseArrValue[buildflags.SSH](xb.SSH)
if err != nil {
return err
}
t.SSH = t.SSH.Merge(ssh)
slices.SortFunc(t.SSH, func(a, b *buildflags.SSH) int {
return a.Less(b)
})
}
if len(xb.Platforms) > 0 {
t.Platforms = dedupSlice(append(t.Platforms, xb.Platforms...))
}
if len(xb.Outputs) > 0 {
t.Outputs = dedupSlice(append(t.Outputs, xb.Outputs...))
outputs, err := parseArrValue[buildflags.ExportEntry](xb.Outputs)
if err != nil {
return err
}
t.Outputs = t.Outputs.Merge(outputs)
}
if xb.Pull != nil {
t.Pull = xb.Pull
@ -342,7 +477,7 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
t.NoCacheFilter = dedupSlice(append(t.NoCacheFilter, xb.NoCacheFilter...))
}
if len(xb.Contexts) > 0 {
t.Contexts = dedupMap(t.Contexts, xb.Contexts)
t.Contexts = dedupMap(t.Contexts, composeToBuildkitNamedContexts(xb.Contexts))
}
return nil
@ -350,21 +485,43 @@ func (t *Target) composeExtTarget(exts map[string]interface{}) error {
// composeToBuildkitSecret converts secret from compose format to buildkit's
// csv format.
func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (string, error) {
func composeToBuildkitSecret(inp composetypes.ServiceSecretConfig, psecret composetypes.SecretConfig) (*buildflags.Secret, error) {
if psecret.External {
return "", errors.Errorf("unsupported external secret %s", psecret.Name)
return nil, errors.Errorf("unsupported external secret %s", psecret.Name)
}
var bkattrs []string
secret := &buildflags.Secret{}
if inp.Source != "" {
bkattrs = append(bkattrs, "id="+inp.Source)
secret.ID = inp.Source
}
if psecret.File != "" {
bkattrs = append(bkattrs, "src="+psecret.File)
secret.FilePath = psecret.File
}
if psecret.Environment != "" {
bkattrs = append(bkattrs, "env="+psecret.Environment)
secret.Env = psecret.Environment
}
return strings.Join(bkattrs, ","), nil
return secret, nil
}
// composeToBuildkitSSH converts an SSH key from compose format to buildx's
// SSH flag format.
func composeToBuildkitSSH(sshKey composetypes.SSHKey) *buildflags.SSH {
bkssh := &buildflags.SSH{ID: sshKey.ID}
if sshKey.Path != "" {
bkssh.Paths = []string{sshKey.Path}
}
return bkssh
}
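// composeToBuildkitNamedContexts normalizes compose named contexts: values
// prefixed with "service:" or "target:" are rewritten to reference the
// sanitized bake target name, e.g. "service:base.1" becomes "target:base_1".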
func composeToBuildkitNamedContexts(m map[string]string) map[string]string {
out := make(map[string]string, len(m))
for k, v := range m {
if strings.HasPrefix(v, "service:") || strings.HasPrefix(v, "target:") {
if parts := strings.SplitN(v, ":", 2); len(parts) == 2 {
v = "target:" + sanitizeTargetName(parts[1])
}
}
out[k] = v
}
return out
}

bake/compose_test.go

@ -12,7 +12,7 @@ import (
)
func TestParseCompose(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
db:
build: ./db
@ -32,6 +32,13 @@ services:
- type=local,src=path/to/cache
cache_to:
- type=local,dest=path/to/cache
extra_hosts:
- "somehost:162.242.195.82"
- "somehost:162.242.195.83"
- "myhostv6:::1"
ssh:
- key=/path/to/key
- default
secrets:
- token
- aws
@ -71,13 +78,15 @@ secrets:
require.Equal(t, "Dockerfile-alternate", *c.Targets[1].Dockerfile)
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, ptrstr("123"), c.Targets[1].Args["buildno"])
require.Equal(t, []string{"type=local,src=path/to/cache"}, c.Targets[1].CacheFrom)
require.Equal(t, []string{"type=local,dest=path/to/cache"}, c.Targets[1].CacheTo)
require.Equal(t, []string{"type=local,src=path/to/cache"}, stringify(c.Targets[1].CacheFrom))
require.Equal(t, []string{"type=local,dest=path/to/cache"}, stringify(c.Targets[1].CacheTo))
require.Equal(t, map[string]*string{"myhostv6": ptrstr("::1"), "somehost": ptrstr("162.242.195.82,162.242.195.83")}, c.Targets[1].ExtraHosts)
require.Equal(t, "none", *c.Targets[1].NetworkMode)
require.Equal(t, []string{"default", "key=/path/to/key"}, stringify(c.Targets[1].SSH))
require.Equal(t, []string{
"id=token,env=ENV_TOKEN",
"id=aws,src=/root/.aws/credentials",
}, c.Targets[1].Secrets)
"id=token,env=ENV_TOKEN",
}, stringify(c.Targets[1].Secrets))
require.Equal(t, "webapp2", c.Targets[2].Name)
require.Equal(t, "dir", *c.Targets[2].Context)
@ -85,7 +94,7 @@ secrets:
}
func TestNoBuildOutOfTreeService(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
external:
image: "verycooldb:1337"
@ -99,7 +108,7 @@ services:
}
func TestParseComposeTarget(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
db:
build:
@ -125,7 +134,7 @@ services:
}
func TestComposeBuildWithoutContext(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
db:
build:
@ -149,7 +158,7 @@ services:
}
func TestBuildArgEnvCompose(t *testing.T) {
var dt = []byte(`
dt := []byte(`
version: "3.8"
services:
example:
@ -175,7 +184,7 @@ services:
}
func TestInconsistentComposeFile(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
webapp:
entrypoint: echo 1
@ -183,10 +192,11 @@ services:
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.Error(t, err)
require.ErrorContains(t, err, `has neither an image nor a build context specified`)
}
func TestAdvancedNetwork(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
db:
networks:
@ -211,7 +221,7 @@ networks:
}
func TestTags(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
example:
image: example
@ -229,7 +239,7 @@ services:
}
func TestDependsOnList(t *testing.T) {
var dt = []byte(`
dt := []byte(`
version: "3.8"
services:
@ -265,7 +275,7 @@ networks:
}
func TestComposeExt(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
addon:
image: ct-addon:bar
@ -278,6 +288,8 @@ services:
- user/app:cache
tags:
- ct-addon:baz
ssh:
key: /path/to/key
args:
CT_ECR: foo
CT_TAG: bar
@ -287,6 +299,9 @@ services:
tags:
- ct-addon:foo
- ct-addon:alp
ssh:
- default
- other=path/to/otherkey
platforms:
- linux/amd64
- linux/arm64
@ -327,22 +342,23 @@ services:
require.Equal(t, map[string]*string{"CT_ECR": ptrstr("foo"), "CT_TAG": ptrstr("bar")}, c.Targets[0].Args)
require.Equal(t, []string{"ct-addon:baz", "ct-addon:foo", "ct-addon:alp"}, c.Targets[0].Tags)
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
require.Equal(t, []string{"type=local,src=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheFrom))
require.Equal(t, []string{"type=local,dest=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheTo))
require.Equal(t, []string{"default", "key=/path/to/key", "other=path/to/otherkey"}, stringify(c.Targets[0].SSH))
require.Equal(t, newBool(true), c.Targets[0].Pull)
require.Equal(t, map[string]string{"alpine": "docker-image://alpine:3.13"}, c.Targets[0].Contexts)
require.Equal(t, []string{"ct-fake-aws:bar"}, c.Targets[1].Tags)
require.Equal(t, []string{"id=mysecret,src=/local/secret", "id=mysecret2,src=/local/secret2"}, c.Targets[1].Secrets)
require.Equal(t, []string{"default"}, c.Targets[1].SSH)
require.Equal(t, []string{"id=mysecret,src=/local/secret", "id=mysecret2,src=/local/secret2"}, stringify(c.Targets[1].Secrets))
require.Equal(t, []string{"default"}, stringify(c.Targets[1].SSH))
require.Equal(t, []string{"linux/arm64"}, c.Targets[1].Platforms)
require.Equal(t, []string{"type=docker"}, c.Targets[1].Outputs)
require.Equal(t, []string{"type=docker"}, stringify(c.Targets[1].Outputs))
require.Equal(t, newBool(true), c.Targets[1].NoCache)
require.Equal(t, ptrstr("128MiB"), c.Targets[1].ShmSize)
require.Equal(t, []string{"nofile=1024:1024"}, c.Targets[1].Ulimits)
}
func TestComposeExtDedup(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
webapp:
image: app:bar
@ -353,6 +369,8 @@ services:
- user/app:cache
tags:
- ct-addon:foo
ssh:
- default
x-bake:
tags:
- ct-addon:foo
@ -362,14 +380,18 @@ services:
- type=local,src=path/to/cache
cache-to:
- type=local,dest=path/to/cache
ssh:
- default
- key=path/to/key
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
require.Equal(t, []string{"ct-addon:foo", "ct-addon:baz"}, c.Targets[0].Tags)
require.Equal(t, []string{"user/app:cache", "type=local,src=path/to/cache"}, c.Targets[0].CacheFrom)
require.Equal(t, []string{"user/app:cache", "type=local,dest=path/to/cache"}, c.Targets[0].CacheTo)
require.Equal(t, []string{"type=local,src=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheFrom))
require.Equal(t, []string{"type=local,dest=path/to/cache", "user/app:cache"}, stringify(c.Targets[0].CacheTo))
require.Equal(t, []string{"default", "key=path/to/key"}, stringify(c.Targets[0].SSH))
}
func TestEnv(t *testing.T) {
@ -380,7 +402,7 @@ func TestEnv(t *testing.T) {
_, err = envf.WriteString("FOO=bsdf -csdf\n")
require.NoError(t, err)
var dt = []byte(`
dt := []byte(`
services:
scratch:
build:
@ -408,7 +430,7 @@ func TestDotEnv(t *testing.T) {
err := os.WriteFile(filepath.Join(tmpdir, ".env"), []byte("FOO=bar"), 0644)
require.NoError(t, err)
var dt = []byte(`
dt := []byte(`
services:
scratch:
build:
@ -427,7 +449,7 @@ services:
}
func TestPorts(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
foo:
build:
@ -447,6 +469,21 @@ services:
require.NoError(t, err)
}
func TestPlatforms(t *testing.T) {
dt := []byte(`
services:
foo:
build:
context: .
platforms:
- linux/amd64
- linux/arm64
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[0].Platforms)
}
func newBool(val bool) *bool {
b := val
return &b
@ -487,7 +524,6 @@ func TestServiceName(t *testing.T) {
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.svc, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: []byte(`
services:
@ -558,7 +594,6 @@ services:
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
_, err := ParseCompose([]composetypes.ConfigFile{{Content: tt.dt}}, nil)
if tt.wantErr {
@ -586,7 +621,7 @@ services:
foo:
`),
isCompose: true,
wantErr: true,
wantErr: false,
},
{
name: "build",
@ -632,9 +667,54 @@ target "default" {
isCompose: false,
wantErr: false,
},
{
name: "json",
fn: "docker-bake.json",
dt: []byte(`
{
"group": [
{
"targets": [
"my-service"
]
}
],
"target": [
{
"context": ".",
"dockerfile": "Dockerfile"
}
]
}
`),
isCompose: false,
wantErr: false,
},
{
name: "json unknown ext",
fn: "docker-bake.foo",
dt: []byte(`
{
"group": [
{
"targets": [
"my-service"
]
}
],
"target": [
{
"context": ".",
"dockerfile": "Dockerfile"
}
]
}
`),
isCompose: false,
wantErr: true,
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
isCompose, err := validateComposeFile(tt.dt, tt.fn)
assert.Equal(t, tt.isCompose, isCompose)
@ -648,7 +728,7 @@ target "default" {
}
func TestComposeNullArgs(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
scratch:
build:
@ -664,7 +744,7 @@ services:
}
func TestDependsOn(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
foo:
build:
@ -695,7 +775,7 @@ services:
`), 0644)
require.NoError(t, err)
var dt = []byte(`
dt := []byte(`
include:
- compose-foo.yml
@ -724,7 +804,7 @@ services:
}
func TestDevelop(t *testing.T) {
var dt = []byte(`
dt := []byte(`
services:
scratch:
build:
@ -742,6 +822,293 @@ services:
require.NoError(t, err)
}
func TestCgroup(t *testing.T) {
dt := []byte(`
services:
scratch:
build:
context: ./webapp
cgroup: private
`)
_, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
}
func TestProjectName(t *testing.T) {
dt := []byte(`
services:
scratch:
build:
context: ./webapp
args:
PROJECT_NAME: ${COMPOSE_PROJECT_NAME}
`)
t.Run("default", func(t *testing.T) {
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Len(t, c.Targets, 1)
require.Len(t, c.Targets[0].Args, 1)
require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("bake")}, c.Targets[0].Args)
})
t.Run("env", func(t *testing.T) {
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, map[string]string{"COMPOSE_PROJECT_NAME": "foo"})
require.NoError(t, err)
require.Len(t, c.Targets, 1)
require.Len(t, c.Targets[0].Args, 1)
require.Equal(t, map[string]*string{"PROJECT_NAME": ptrstr("foo")}, c.Targets[0].Args)
})
}
func TestServiceContext(t *testing.T) {
dt := []byte(`
services:
base:
build:
dockerfile: baseapp.Dockerfile
command: ./entrypoint.sh
webapp:
build:
context: ./dir
additional_contexts:
base: service:base
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
sort.Strings(c.Groups[0].Targets)
require.Equal(t, []string{"base", "webapp"}, c.Groups[0].Targets)
require.Equal(t, 2, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "webapp", c.Targets[1].Name)
require.Equal(t, map[string]string{"base": "target:base"}, c.Targets[1].Contexts)
}
func TestServiceContextDot(t *testing.T) {
dt := []byte(`
services:
base.1:
build:
dockerfile: baseapp.Dockerfile
command: ./entrypoint.sh
foo.1:
build:
dockerfile: fooapp.Dockerfile
command: ./entrypoint.sh
webapp:
build:
context: ./dir
additional_contexts:
base: service:base.1
x-bake:
contexts:
foo: target:foo.1
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
sort.Strings(c.Groups[0].Targets)
require.Equal(t, []string{"base_1", "foo_1", "webapp"}, c.Groups[0].Targets)
require.Equal(t, 3, len(c.Targets))
sort.Slice(c.Targets, func(i, j int) bool {
return c.Targets[i].Name < c.Targets[j].Name
})
require.Equal(t, "webapp", c.Targets[2].Name)
require.Equal(t, map[string]string{"base": "target:base_1", "foo": "target:foo_1"}, c.Targets[2].Contexts)
}
func TestDotEnvDir(t *testing.T) {
tmpdir := t.TempDir()
require.NoError(t, os.Mkdir(filepath.Join(tmpdir, ".env"), 0755))
dt := []byte(`
services:
foo:
build:
context: .
`)
chdir(t, tmpdir)
_, err := ParseComposeFiles([]File{{Name: "compose.yml", Data: dt}})
require.NoError(t, err)
}
func TestDotEnvEvaluate(t *testing.T) {
tmpdir := t.TempDir()
err := os.WriteFile(filepath.Join(tmpdir, ".env"), []byte(`
TEST_VALUE=${SYSTEM_VALUE:?system_value_not_set}
FOO_VALUE=${TEST_VALUE:?test_value_not_set}
`), 0644)
require.NoError(t, err)
dt := []byte(`
services:
test:
build:
args:
TEST_VALUE:
FOO_VALUE:
`)
t.Setenv("SYSTEM_VALUE", "abc")
chdir(t, tmpdir)
c, err := ParseComposeFiles([]File{{Name: "compose.yml", Data: dt}})
require.NoError(t, err)
require.Equal(t, map[string]*string{"TEST_VALUE": ptrstr("abc"), "FOO_VALUE": ptrstr("abc")}, c.Targets[0].Args)
}
func TestUnknownField(t *testing.T) {
tmpdir := t.TempDir()
dt := []byte(`
services:
webapp:
bar: baz
build:
context: .
foo:
- bar.baz
`)
chdir(t, tmpdir)
_, err := ParseComposeFiles([]File{{Name: "compose.yml", Data: dt}})
require.NoError(t, err)
}
func TestUnknownBuildField(t *testing.T) {
tmpdir := t.TempDir()
dt := []byte(`
services:
webapp:
build:
context: .
foo: bar
`)
chdir(t, tmpdir)
_, err := ParseComposeFiles([]File{{Name: "compose.yml", Data: dt}})
require.Error(t, err)
require.ErrorContains(t, err, `additional properties 'foo' not allowed`)
}
func TestEmptyComposeFile(t *testing.T) {
tmpdir := t.TempDir()
chdir(t, tmpdir)
_, err := ParseComposeFiles([]File{{Name: "compose.yml", Data: []byte(``)}})
require.Error(t, err)
require.ErrorContains(t, err, `empty compose file`) // https://github.com/compose-spec/compose-go/blob/a42e7579d813e64c0c1f598a666358bc0c0a0eb4/loader/loader.go#L542
}
func TestParseComposeAttests(t *testing.T) {
dt := []byte(`
services:
app:
build:
context: .
sbom: true
provenance: mode=max
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
target := c.Targets[0]
require.Equal(t, "app", target.Name)
require.NotNil(t, target.Attest)
require.Len(t, target.Attest, 2)
attestMap := target.Attest.ToMap()
require.Contains(t, attestMap, "sbom")
require.Contains(t, attestMap, "provenance")
// Check the actual content - sbom=true should result in disabled=false (not disabled)
require.Equal(t, "type=sbom", *attestMap["sbom"])
require.Equal(t, "type=provenance,mode=max", *attestMap["provenance"])
}
func TestParseComposeAttestsDisabled(t *testing.T) {
dt := []byte(`
services:
app:
build:
context: .
sbom: false
provenance: false
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
target := c.Targets[0]
require.Equal(t, "app", target.Name)
require.NotNil(t, target.Attest)
require.Len(t, target.Attest, 2)
attestMap := target.Attest.ToMap()
require.Contains(t, attestMap, "sbom")
require.Contains(t, attestMap, "provenance")
// When disabled=true, the value should be nil
require.Nil(t, attestMap["sbom"])
require.Nil(t, attestMap["provenance"])
}
func TestParseComposePull(t *testing.T) {
dt := []byte(`
services:
app:
build:
context: .
pull: true
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
target := c.Targets[0]
require.Equal(t, "app", target.Name)
require.Equal(t, true, *target.Pull)
}
func TestParseComposeNoCache(t *testing.T) {
dt := []byte(`
services:
app:
build:
context: .
no_cache: true
`)
c, err := ParseCompose([]composetypes.ConfigFile{{Content: dt}}, nil)
require.NoError(t, err)
require.Equal(t, 1, len(c.Targets))
target := c.Targets[0]
require.Equal(t, "app", target.Name)
require.Equal(t, true, *target.NoCache)
}
// chdir changes the current working directory to the named directory,
// and then restore the original working directory at the end of the test.
func chdir(t *testing.T, dir string) {

bake/entitlements.go (new file)

@ -0,0 +1,659 @@
package bake
import (
"bufio"
"cmp"
"context"
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"syscall"
"github.com/containerd/console"
"github.com/docker/buildx/build"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/util/entitlements"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/tonistiigi/go-csvvalue"
)
type EntitlementKey string
const (
EntitlementKeyNetworkHost EntitlementKey = "network.host"
EntitlementKeySecurityInsecure EntitlementKey = "security.insecure"
EntitlementKeyDevice EntitlementKey = "device"
EntitlementKeyFSRead EntitlementKey = "fs.read"
EntitlementKeyFSWrite EntitlementKey = "fs.write"
EntitlementKeyFS EntitlementKey = "fs"
EntitlementKeyImagePush EntitlementKey = "image.push"
EntitlementKeyImageLoad EntitlementKey = "image.load"
EntitlementKeyImage EntitlementKey = "image"
EntitlementKeySSH EntitlementKey = "ssh"
)
type EntitlementConf struct {
NetworkHost bool
SecurityInsecure bool
Devices *EntitlementsDevicesConf
FSRead []string
FSWrite []string
ImagePush []string
ImageLoad []string
SSH bool
}
type EntitlementsDevicesConf struct {
All bool
Devices map[string]struct{}
}
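// ParseEntitlements parses a list of entitlement flag values into an
// EntitlementConf. As an illustrative example (inputs assumed):
//
//	ParseEntitlements([]string{"network.host", "fs.read=/src", "device"})
//
// returns a conf with NetworkHost set, "/src" appended to FSRead, and all
// devices allowed.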
func ParseEntitlements(in []string) (EntitlementConf, error) {
var conf EntitlementConf
for _, e := range in {
switch e {
case string(EntitlementKeyNetworkHost):
conf.NetworkHost = true
case string(EntitlementKeySecurityInsecure):
conf.SecurityInsecure = true
case string(EntitlementKeySSH):
conf.SSH = true
default:
k, v, _ := strings.Cut(e, "=")
switch k {
case string(EntitlementKeyDevice):
if v == "" {
conf.Devices = &EntitlementsDevicesConf{All: true}
continue
}
fields, err := csvvalue.Fields(v, nil)
if err != nil {
return EntitlementConf{}, errors.Wrapf(err, "failed to parse device entitlement %q", v)
}
if conf.Devices == nil {
conf.Devices = &EntitlementsDevicesConf{}
}
if conf.Devices.Devices == nil {
conf.Devices.Devices = make(map[string]struct{}, 0)
}
conf.Devices.Devices[fields[0]] = struct{}{}
case string(EntitlementKeyFSRead):
conf.FSRead = append(conf.FSRead, v)
case string(EntitlementKeyFSWrite):
conf.FSWrite = append(conf.FSWrite, v)
case string(EntitlementKeyFS):
conf.FSRead = append(conf.FSRead, v)
conf.FSWrite = append(conf.FSWrite, v)
case string(EntitlementKeyImagePush):
conf.ImagePush = append(conf.ImagePush, v)
case string(EntitlementKeyImageLoad):
conf.ImageLoad = append(conf.ImageLoad, v)
case string(EntitlementKeyImage):
conf.ImagePush = append(conf.ImagePush, v)
conf.ImageLoad = append(conf.ImageLoad, v)
default:
return conf, errors.Errorf("unknown entitlement key %q", k)
}
}
}
return conf, nil
}
func (c EntitlementConf) Validate(m map[string]build.Options) (EntitlementConf, error) {
var expected EntitlementConf
for _, v := range m {
if err := c.check(v, &expected); err != nil {
return EntitlementConf{}, err
}
}
return expected, nil
}
func (c EntitlementConf) check(bo build.Options, expected *EntitlementConf) error {
for _, e := range bo.Allow {
k, rest, _ := strings.Cut(e, "=")
switch k {
case entitlements.EntitlementDevice.String():
if rest == "" {
if c.Devices == nil || !c.Devices.All {
expected.Devices = &EntitlementsDevicesConf{All: true}
}
continue
}
fields, err := csvvalue.Fields(rest, nil)
if err != nil {
return errors.Wrapf(err, "failed to parse device entitlement %q", rest)
}
if expected.Devices == nil {
expected.Devices = &EntitlementsDevicesConf{}
}
if expected.Devices.Devices == nil {
expected.Devices.Devices = make(map[string]struct{}, 0)
}
expected.Devices.Devices[fields[0]] = struct{}{}
}
switch e {
case entitlements.EntitlementNetworkHost.String():
if !c.NetworkHost {
expected.NetworkHost = true
}
case entitlements.EntitlementSecurityInsecure.String():
if !c.SecurityInsecure {
expected.SecurityInsecure = true
}
}
}
rwPaths := map[string]struct{}{}
roPaths := map[string]struct{}{}
for _, p := range collectLocalPaths(bo.Inputs) {
roPaths[p] = struct{}{}
}
for _, p := range bo.ExportsLocalPathsTemporary {
rwPaths[p] = struct{}{}
}
for _, ce := range bo.CacheTo {
if ce.Type == "local" {
if dest, ok := ce.Attrs["dest"]; ok {
rwPaths[dest] = struct{}{}
}
}
}
for _, ci := range bo.CacheFrom {
if ci.Type == "local" {
if src, ok := ci.Attrs["src"]; ok {
roPaths[src] = struct{}{}
}
}
}
for _, secret := range bo.SecretSpecs {
if secret.FilePath != "" {
roPaths[secret.FilePath] = struct{}{}
}
}
for _, ssh := range bo.SSHSpecs {
for _, p := range ssh.Paths {
roPaths[p] = struct{}{}
}
if len(ssh.Paths) == 0 {
if !c.SSH {
expected.SSH = true
}
}
}
var err error
expected.FSRead, err = findMissingPaths(c.FSRead, roPaths)
if err != nil {
return err
}
expected.FSWrite, err = findMissingPaths(c.FSWrite, rwPaths)
if err != nil {
return err
}
return nil
}
func (c EntitlementConf) Prompt(ctx context.Context, isRemote bool, out io.Writer) error {
var term bool
if _, err := console.ConsoleFromFile(os.Stdin); err == nil {
term = true
}
var msgs []string
var flags []string
// these warnings are currently disabled to give users time to update
var msgsFS []string
var flagsFS []string
if c.NetworkHost {
msgs = append(msgs, " - Running build containers that can access host network")
flags = append(flags, string(EntitlementKeyNetworkHost))
}
if c.SecurityInsecure {
msgs = append(msgs, " - Running privileged containers that can make system changes")
flags = append(flags, string(EntitlementKeySecurityInsecure))
}
if c.Devices != nil {
if c.Devices.All {
msgs = append(msgs, " - Access to CDI devices")
flags = append(flags, string(EntitlementKeyDevice))
} else {
for d := range c.Devices.Devices {
msgs = append(msgs, fmt.Sprintf(" - Access to device %s", d))
flags = append(flags, string(EntitlementKeyDevice)+"="+d)
}
}
}
if c.SSH {
msgsFS = append(msgsFS, " - Forwarding default SSH agent socket")
flagsFS = append(flagsFS, string(EntitlementKeySSH))
}
roPaths, rwPaths, commonPaths := groupSamePaths(c.FSRead, c.FSWrite)
wd, err := os.Getwd()
if err != nil {
return errors.Wrap(err, "failed to get current working directory")
}
wd, err = filepath.EvalSymlinks(wd)
if err != nil {
return errors.Wrap(err, "failed to evaluate working directory")
}
roPaths = toRelativePaths(roPaths, wd)
rwPaths = toRelativePaths(rwPaths, wd)
commonPaths = toRelativePaths(commonPaths, wd)
if len(commonPaths) > 0 {
for _, p := range commonPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Read and write access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFS)+"="+p)
}
}
if len(roPaths) > 0 {
for _, p := range roPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Read access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFSRead)+"="+p)
}
}
if len(rwPaths) > 0 {
for _, p := range rwPaths {
msgsFS = append(msgsFS, fmt.Sprintf(" - Write access to path %s", p))
flagsFS = append(flagsFS, string(EntitlementKeyFSWrite)+"="+p)
}
}
if len(msgs) == 0 && len(msgsFS) == 0 {
return nil
}
fmt.Fprintf(out, "Your build is requesting privileges for following possibly insecure capabilities:\n\n")
for _, m := range slices.Concat(msgs, msgsFS) {
fmt.Fprintf(out, "%s\n", m)
}
for i, f := range flags {
flags[i] = "--allow=" + f
}
for i, f := range flagsFS {
flagsFS[i] = "--allow=" + f
}
if term {
fmt.Fprintf(out, "\nIn order to not see this message in the future pass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
} else {
fmt.Fprintf(out, "\nPass %q to grant requested privileges.\n", strings.Join(slices.Concat(flags, flagsFS), " "))
}
args := slices.Clone(os.Args)
if v, ok := os.LookupEnv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND"); ok && v != "" {
args[0] = v
}
idx := slices.Index(args, "bake")
if idx != -1 {
fmt.Fprintf(out, "\nYour full command with requested privileges:\n\n")
fmt.Fprintf(out, "%s %s %s\n\n", strings.Join(args[:idx+1], " "), strings.Join(slices.Concat(flags, flagsFS), " "), strings.Join(args[idx+1:], " "))
}
fsEntitlementsEnabled := true
if isRemote {
if v, ok := os.LookupEnv("BAKE_ALLOW_REMOTE_FS_ACCESS"); ok {
vv, err := strconv.ParseBool(v)
if err != nil {
return errors.Wrapf(err, "failed to parse BAKE_ALLOW_REMOTE_FS_ACCESS value %q", v)
}
fsEntitlementsEnabled = !vv
}
}
v, fsEntitlementsSet := os.LookupEnv("BUILDX_BAKE_ENTITLEMENTS_FS")
if fsEntitlementsSet {
vv, err := strconv.ParseBool(v)
if err != nil {
return errors.Wrapf(err, "failed to parse BUILDX_BAKE_ENTITLEMENTS_FS value %q", v)
}
fsEntitlementsEnabled = vv
}
if !fsEntitlementsEnabled && len(msgs) == 0 {
return nil
}
if fsEntitlementsEnabled && !fsEntitlementsSet && len(msgsFS) != 0 {
fmt.Fprintf(out, "To disable filesystem entitlements checks, you can set BUILDX_BAKE_ENTITLEMENTS_FS=0 .\n\n")
}
if term {
fmt.Fprintf(out, "Do you want to grant requested privileges and continue? [y/N] ")
reader := bufio.NewReader(os.Stdin)
answerCh := make(chan string, 1)
go func() {
answer, _, _ := reader.ReadLine()
answerCh <- string(answer)
close(answerCh)
}()
select {
case <-ctx.Done():
case answer := <-answerCh:
if strings.ToLower(string(answer)) == "y" {
return nil
}
}
}
return errors.Errorf("additional privileges requested")
}
func isParentOrEqualPath(p, parent string) bool {
if p == parent || parent == "/" {
return true
}
if strings.HasPrefix(p, filepath.Clean(parent+string(filepath.Separator))) {
return true
}
return false
}
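// findMissingPaths returns the entries of paths that are not equal to or
// nested under any path in set, after resolving symlinks on both sides. It
// returns nil when set contains "*" or when every path is covered.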
func findMissingPaths(set []string, paths map[string]struct{}) ([]string, error) {
set, allowAny, err := evaluatePaths(set)
if err != nil {
return nil, err
} else if allowAny {
return nil, nil
}
paths, err = evaluateToExistingPaths(paths)
if err != nil {
return nil, err
}
paths, err = dedupPaths(paths)
if err != nil {
return nil, err
}
out := make([]string, 0, len(paths))
loop0:
for p := range paths {
for _, c := range set {
if isParentOrEqualPath(p, c) {
continue loop0
}
}
out = append(out, p)
}
if len(out) == 0 {
return nil, nil
}
slices.Sort(out)
return out, nil
}
func dedupPaths(in map[string]struct{}) (map[string]struct{}, error) {
arr := make([]string, 0, len(in))
for p := range in {
arr = append(arr, filepath.Clean(p))
}
slices.SortFunc(arr, func(a, b string) int {
return cmp.Compare(len(a), len(b))
})
m := make(map[string]struct{}, len(arr))
loop0:
for _, p := range arr {
for parent := range m {
if strings.HasPrefix(p, parent+string(filepath.Separator)) {
continue loop0
}
}
m[p] = struct{}{}
}
return m, nil
}
func toRelativePaths(in []string, wd string) []string {
out := make([]string, 0, len(in))
for _, p := range in {
rel, err := filepath.Rel(wd, p)
if err == nil {
// allow up to one level of ".." in the path
if !strings.HasPrefix(rel, ".."+string(filepath.Separator)+"..") {
out = append(out, rel)
continue
}
}
out = append(out, p)
}
return out
}
func groupSamePaths(in1, in2 []string) ([]string, []string, []string) {
if in1 == nil || in2 == nil {
return in1, in2, nil
}
slices.Sort(in1)
slices.Sort(in2)
common := []string{}
i, j := 0, 0
for i < len(in1) && j < len(in2) {
switch {
case in1[i] == in2[j]:
common = append(common, in1[i])
i++
j++
case in1[i] < in2[j]:
i++
default:
j++
}
}
in1 = removeCommonPaths(in1, common)
in2 = removeCommonPaths(in2, common)
return in1, in2, common
}
func removeCommonPaths(in, common []string) []string {
filtered := make([]string, 0, len(in))
commonIndex := 0
for _, path := range in {
if commonIndex < len(common) && path == common[commonIndex] {
commonIndex++
continue
}
filtered = append(filtered, path)
}
return filtered
}
func evaluatePaths(in []string) ([]string, bool, error) {
out := make([]string, 0, len(in))
allowAny := false
for _, p := range in {
if p == "*" {
allowAny = true
continue
}
v, err := filepath.Abs(p)
if err != nil {
logrus.Warnf("failed to evaluate entitlement path %q: %v", p, err)
continue
}
v, rest, err := evaluateToExistingPath(v)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
}
v, err = osutil.GetLongPathName(v)
if err != nil {
return nil, false, errors.Wrapf(err, "failed to evaluate path %q", p)
}
if rest != "" {
v = filepath.Join(v, rest)
}
out = append(out, v)
}
return out, allowAny, nil
}
func evaluateToExistingPaths(in map[string]struct{}) (map[string]struct{}, error) {
m := make(map[string]struct{}, len(in))
for p := range in {
v, _, err := evaluateToExistingPath(p)
if err != nil {
return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
}
v, err = osutil.GetLongPathName(v)
if err != nil {
return nil, errors.Wrapf(err, "failed to evaluate path %q", p)
}
m[v] = struct{}{}
}
return m, nil
}
func evaluateToExistingPath(in string) (string, string, error) {
in, err := filepath.Abs(in)
if err != nil {
return "", "", err
}
volLen := volumeNameLen(in)
pathSeparator := string(os.PathSeparator)
if volLen < len(in) && os.IsPathSeparator(in[volLen]) {
volLen++
}
vol := in[:volLen]
dest := vol
linksWalked := 0
var end int
for start := volLen; start < len(in); start = end {
for start < len(in) && os.IsPathSeparator(in[start]) {
start++
}
end = start
for end < len(in) && !os.IsPathSeparator(in[end]) {
end++
}
if end == start {
break
} else if in[start:end] == "." {
continue
} else if in[start:end] == ".." {
var r int
for r = len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
break
}
}
if r < volLen || dest[r+1:] == ".." {
if len(dest) > volLen {
dest += pathSeparator
}
dest += ".."
} else {
dest = dest[:r]
}
continue
}
if len(dest) > volumeNameLen(dest) && !os.IsPathSeparator(dest[len(dest)-1]) {
dest += pathSeparator
}
dest += in[start:end]
fi, err := os.Lstat(dest)
if err != nil {
// If the component doesn't exist, return the last valid path
if os.IsNotExist(err) {
for r := len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
return dest[:r], in[start:], nil
}
}
return vol, in[start:], nil
}
return "", "", err
}
if fi.Mode()&fs.ModeSymlink == 0 {
if !fi.Mode().IsDir() && end < len(in) {
return "", "", syscall.ENOTDIR
}
continue
}
linksWalked++
if linksWalked > 255 {
return "", "", errors.New("too many symlinks")
}
link, err := os.Readlink(dest)
if err != nil {
return "", "", err
}
in = link + in[end:]
v := volumeNameLen(link)
if v > 0 {
if v < len(link) && os.IsPathSeparator(link[v]) {
v++
}
vol = link[:v]
dest = vol
end = len(vol)
} else if len(link) > 0 && os.IsPathSeparator(link[0]) {
dest = link[:1]
end = 1
vol = link[:1]
volLen = 1
} else {
var r int
for r = len(dest) - 1; r >= volLen; r-- {
if os.IsPathSeparator(dest[r]) {
break
}
}
if r < volLen {
dest = vol
} else {
dest = dest[:r]
}
end = 0
}
}
return filepath.Clean(dest), "", nil
}
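// Editor's note (not part of this diff): evaluateToExistingPath resolves a
// path component by component, following symlinks (capped at 255, like
// filepath.EvalSymlinks), and stops at the first component that does not
// exist, returning the longest existing prefix plus the unresolved rest.
// Hypothetical example, assuming only /tmp/dir exists on disk:
//
//	evaluateToExistingPath("/tmp/dir/missing/file")
//	// -> "/tmp/dir", "missing/file", nil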
func volumeNameLen(s string) int {
return len(filepath.VolumeName(s))
}

bake/entitlements_test.go

@ -0,0 +1,486 @@
package bake
import (
"fmt"
"os"
"path/filepath"
"slices"
"testing"
"github.com/docker/buildx/build"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/util/entitlements"
"github.com/stretchr/testify/require"
)
func TestEvaluateToExistingPath(t *testing.T) {
tempDir, err := osutil.GetLongPathName(t.TempDir())
require.NoError(t, err)
// Setup temporary directory structure for testing
existingFile := filepath.Join(tempDir, "existing_file")
require.NoError(t, os.WriteFile(existingFile, []byte("test"), 0644))
existingDir := filepath.Join(tempDir, "existing_dir")
require.NoError(t, os.Mkdir(existingDir, 0755))
symlinkToFile := filepath.Join(tempDir, "symlink_to_file")
require.NoError(t, os.Symlink(existingFile, symlinkToFile))
symlinkToDir := filepath.Join(tempDir, "symlink_to_dir")
require.NoError(t, os.Symlink(existingDir, symlinkToDir))
nonexistentPath := filepath.Join(tempDir, "nonexistent", "path", "file.txt")
tests := []struct {
name string
input string
expected string
expectErr bool
}{
{
name: "Existing file",
input: existingFile,
expected: existingFile,
expectErr: false,
},
{
name: "Existing directory",
input: existingDir,
expected: existingDir,
expectErr: false,
},
{
name: "Symlink to file",
input: symlinkToFile,
expected: existingFile,
expectErr: false,
},
{
name: "Symlink to directory",
input: symlinkToDir,
expected: existingDir,
expectErr: false,
},
{
name: "Non-existent path",
input: nonexistentPath,
expected: tempDir,
expectErr: false,
},
{
name: "Non-existent intermediate path",
input: filepath.Join(tempDir, "nonexistent", "file.txt"),
expected: tempDir,
expectErr: false,
},
{
name: "Root path",
input: "/",
expected: func() string {
root, _ := filepath.Abs("/")
return root
}(),
expectErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result, _, err := evaluateToExistingPath(tt.input)
if tt.expectErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, tt.expected, result)
}
})
}
}
func TestDedupePaths(t *testing.T) {
wd := osutil.GetWd()
tcases := []struct {
in map[string]struct{}
out map[string]struct{}
}{
{
in: map[string]struct{}{
"/a/b/c": {},
"/a/b/d": {},
"/a/b/e": {},
},
out: map[string]struct{}{
"/a/b/c": {},
"/a/b/d": {},
"/a/b/e": {},
},
},
{
in: map[string]struct{}{
"/a/b/c": {},
"/a/b/c/d": {},
"/a/b/c/d/e": {},
"/a/b/../b/c": {},
},
out: map[string]struct{}{
"/a/b/c": {},
},
},
{
in: map[string]struct{}{
filepath.Join(wd, "a/b/c"): {},
filepath.Join(wd, "../aa"): {},
filepath.Join(wd, "a/b"): {},
filepath.Join(wd, "a/b/d"): {},
filepath.Join(wd, "../aa/b"): {},
filepath.Join(wd, "../../bb"): {},
},
out: map[string]struct{}{
"a/b": {},
"../aa": {},
filepath.Join(wd, "../../bb"): {},
},
},
}
for i, tc := range tcases {
t.Run(fmt.Sprintf("case%d", i), func(t *testing.T) {
out, err := dedupPaths(tc.in)
if err != nil {
require.NoError(t, err)
}
// convert to relative paths, as that is what is shown to the user
arr := make([]string, 0, len(out))
for k := range out {
arr = append(arr, k)
}
require.NoError(t, err)
arr = toRelativePaths(arr, wd)
m := make(map[string]struct{})
for _, v := range arr {
m[filepath.ToSlash(v)] = struct{}{}
}
o := make(map[string]struct{}, len(tc.out))
for k := range tc.out {
o[filepath.ToSlash(k)] = struct{}{}
}
require.Equal(t, o, m)
})
}
}
func TestValidateEntitlements(t *testing.T) {
dir1 := t.TempDir()
dir2 := t.TempDir()
// the paths returned by entitlements validation will have symlinks resolved
expDir1, err := filepath.EvalSymlinks(dir1)
require.NoError(t, err)
expDir2, err := filepath.EvalSymlinks(dir2)
require.NoError(t, err)
escapeLink := filepath.Join(dir1, "escape_link")
require.NoError(t, os.Symlink("../../aa", escapeLink))
wd, err := os.Getwd()
require.NoError(t, err)
expWd, err := filepath.EvalSymlinks(wd)
require.NoError(t, err)
tcases := []struct {
name string
conf EntitlementConf
opt build.Options
expected EntitlementConf
}{
{
name: "No entitlements",
opt: build.Options{
Inputs: build.Inputs{
ContextState: &llb.State{},
},
},
},
{
name: "NetworkHostMissing",
opt: build.Options{
Allow: []string{
entitlements.EntitlementNetworkHost.String(),
},
},
expected: EntitlementConf{
NetworkHost: true,
FSRead: []string{expWd},
},
},
{
name: "NetworkHostSet",
conf: EntitlementConf{
NetworkHost: true,
},
opt: build.Options{
Allow: []string{
entitlements.EntitlementNetworkHost.String(),
},
},
expected: EntitlementConf{
FSRead: []string{expWd},
},
},
{
name: "SecurityAndNetworkHostMissing",
opt: build.Options{
Allow: []string{
entitlements.EntitlementNetworkHost.String(),
entitlements.EntitlementSecurityInsecure.String(),
},
},
expected: EntitlementConf{
NetworkHost: true,
SecurityInsecure: true,
FSRead: []string{expWd},
},
},
{
name: "SecurityMissingAndNetworkHostSet",
conf: EntitlementConf{
NetworkHost: true,
},
opt: build.Options{
Allow: []string{
entitlements.EntitlementNetworkHost.String(),
entitlements.EntitlementSecurityInsecure.String(),
},
},
expected: EntitlementConf{
SecurityInsecure: true,
FSRead: []string{expWd},
},
},
{
name: "SSHMissing",
opt: build.Options{
SSHSpecs: []*buildflags.SSH{
{
ID: "test",
},
},
},
expected: EntitlementConf{
SSH: true,
FSRead: []string{expWd},
},
},
{
name: "ExportLocal",
opt: build.Options{
ExportsLocalPathsTemporary: []string{
dir1,
filepath.Join(dir1, "subdir"),
dir2,
},
},
expected: EntitlementConf{
FSWrite: func() []string {
exp := []string{expDir1, expDir2}
slices.Sort(exp)
return exp
}(),
FSRead: []string{expWd},
},
},
{
name: "SecretFromSubFile",
opt: build.Options{
SecretSpecs: []*buildflags.Secret{
{
FilePath: filepath.Join(dir1, "subfile"),
},
},
},
conf: EntitlementConf{
FSRead: []string{wd, dir1},
},
},
{
name: "SecretFromEscapeLink",
opt: build.Options{
SecretSpecs: []*buildflags.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{wd, dir1},
},
expected: EntitlementConf{
FSRead: []string{filepath.Join(expDir1, "../..")},
},
},
{
name: "SecretFromEscapeLinkAllowRoot",
opt: build.Options{
SecretSpecs: []*buildflags.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{"/"},
},
expected: EntitlementConf{
FSRead: func() []string {
// on windows root (/) is only allowed if it is the same volume as wd
if filepath.VolumeName(wd) == filepath.VolumeName(escapeLink) {
return nil
}
// if not, then escapeLink is not allowed
exp, _, err := evaluateToExistingPath(escapeLink)
require.NoError(t, err)
exp, err = filepath.EvalSymlinks(exp)
require.NoError(t, err)
return []string{exp}
}(),
},
},
{
name: "SecretFromEscapeLinkAllowAny",
opt: build.Options{
SecretSpecs: []*buildflags.Secret{
{
FilePath: escapeLink,
},
},
},
conf: EntitlementConf{
FSRead: []string{"*"},
},
expected: EntitlementConf{},
},
{
name: "NonExistingAllowedPathSubpath",
opt: build.Options{
ExportsLocalPathsTemporary: []string{
dir1,
},
},
conf: EntitlementConf{
FSRead: []string{wd},
FSWrite: []string{filepath.Join(dir1, "not/exists")},
},
expected: EntitlementConf{
FSWrite: []string{expDir1}, // dir1 is still needed as only subpath was allowed
},
},
{
name: "NonExistingAllowedPathMatches",
opt: build.Options{
ExportsLocalPathsTemporary: []string{
filepath.Join(dir1, "not/exists"),
},
},
conf: EntitlementConf{
FSRead: []string{wd},
FSWrite: []string{filepath.Join(dir1, "not/exists")},
},
expected: EntitlementConf{
FSWrite: []string{expDir1}, // dir1 is still needed as build also needs to write not/exists directory
},
},
{
name: "NonExistingBuildPath",
opt: build.Options{
ExportsLocalPathsTemporary: []string{
filepath.Join(dir1, "not/exists"),
},
},
conf: EntitlementConf{
FSRead: []string{wd},
FSWrite: []string{dir1},
},
},
}
for _, tc := range tcases {
t.Run(tc.name, func(t *testing.T) {
expected, err := tc.conf.Validate(map[string]build.Options{"test": tc.opt})
require.NoError(t, err)
require.Equal(t, tc.expected, expected)
})
}
}
func TestGroupSamePaths(t *testing.T) {
tests := []struct {
name string
in1 []string
in2 []string
expected1 []string
expected2 []string
expectedC []string
}{
{
name: "All common paths",
in1: []string{"/path/a", "/path/b", "/path/c"},
in2: []string{"/path/a", "/path/b", "/path/c"},
expected1: []string{},
expected2: []string{},
expectedC: []string{"/path/a", "/path/b", "/path/c"},
},
{
name: "No common paths",
in1: []string{"/path/a", "/path/b"},
in2: []string{"/path/c", "/path/d"},
expected1: []string{"/path/a", "/path/b"},
expected2: []string{"/path/c", "/path/d"},
expectedC: []string{},
},
{
name: "Some common paths",
in1: []string{"/path/a", "/path/b", "/path/c"},
in2: []string{"/path/b", "/path/c", "/path/d"},
expected1: []string{"/path/a"},
expected2: []string{"/path/d"},
expectedC: []string{"/path/b", "/path/c"},
},
{
name: "Empty inputs",
in1: []string{},
in2: []string{},
expected1: []string{},
expected2: []string{},
expectedC: []string{},
},
{
name: "One empty input",
in1: []string{"/path/a", "/path/b"},
in2: []string{},
expected1: []string{"/path/a", "/path/b"},
expected2: []string{},
expectedC: []string{},
},
{
name: "Unsorted inputs with common paths",
in1: []string{"/path/c", "/path/a", "/path/b"},
in2: []string{"/path/b", "/path/c", "/path/a"},
expected1: []string{},
expected2: []string{},
expectedC: []string{"/path/a", "/path/b", "/path/c"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
out1, out2, common := groupSamePaths(tt.in1, tt.in2)
require.Equal(t, tt.expected1, out1, "in1 should match expected1")
require.Equal(t, tt.expected2, out2, "in2 should match expected2")
require.Equal(t, tt.expectedC, common, "common should match expectedC")
})
}
}


@ -56,7 +56,7 @@ func formatHCLError(err error, files []File) error {
break
}
}
- src := errdefs.Source{
+ src := &errdefs.Source{
Info: &pb.SourceInfo{
Filename: d.Subject.Filename,
Data: dt,
@ -72,7 +72,7 @@ func formatHCLError(err error, files []File) error {
func toErrRange(in *hcl.Range) *pb.Range {
return &pb.Range{
- Start: pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
- End: pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
+ Start: &pb.Position{Line: int32(in.Start.Line), Character: int32(in.Start.Column)},
+ End: &pb.Position{Line: int32(in.End.Line), Character: int32(in.End.Column)},
}
}

File diff suppressed because it is too large

bake/hclparser/LICENSE

@ -0,0 +1,355 @@
Copyright (c) 2014 HashiCorp, Inc.
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor’s Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party’s
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients’ rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipients’
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party’s negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party’s ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.


@ -0,0 +1,348 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"fmt"
"reflect"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/gocty"
)
// DecodeOptions allows customizing sections of the decoding process.
type DecodeOptions struct {
ImpliedType func(gv any) (cty.Type, error)
Convert func(in cty.Value, want cty.Type) (cty.Value, error)
}
func (o DecodeOptions) DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
o = o.withDefaults()
rv := reflect.ValueOf(val)
if rv.Kind() != reflect.Ptr {
panic(fmt.Sprintf("target value must be a pointer, not %s", rv.Type().String()))
}
return o.decodeBodyToValue(body, ctx, rv.Elem())
}
// DecodeBody extracts the configuration within the given body into the given
// value. This value must be a non-nil pointer to either a struct or
// a map, where in the former case the configuration will be decoded using
// struct tags and in the latter case only attributes are allowed and their
// values are decoded into the map.
//
// The given EvalContext is used to resolve any variables or functions in
// expressions encountered while decoding. This may be nil to require only
// constant values, for simple applications that do not support variables or
// functions.
//
// The returned diagnostics should be inspected with its HasErrors method to
// determine if the populated value is valid and complete. If error diagnostics
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
func DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
return DecodeOptions{}.DecodeBody(body, ctx, val)
}
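// Editor's sketch (not part of this diff): typical DecodeBody usage with
// hypothetical types and input; hclparse is hashicorp/hcl/v2/hclparse.
//
//	type Service struct {
//		Name    string `hcl:"name,label"`
//		Command string `hcl:"command"`
//	}
//	type Config struct {
//		Services []Service `hcl:"service,block"`
//	}
//
//	f, diags := hclparse.NewParser().ParseHCL(src, "config.hcl")
//	if !diags.HasErrors() {
//		var c Config
//		diags = DecodeBody(f.Body, nil, &c) // nil EvalContext: constant expressions only
//	}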
func (o DecodeOptions) decodeBodyToValue(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics {
et := val.Type()
switch et.Kind() {
case reflect.Struct:
return o.decodeBodyToStruct(body, ctx, val)
case reflect.Map:
return o.decodeBodyToMap(body, ctx, val)
default:
panic(fmt.Sprintf("target value must be pointer to struct or map, not %s", et.String()))
}
}
func (o DecodeOptions) decodeBodyToStruct(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics {
schema, partial := ImpliedBodySchema(val.Interface())
var content *hcl.BodyContent
var leftovers hcl.Body
var diags hcl.Diagnostics
if partial {
content, leftovers, diags = body.PartialContent(schema)
} else {
content, diags = body.Content(schema)
}
if content == nil {
return diags
}
tags := getFieldTags(val.Type())
if tags.Body != nil {
fieldIdx := *tags.Body
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
switch {
case bodyType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(body))
default:
diags = append(diags, o.decodeBodyToValue(body, ctx, fieldV)...)
}
}
if tags.Remain != nil {
fieldIdx := *tags.Remain
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
switch {
case bodyType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(leftovers))
case attrsType.AssignableTo(field.Type):
attrs, attrsDiags := leftovers.JustAttributes()
if len(attrsDiags) > 0 {
diags = append(diags, attrsDiags...)
}
fieldV.Set(reflect.ValueOf(attrs))
default:
diags = append(diags, o.decodeBodyToValue(leftovers, ctx, fieldV)...)
}
}
for name, fieldIdx := range tags.Attributes {
attr := content.Attributes[name]
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
if attr == nil {
if !exprType.AssignableTo(field.Type) {
continue
}
// As a special case, if the target is of type hcl.Expression then
// we'll assign an actual expression that evaluates to a cty null,
// so the caller can deal with it within the cty realm rather
// than within the Go realm.
synthExpr := hcl.StaticExpr(cty.NullVal(cty.DynamicPseudoType), body.MissingItemRange())
fieldV.Set(reflect.ValueOf(synthExpr))
continue
}
switch {
case attrType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(attr))
case exprType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(attr.Expr))
default:
diags = append(diags, o.DecodeExpression(
attr.Expr, ctx, fieldV.Addr().Interface(),
)...)
}
}
blocksByType := content.Blocks.ByType()
for typeName, fieldIdx := range tags.Blocks {
blocks := blocksByType[typeName]
field := val.Type().Field(fieldIdx)
ty := field.Type
isSlice := false
isPtr := false
if ty.Kind() == reflect.Slice {
isSlice = true
ty = ty.Elem()
}
if ty.Kind() == reflect.Ptr {
isPtr = true
ty = ty.Elem()
}
if len(blocks) > 1 && !isSlice {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Duplicate %s block", typeName),
Detail: fmt.Sprintf(
"Only one %s block is allowed. Another was defined at %s.",
typeName, blocks[0].DefRange.String(),
),
Subject: &blocks[1].DefRange,
})
continue
}
if len(blocks) == 0 {
if isSlice || isPtr {
if val.Field(fieldIdx).IsNil() {
val.Field(fieldIdx).Set(reflect.Zero(field.Type))
}
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Missing %s block", typeName),
Detail: fmt.Sprintf("A %s block is required.", typeName),
Subject: body.MissingItemRange().Ptr(),
})
}
continue
}
switch {
case isSlice:
elemType := ty
if isPtr {
elemType = reflect.PointerTo(ty)
}
sli := val.Field(fieldIdx)
if sli.IsNil() {
sli = reflect.MakeSlice(reflect.SliceOf(elemType), len(blocks), len(blocks))
}
for i, block := range blocks {
if isPtr {
if i >= sli.Len() {
sli = reflect.Append(sli, reflect.New(ty))
}
v := sli.Index(i)
if v.IsNil() {
v = reflect.New(ty)
}
diags = append(diags, o.decodeBlockToValue(block, ctx, v.Elem())...)
sli.Index(i).Set(v)
} else {
if i >= sli.Len() {
sli = reflect.Append(sli, reflect.Indirect(reflect.New(ty)))
}
diags = append(diags, o.decodeBlockToValue(block, ctx, sli.Index(i))...)
}
}
if sli.Len() > len(blocks) {
sli.SetLen(len(blocks))
}
val.Field(fieldIdx).Set(sli)
default:
block := blocks[0]
if isPtr {
v := val.Field(fieldIdx)
if v.IsNil() {
v = reflect.New(ty)
}
diags = append(diags, o.decodeBlockToValue(block, ctx, v.Elem())...)
val.Field(fieldIdx).Set(v)
} else {
diags = append(diags, o.decodeBlockToValue(block, ctx, val.Field(fieldIdx))...)
}
}
}
return diags
}
func (o DecodeOptions) decodeBodyToMap(body hcl.Body, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics {
attrs, diags := body.JustAttributes()
if attrs == nil {
return diags
}
mv := reflect.MakeMap(v.Type())
for k, attr := range attrs {
switch {
case attrType.AssignableTo(v.Type().Elem()):
mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr))
case exprType.AssignableTo(v.Type().Elem()):
mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr.Expr))
default:
ev := reflect.New(v.Type().Elem())
diags = append(diags, o.DecodeExpression(attr.Expr, ctx, ev.Interface())...)
mv.SetMapIndex(reflect.ValueOf(k), ev.Elem())
}
}
v.Set(mv)
return diags
}
func (o DecodeOptions) decodeBlockToValue(block *hcl.Block, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics {
diags := o.decodeBodyToValue(block.Body, ctx, v)
if len(block.Labels) > 0 {
blockTags := getFieldTags(v.Type())
for li, lv := range block.Labels {
lfieldIdx := blockTags.Labels[li].FieldIndex
v.Field(lfieldIdx).Set(reflect.ValueOf(lv))
}
}
return diags
}
func (o DecodeOptions) DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
o = o.withDefaults()
srcVal, diags := expr.Value(ctx)
convTy, err := o.ImpliedType(val)
if err != nil {
panic(fmt.Sprintf("unsuitable DecodeExpression target: %s", err))
}
srcVal, err = o.Convert(srcVal, convTy)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsuitable value type",
Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()),
Subject: expr.StartRange().Ptr(),
Context: expr.Range().Ptr(),
})
return diags
}
err = gocty.FromCtyValue(srcVal, val)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsuitable value type",
Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()),
Subject: expr.StartRange().Ptr(),
Context: expr.Range().Ptr(),
})
}
return diags
}
// DecodeExpression extracts the value of the given expression into the given
// value. This value must be something that gocty is able to decode into,
// since the final decoding is delegated to that package.
//
// The given EvalContext is used to resolve any variables or functions in
// expressions encountered while decoding. This may be nil to require only
// constant values, for simple applications that do not support variables or
// functions.
//
// The returned diagnostics should be inspected with its HasErrors method to
// determine if the populated value is valid and complete. If error diagnostics
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
func DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
return DecodeOptions{}.DecodeExpression(expr, ctx, val)
}
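// Editor's sketch (not part of this diff): DecodeExpression delegates the
// final conversion to gocty, so any gocty-decodable target works.
// Hypothetical values:
//
//	var count int
//	expr := hcl.StaticExpr(cty.NumberIntVal(3), hcl.Range{})
//	diags := DecodeExpression(expr, nil, &count) // on success, count == 3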
func (o DecodeOptions) withDefaults() DecodeOptions {
if o.ImpliedType == nil {
o.ImpliedType = gocty.ImpliedType
}
if o.Convert == nil {
o.Convert = convert.Convert
}
return o
}
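// Editor's sketch (not part of this diff): DecodeOptions lets a caller
// replace the type-inference and conversion hooks; anything left nil falls
// back to gocty.ImpliedType and convert.Convert via withDefaults above.
// A hypothetical Convert hook that logs before delegating (log would need
// to be imported):
//
//	opts := DecodeOptions{
//		Convert: func(in cty.Value, want cty.Type) (cty.Value, error) {
//			log.Printf("converting %s to %s", in.Type().FriendlyName(), want.FriendlyName())
//			return convert.Convert(in, want)
//		},
//	}
//	diags := opts.DecodeBody(body, ctx, &target)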


@ -0,0 +1,805 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"encoding/json"
"fmt"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/hashicorp/hcl/v2"
hclJSON "github.com/hashicorp/hcl/v2/json"
"github.com/zclconf/go-cty/cty"
)
func TestDecodeBody(t *testing.T) {
deepEquals := func(other any) func(v any) bool {
return func(v any) bool {
return reflect.DeepEqual(v, other)
}
}
type withNameExpression struct {
Name hcl.Expression `hcl:"name"`
}
type withTwoAttributes struct {
A string `hcl:"a,optional"`
B string `hcl:"b,optional"`
}
type withNestedBlock struct {
Plain string `hcl:"plain,optional"`
Nested *withTwoAttributes `hcl:"nested,block"`
}
type withListofNestedBlocks struct {
Nested []*withTwoAttributes `hcl:"nested,block"`
}
type withListofNestedBlocksNoPointers struct {
Nested []withTwoAttributes `hcl:"nested,block"`
}
tests := []struct {
Body map[string]any
Target func() any
Check func(v any) bool
DiagCount int
}{
{
map[string]any{},
makeInstantiateType(struct{}{}),
deepEquals(struct{}{}),
0,
},
{
map[string]any{},
makeInstantiateType(struct {
Name string `hcl:"name"`
}{}),
deepEquals(struct {
Name string `hcl:"name"`
}{}),
1, // name is required
},
{
map[string]any{},
makeInstantiateType(struct {
Name *string `hcl:"name"`
}{}),
deepEquals(struct {
Name *string `hcl:"name"`
}{}),
0,
}, // name nil
{
map[string]any{},
makeInstantiateType(struct {
Name string `hcl:"name,optional"`
}{}),
deepEquals(struct {
Name string `hcl:"name,optional"`
}{}),
0,
}, // name optional
{
map[string]any{},
makeInstantiateType(withNameExpression{}),
func(v any) bool {
if v == nil {
return false
}
wne, valid := v.(withNameExpression)
if !valid {
return false
}
if wne.Name == nil {
return false
}
nameVal, _ := wne.Name.Value(nil)
return nameVal.IsNull()
},
0,
},
{
map[string]any{
"name": "Ermintrude",
},
makeInstantiateType(withNameExpression{}),
func(v any) bool {
if v == nil {
return false
}
wne, valid := v.(withNameExpression)
if !valid {
return false
}
if wne.Name == nil {
return false
}
nameVal, _ := wne.Name.Value(nil)
return nameVal.Equals(cty.StringVal("Ermintrude")).True()
},
0,
},
{
map[string]any{
"name": "Ermintrude",
},
makeInstantiateType(struct {
Name string `hcl:"name"`
}{}),
deepEquals(struct {
Name string `hcl:"name"`
}{"Ermintrude"}),
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 23,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
}{}),
deepEquals(struct {
Name string `hcl:"name"`
}{"Ermintrude"}),
1, // Extraneous "age" property
},
{
map[string]any{
"name": "Ermintrude",
"age": 50,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
Attrs hcl.Attributes `hcl:",remain"`
}{}),
func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Attrs hcl.Attributes `hcl:",remain"`
})
return got.Name == "Ermintrude" && len(got.Attrs) == 1 && got.Attrs["age"] != nil
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 50,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
Remain hcl.Body `hcl:",remain"`
}{}),
func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Remain hcl.Body `hcl:",remain"`
})
attrs, _ := got.Remain.JustAttributes()
return got.Name == "Ermintrude" && len(attrs) == 1 && attrs["age"] != nil
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"living": true,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
Remain map[string]cty.Value `hcl:",remain"`
}{}),
deepEquals(struct {
Name string `hcl:"name"`
Remain map[string]cty.Value `hcl:",remain"`
}{
Name: "Ermintrude",
Remain: map[string]cty.Value{
"living": cty.True,
},
}),
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 50,
},
makeInstantiateType(struct {
Name string `hcl:"name"`
Body hcl.Body `hcl:",body"`
Remain hcl.Body `hcl:",remain"`
}{}),
func(gotI any) bool {
got := gotI.(struct {
Name string `hcl:"name"`
Body hcl.Body `hcl:",body"`
Remain hcl.Body `hcl:",remain"`
})
attrs, _ := got.Body.JustAttributes()
return got.Name == "Ermintrude" && len(attrs) == 2 &&
attrs["name"] != nil && attrs["age"] != nil
},
0,
},
{
map[string]any{
"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating no diagnostics is good enough for this one.
return true
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating no diagnostics is good enough for this one.
return true
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
map[string]any{},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
map[string]any{
"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
map[string]any{
"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle != nil
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle != nil
},
0,
},
{
map[string]any{
"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
return gotI.(struct {
Noodle *struct{} `hcl:"noodle,block"`
}).Noodle == nil
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle *struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating one diagnostic is good enough for this one.
return true
},
1,
},
{
map[string]any{
"noodle": []map[string]any{},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
return len(noodle) == 0
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
return len(noodle) == 1
},
0,
},
{
map[string]any{
"noodle": []map[string]any{{}, {}},
},
makeInstantiateType(struct {
Noodle []struct{} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle []struct{} `hcl:"noodle,block"`
}).Noodle
return len(noodle) == 2
},
0,
},
{
map[string]any{
"noodle": map[string]any{},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// Generating two diagnostics is good enough for this one.
// (one for the missing noodle block and the other for
// the JSON serialization detecting the missing level of
// hierarchy for the label.)
return true
},
2,
},
{
map[string]any{
"noodle": map[string]any{
"foo_foo": map[string]any{},
},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}).Noodle
return noodle.Name == "foo_foo"
},
0,
},
{
map[string]any{
"noodle": map[string]any{
"foo_foo": map[string]any{},
"bar_baz": map[string]any{},
},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
// One diagnostic is enough for this one.
return true
},
1,
},
{
map[string]any{
"noodle": map[string]any{
"foo_foo": map[string]any{},
"bar_baz": map[string]any{},
},
},
makeInstantiateType(struct {
Noodles []struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodles := gotI.(struct {
Noodles []struct {
Name string `hcl:"name,label"`
} `hcl:"noodle,block"`
}).Noodles
return len(noodles) == 2 && (noodles[0].Name == "foo_foo" || noodles[0].Name == "bar_baz") && (noodles[1].Name == "foo_foo" || noodles[1].Name == "bar_baz") && noodles[0].Name != noodles[1].Name
},
0,
},
{
map[string]any{
"noodle": map[string]any{
"foo_foo": map[string]any{
"type": "rice",
},
},
},
makeInstantiateType(struct {
Noodle struct {
Name string `hcl:"name,label"`
Type string `hcl:"type"`
} `hcl:"noodle,block"`
}{}),
func(gotI any) bool {
noodle := gotI.(struct {
Noodle struct {
Name string `hcl:"name,label"`
Type string `hcl:"type"`
} `hcl:"noodle,block"`
}).Noodle
return noodle.Name == "foo_foo" && noodle.Type == "rice"
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 34,
},
makeInstantiateType(map[string]string(nil)),
deepEquals(map[string]string{
"name": "Ermintrude",
"age": "34",
}),
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 89,
},
makeInstantiateType(map[string]*hcl.Attribute(nil)),
func(gotI any) bool {
got := gotI.(map[string]*hcl.Attribute)
return len(got) == 2 && got["name"] != nil && got["age"] != nil
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"age": 13,
},
makeInstantiateType(map[string]hcl.Expression(nil)),
func(gotI any) bool {
got := gotI.(map[string]hcl.Expression)
return len(got) == 2 && got["name"] != nil && got["age"] != nil
},
0,
},
{
map[string]any{
"name": "Ermintrude",
"living": true,
},
makeInstantiateType(map[string]cty.Value(nil)),
deepEquals(map[string]cty.Value{
"name": cty.StringVal("Ermintrude"),
"living": cty.True,
}),
0,
},
{
// Retain "nested" block while decoding
map[string]any{
"plain": "foo",
},
func() any {
return &withNestedBlock{
Plain: "bar",
Nested: &withTwoAttributes{
A: "bar",
},
}
},
func(gotI any) bool {
foo := gotI.(withNestedBlock)
return foo.Plain == "foo" && foo.Nested != nil && foo.Nested.A == "bar"
},
0,
},
{
// Retain values in "nested" block while decoding
map[string]any{
"nested": map[string]any{
"a": "foo",
},
},
func() any {
return &withNestedBlock{
Nested: &withTwoAttributes{
B: "bar",
},
}
},
func(gotI any) bool {
foo := gotI.(withNestedBlock)
return foo.Nested.A == "foo" && foo.Nested.B == "bar"
},
0,
},
{
// Retain values in "nested" block list while decoding
map[string]any{
"nested": []map[string]any{
{
"a": "foo",
},
},
},
func() any {
return &withListofNestedBlocks{
Nested: []*withTwoAttributes{
{
B: "bar",
},
},
}
},
func(gotI any) bool {
n := gotI.(withListofNestedBlocks)
return n.Nested[0].A == "foo" && n.Nested[0].B == "bar"
},
0,
},
{
// Remove additional elements from the list while decoding nested blocks
map[string]any{
"nested": []map[string]any{
{
"a": "foo",
},
},
},
func() any {
return &withListofNestedBlocks{
Nested: []*withTwoAttributes{
{
B: "bar",
},
{
B: "bar",
},
},
}
},
func(gotI any) bool {
n := gotI.(withListofNestedBlocks)
return len(n.Nested) == 1
},
0,
},
{
// Make sure decoding value slices works the same as pointer slices.
map[string]any{
"nested": []map[string]any{
{
"b": "bar",
},
{
"b": "baz",
},
},
},
func() any {
return &withListofNestedBlocksNoPointers{
Nested: []withTwoAttributes{
{
B: "foo",
},
},
}
},
func(gotI any) bool {
n := gotI.(withListofNestedBlocksNoPointers)
return n.Nested[0].B == "bar" && len(n.Nested) == 2
},
0,
},
}
for i, test := range tests {
// For convenience here we're going to use the JSON parser
// to process the given body.
buf, err := json.Marshal(test.Body)
if err != nil {
t.Fatalf("error JSON-encoding body for test %d: %s", i, err)
}
t.Run(string(buf), func(t *testing.T) {
file, diags := hclJSON.Parse(buf, "test.json")
if len(diags) != 0 {
t.Fatalf("diagnostics while parsing: %s", diags.Error())
}
targetVal := reflect.ValueOf(test.Target())
diags = DecodeBody(file.Body, nil, targetVal.Interface())
if len(diags) != test.DiagCount {
t.Errorf("wrong number of diagnostics %d; want %d", len(diags), test.DiagCount)
for _, diag := range diags {
t.Logf(" - %s", diag.Error())
}
}
got := targetVal.Elem().Interface()
if !test.Check(got) {
t.Errorf("wrong result\ngot: %s", spew.Sdump(got))
}
})
}
}
func TestDecodeExpression(t *testing.T) {
tests := []struct {
Value cty.Value
Target any
Want any
DiagCount int
}{
{
cty.StringVal("hello"),
"",
"hello",
0,
},
{
cty.StringVal("hello"),
cty.NilVal,
cty.StringVal("hello"),
0,
},
{
cty.NumberIntVal(2),
"",
"2",
0,
},
{
cty.StringVal("true"),
false,
true,
0,
},
{
cty.NullVal(cty.String),
"",
"",
1, // null value is not allowed
},
{
cty.UnknownVal(cty.String),
"",
"",
1, // value must be known
},
{
cty.ListVal([]cty.Value{cty.True}),
false,
false,
1, // bool required
},
}
for i, test := range tests {
t.Run(fmt.Sprintf("%02d", i), func(t *testing.T) {
expr := &fixedExpression{test.Value}
targetVal := reflect.New(reflect.TypeOf(test.Target))
diags := DecodeExpression(expr, nil, targetVal.Interface())
if len(diags) != test.DiagCount {
t.Errorf("wrong number of diagnostics %d; want %d", len(diags), test.DiagCount)
for _, diag := range diags {
t.Logf(" - %s", diag.Error())
}
}
got := targetVal.Elem().Interface()
if !reflect.DeepEqual(got, test.Want) {
t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want)
}
})
}
}
type fixedExpression struct {
val cty.Value
}
func (e *fixedExpression) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return e.val, nil
}
func (e *fixedExpression) Range() (r hcl.Range) {
return
}
func (e *fixedExpression) StartRange() (r hcl.Range) {
return
}
func (e *fixedExpression) Variables() []hcl.Traversal {
return nil
}
func makeInstantiateType(target any) func() any {
return func() any {
return reflect.New(reflect.TypeOf(target)).Interface()
}
}


@ -0,0 +1,65 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Package gohcl allows decoding HCL configurations into Go data structures.
//
// It provides a convenient and concise way of describing the schema for
// configuration and then accessing the resulting data via native Go
// types.
//
// A struct field tag scheme is used, similar to other decoding and
// unmarshalling libraries. The tags are formatted as in the following example:
//
// ThingType string `hcl:"thing_type,attr"`
//
// Within each tag there are two comma-separated tokens. The first is the
// name of the corresponding construct in configuration, while the second
// is a keyword giving the kind of construct expected. The following
// kind keywords are supported:
//
// attr (the default) indicates that the value is to be populated from an attribute
block indicates that the value is to be populated from a block
label indicates that the value is to be populated from a block label
// optional is the same as attr, but the field is optional
// remain indicates that the value is to be populated from the remaining body after populating other fields
//
// "attr" fields may either be of type *hcl.Expression, in which case the raw
// expression is assigned, or of any type accepted by gocty, in which case
// gocty will be used to assign the value to a native Go type.
//
// "block" fields may be a struct that recursively uses the same tags, or a
// slice of such structs, in which case multiple blocks of the corresponding
// type are decoded into the slice.
//
// "body" can be placed on a single field of type hcl.Body to capture
// the full hcl.Body that was decoded for a block. This does not allow leftover
// values like "remain", so a decoding error will still be returned if leftover
// fields are given. If you want to capture the decoding body PLUS leftover
// fields, you must specify a "remain" field as well to prevent errors. The
// body field and the remain field will both contain the leftover fields.
//
// "label" fields are considered only in a struct used as the type of a field
// marked as "block", and are used sequentially to capture the labels of
// the blocks being decoded. In this case, the name token is used only as
// an identifier for the label in diagnostic messages.
//
// "optional" fields behave like "attr" fields, but they are optional
// and will not give parsing errors if they are missing.
//
// "remain" can be placed on a single field that may be either of type
// hcl.Body or hcl.Attributes, in which case any remaining body content is
// placed into this field for delayed processing. If no "remain" field is
// present then any attributes or blocks not matched by another valid tag
// will cause an error diagnostic.
//
// Only a subset of this tagging/typing vocabulary is supported for the
// "Encode" family of functions. See the EncodeIntoBody docs for full details
// on the constraints there.
//
// Broadly-speaking this package deals with two types of error. The first is
// errors in the configuration itself, which are returned as diagnostics
// written with the configuration author as the target audience. The second
// is bugs in the calling program, such as invalid struct tags, which are
// surfaced via panics since there can be no useful runtime handling of such
// errors and they should certainly not be returned to the user as diagnostics.
package gohcl
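// Editor's sketch (not part of this diff): the tag vocabulary above on one
// hypothetical struct.
//
//	type Service struct {
//		Name     string   `hcl:"name,label"`        // first block label
//		Image    string   `hcl:"image"`             // required attribute
//		Replicas int      `hcl:"replicas,optional"` // optional attribute
//		Rest     hcl.Body `hcl:",remain"`           // leftover body for later decoding
//	}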


@ -0,0 +1,192 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"fmt"
"reflect"
"sort"
"github.com/hashicorp/hcl/v2/hclwrite"
"github.com/zclconf/go-cty/cty/gocty"
)
// EncodeIntoBody replaces the contents of the given hclwrite Body with
// attributes and blocks derived from the given value, which must be a
// struct value or a pointer to a struct value with the struct tags defined
// in this package.
//
// This function can work only with fully-decoded data. It will ignore any
// fields tagged as "remain", any fields that decode attributes into either
// hcl.Attribute or hcl.Expression values, and any fields that decode blocks
// into hcl.Attributes values. This function does not have enough information
// to complete the decoding of these types.
//
// Any fields tagged as "label" are ignored by this function. Use EncodeAsBlock
// to produce a whole hclwrite.Block including block labels.
//
// As long as a suitable value is given to encode and the destination body
// is non-nil, this function will always complete. It will panic in case of
// any errors in the calling program, such as passing an inappropriate type
// or a nil body.
//
// The layout of the resulting HCL source is derived from the ordering of
// the struct fields, with blank lines around nested blocks of different types.
// Fields representing attributes should usually precede those representing
// blocks so that the attributes can group together in the result. For more
// control, use the hclwrite API directly.
func EncodeIntoBody(val any, dst *hclwrite.Body) {
rv := reflect.ValueOf(val)
ty := rv.Type()
if ty.Kind() == reflect.Ptr {
rv = rv.Elem()
ty = rv.Type()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("value is %s, not struct", ty.Kind()))
}
tags := getFieldTags(ty)
populateBody(rv, ty, tags, dst)
}
// EncodeAsBlock creates a new hclwrite.Block populated with the data from
// the given value, which must be a struct or pointer to struct with the
// struct tags defined in this package.
//
// If the given struct type has fields tagged with "label" tags then they
// will be used in order to annotate the created block with labels.
//
// This function has the same constraints as EncodeIntoBody and will panic
// if they are violated.
func EncodeAsBlock(val any, blockType string) *hclwrite.Block {
rv := reflect.ValueOf(val)
ty := rv.Type()
if ty.Kind() == reflect.Ptr {
rv = rv.Elem()
ty = rv.Type()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("value is %s, not struct", ty.Kind()))
}
tags := getFieldTags(ty)
labels := make([]string, len(tags.Labels))
for i, lf := range tags.Labels {
lv := rv.Field(lf.FieldIndex)
// We just stringify whatever we find. It should always be a string
// but if not then we'll still do something reasonable.
labels[i] = fmt.Sprintf("%s", lv.Interface())
}
block := hclwrite.NewBlock(blockType, labels)
populateBody(rv, ty, tags, block.Body())
return block
}
func populateBody(rv reflect.Value, ty reflect.Type, tags *fieldTags, dst *hclwrite.Body) {
nameIdxs := make(map[string]int, len(tags.Attributes)+len(tags.Blocks))
namesOrder := make([]string, 0, len(tags.Attributes)+len(tags.Blocks))
for n, i := range tags.Attributes {
nameIdxs[n] = i
namesOrder = append(namesOrder, n)
}
for n, i := range tags.Blocks {
nameIdxs[n] = i
namesOrder = append(namesOrder, n)
}
sort.SliceStable(namesOrder, func(i, j int) bool {
ni, nj := namesOrder[i], namesOrder[j]
return nameIdxs[ni] < nameIdxs[nj]
})
dst.Clear()
prevWasBlock := false
for _, name := range namesOrder {
fieldIdx := nameIdxs[name]
field := ty.Field(fieldIdx)
fieldTy := field.Type
fieldVal := rv.Field(fieldIdx)
if fieldTy.Kind() == reflect.Ptr {
fieldTy = fieldTy.Elem()
fieldVal = fieldVal.Elem()
}
if _, isAttr := tags.Attributes[name]; isAttr {
if exprType.AssignableTo(fieldTy) || attrType.AssignableTo(fieldTy) {
continue // ignore undecoded fields
}
if !fieldVal.IsValid() {
continue // ignore (field value is nil pointer)
}
if fieldTy.Kind() == reflect.Ptr && fieldVal.IsNil() {
continue // ignore
}
if prevWasBlock {
dst.AppendNewline()
prevWasBlock = false
}
valTy, err := gocty.ImpliedType(fieldVal.Interface())
if err != nil {
panic(fmt.Sprintf("cannot encode %T as HCL expression: %s", fieldVal.Interface(), err))
}
val, err := gocty.ToCtyValue(fieldVal.Interface(), valTy)
if err != nil {
// This should never happen, since we should always be able
// to decode into the implied type.
panic(fmt.Sprintf("failed to encode %T as %#v: %s", fieldVal.Interface(), valTy, err))
}
dst.SetAttributeValue(name, val)
} else { // must be a block, then
elemTy := fieldTy
isSeq := false
if elemTy.Kind() == reflect.Slice || elemTy.Kind() == reflect.Array {
isSeq = true
elemTy = elemTy.Elem()
}
if bodyType.AssignableTo(elemTy) || attrsType.AssignableTo(elemTy) {
continue // ignore undecoded fields
}
prevWasBlock = false
if isSeq {
l := fieldVal.Len()
for i := range l {
elemVal := fieldVal.Index(i)
if !elemVal.IsValid() {
continue // ignore (elem value is nil pointer)
}
if elemTy.Kind() == reflect.Ptr && elemVal.IsNil() {
continue // ignore
}
block := EncodeAsBlock(elemVal.Interface(), name)
if !prevWasBlock {
dst.AppendNewline()
prevWasBlock = true
}
dst.AppendBlock(block)
}
} else {
if !fieldVal.IsValid() {
continue // ignore (field value is nil pointer)
}
if elemTy.Kind() == reflect.Ptr && fieldVal.IsNil() {
continue // ignore
}
block := EncodeAsBlock(fieldVal.Interface(), name)
if !prevWasBlock {
dst.AppendNewline()
prevWasBlock = true
}
dst.AppendBlock(block)
}
}
}
}

View File

@ -0,0 +1,67 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl_test
import (
"fmt"
"github.com/hashicorp/hcl/v2/gohcl"
"github.com/hashicorp/hcl/v2/hclwrite"
)
func ExampleEncodeIntoBody() {
type Service struct {
Name string `hcl:"name,label"`
Exe []string `hcl:"executable"`
}
type Constraints struct {
OS string `hcl:"os"`
Arch string `hcl:"arch"`
}
type App struct {
Name string `hcl:"name"`
Desc string `hcl:"description"`
Constraints *Constraints `hcl:"constraints,block"`
Services []Service `hcl:"service,block"`
}
app := App{
Name: "awesome-app",
Desc: "Such an awesome application",
Constraints: &Constraints{
OS: "linux",
Arch: "amd64",
},
Services: []Service{
{
Name: "web",
Exe: []string{"./web", "--listen=:8080"},
},
{
Name: "worker",
Exe: []string{"./worker"},
},
},
}
f := hclwrite.NewEmptyFile()
gohcl.EncodeIntoBody(&app, f.Body())
fmt.Printf("%s", f.Bytes())
// Output:
// name = "awesome-app"
// description = "Such an awesome application"
//
// constraints {
// os = "linux"
// arch = "amd64"
// }
//
// service "web" {
// executable = ["./web", "--listen=:8080"]
// }
// service "worker" {
// executable = ["./worker"]
// }
}
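
EncodeAsBlock, by contrast, wraps the same data in a single labeled hclwrite.Block. A minimal sketch, assuming a Service type like the one in the example above:

block := gohcl.EncodeAsBlock(&Service{
	Name: "web",
	Exe:  []string{"./web"},
}, "service")
f := hclwrite.NewEmptyFile()
f.Body().AppendBlock(block)
fmt.Printf("%s", f.Bytes())
// service "web" {
//   executable = ["./web"]
// }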

View File

@ -0,0 +1,184 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"fmt"
"reflect"
"sort"
"strings"
"github.com/hashicorp/hcl/v2"
)
// ImpliedBodySchema produces a hcl.BodySchema derived from the type of the
// given value, which must be a struct value or a pointer to one. If an
// inappropriate value is passed, this function will panic.
//
// The second return argument indicates whether the given struct includes
// a "remain" field, and thus the returned schema is non-exhaustive.
//
// This uses the tags on the fields of the struct to discover how each
// field's value should be expressed within configuration. If an invalid
// mapping is attempted, this function will panic.
func ImpliedBodySchema(val any) (schema *hcl.BodySchema, partial bool) {
ty := reflect.TypeOf(val)
if ty.Kind() == reflect.Ptr {
ty = ty.Elem()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("given value must be struct, not %T", val))
}
var attrSchemas []hcl.AttributeSchema
var blockSchemas []hcl.BlockHeaderSchema
tags := getFieldTags(ty)
attrNames := make([]string, 0, len(tags.Attributes))
for n := range tags.Attributes {
attrNames = append(attrNames, n)
}
sort.Strings(attrNames)
for _, n := range attrNames {
idx := tags.Attributes[n]
optional := tags.Optional[n]
field := ty.Field(idx)
var required bool
switch {
case field.Type.AssignableTo(exprType):
// If we're decoding to hcl.Expression then absence can be
// indicated via a null value, so we don't specify that
// the field is required during decoding.
required = false
case field.Type.Kind() != reflect.Ptr && !optional:
required = true
default:
required = false
}
attrSchemas = append(attrSchemas, hcl.AttributeSchema{
Name: n,
Required: required,
})
}
blockNames := make([]string, 0, len(tags.Blocks))
for n := range tags.Blocks {
blockNames = append(blockNames, n)
}
sort.Strings(blockNames)
for _, n := range blockNames {
idx := tags.Blocks[n]
field := ty.Field(idx)
fty := field.Type
if fty.Kind() == reflect.Slice {
fty = fty.Elem()
}
if fty.Kind() == reflect.Ptr {
fty = fty.Elem()
}
if fty.Kind() != reflect.Struct {
panic(fmt.Sprintf(
"hcl 'block' tag kind cannot be applied to %s field %s: struct required", field.Type.String(), field.Name,
))
}
ftags := getFieldTags(fty)
var labelNames []string
if len(ftags.Labels) > 0 {
labelNames = make([]string, len(ftags.Labels))
for i, l := range ftags.Labels {
labelNames[i] = l.Name
}
}
blockSchemas = append(blockSchemas, hcl.BlockHeaderSchema{
Type: n,
LabelNames: labelNames,
})
}
partial = tags.Remain != nil
schema = &hcl.BodySchema{
Attributes: attrSchemas,
Blocks: blockSchemas,
}
return schema, partial
}
type fieldTags struct {
Attributes map[string]int
Blocks map[string]int
Labels []labelField
Remain *int
Body *int
Optional map[string]bool
}
type labelField struct {
FieldIndex int
Name string
}
func getFieldTags(ty reflect.Type) *fieldTags {
ret := &fieldTags{
Attributes: map[string]int{},
Blocks: map[string]int{},
Optional: map[string]bool{},
}
ct := ty.NumField()
for i := range ct {
field := ty.Field(i)
tag := field.Tag.Get("hcl")
if tag == "" {
continue
}
comma := strings.Index(tag, ",")
var name, kind string
if comma != -1 {
name = tag[:comma]
kind = tag[comma+1:]
} else {
name = tag
kind = "attr"
}
switch kind {
case "attr":
ret.Attributes[name] = i
case "block":
ret.Blocks[name] = i
case "label":
ret.Labels = append(ret.Labels, labelField{
FieldIndex: i,
Name: name,
})
case "remain":
if ret.Remain != nil {
panic("only one 'remain' tag is permitted")
}
idx := i // copy, because this loop will continue assigning to i
ret.Remain = &idx
case "body":
if ret.Body != nil {
panic("only one 'body' tag is permitted")
}
idx := i // copy, because this loop will continue assigning to i
ret.Body = &idx
case "optional":
ret.Attributes[name] = i
ret.Optional[name] = true
default:
panic(fmt.Sprintf("invalid hcl field tag kind %q on %s %q", kind, field.Type.String(), field.Name))
}
}
return ret
}
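
A quick sketch of what ImpliedBodySchema derives from a tagged struct (the app type is illustrative):

type app struct {
	Name string   `hcl:"name"`
	Desc string   `hcl:"description,optional"`
	Rest hcl.Body `hcl:",remain"`
}

schema, partial := ImpliedBodySchema(&app{})
// schema.Attributes: "description" (optional) and "name" (required), sorted by name
// partial == true, because the struct declares a ",remain" field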

View File

@ -0,0 +1,233 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"fmt"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/hashicorp/hcl/v2"
)
func TestImpliedBodySchema(t *testing.T) {
tests := []struct {
val any
wantSchema *hcl.BodySchema
wantPartial bool
}{
{
struct{}{},
&hcl.BodySchema{},
false,
},
{
struct {
Ignored bool
}{},
&hcl.BodySchema{},
false,
},
{
struct {
Attr1 bool `hcl:"attr1"`
Attr2 bool `hcl:"attr2"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "attr1",
Required: true,
},
{
Name: "attr2",
Required: true,
},
},
},
false,
},
{
struct {
Attr *bool `hcl:"attr,attr"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "attr",
Required: false,
},
},
},
false,
},
{
struct {
Thing struct{} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
},
},
},
false,
},
{
struct {
Thing struct {
Type string `hcl:"type,label"`
Name string `hcl:"name,label"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"type", "name"},
},
},
},
false,
},
{
struct {
Thing []struct {
Type string `hcl:"type,label"`
Name string `hcl:"name,label"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"type", "name"},
},
},
},
false,
},
{
struct {
Thing *struct {
Type string `hcl:"type,label"`
Name string `hcl:"name,label"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"type", "name"},
},
},
},
false,
},
{
struct {
Thing struct {
Name string `hcl:"name,label"`
Something string `hcl:"something"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"name"},
},
},
},
false,
},
{
struct {
Doodad string `hcl:"doodad"`
Thing struct {
Name string `hcl:"name,label"`
} `hcl:"thing,block"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "doodad",
Required: true,
},
},
Blocks: []hcl.BlockHeaderSchema{
{
Type: "thing",
LabelNames: []string{"name"},
},
},
},
false,
},
{
struct {
Doodad string `hcl:"doodad"`
Config string `hcl:",remain"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "doodad",
Required: true,
},
},
},
true,
},
{
struct {
Expr hcl.Expression `hcl:"expr"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "expr",
Required: false,
},
},
},
false,
},
{
struct {
Meh string `hcl:"meh,optional"`
}{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "meh",
Required: false,
},
},
},
false,
},
}
for _, test := range tests {
t.Run(fmt.Sprintf("%#v", test.val), func(t *testing.T) {
schema, partial := ImpliedBodySchema(test.val)
if !reflect.DeepEqual(schema, test.wantSchema) {
t.Errorf(
"wrong schema\ngot: %s\nwant: %s",
spew.Sdump(schema), spew.Sdump(test.wantSchema),
)
}
if partial != test.wantPartial {
t.Errorf(
"wrong partial flag\ngot: %#v\nwant: %#v",
partial, test.wantPartial,
)
}
})
}
}

View File

@ -0,0 +1,19 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package gohcl
import (
"reflect"
"github.com/hashicorp/hcl/v2"
)
var victimExpr hcl.Expression
var victimBody hcl.Body
var exprType = reflect.TypeOf(&victimExpr).Elem()
var bodyType = reflect.TypeOf(&victimBody).Elem()
var blockType = reflect.TypeOf((*hcl.Block)(nil)) //nolint:unused
var attrType = reflect.TypeOf((*hcl.Attribute)(nil))
var attrsType = reflect.TypeOf(hcl.Attributes(nil))

View File

@ -7,17 +7,23 @@ import (
"math"
"math/big"
"reflect"
"slices"
"strconv"
"strings"
"github.com/docker/buildx/bake/hclparser/gohcl"
"github.com/docker/buildx/util/userfunc"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/gohcl"
"github.com/hashicorp/hcl/v2/ext/typeexpr"
"github.com/pkg/errors"
"github.com/tonistiigi/go-csvvalue"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/gocty"
"github.com/zclconf/go-cty/cty/convert"
ctyjson "github.com/zclconf/go-cty/cty/json"
)
const jsonEnvOverrideSuffix = "_JSON"
type Opt struct {
LookupVar func(string) (string, bool)
Vars map[string]string
@ -25,15 +31,27 @@ type Opt struct {
}
type variable struct {
Name string `json:"-" hcl:"name,label"`
Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
Body hcl.Body `json:"-" hcl:",body"`
Name string `json:"-" hcl:"name,label"`
Type hcl.Expression `json:"type,omitempty" hcl:"type,optional"`
Default *hcl.Attribute `json:"default,omitempty" hcl:"default,optional"`
Description string `json:"description,omitempty" hcl:"description,optional"`
Validations []*variableValidation `json:"validation,omitempty" hcl:"validation,block"`
Body hcl.Body `json:"-" hcl:",body"`
Remain hcl.Body `json:"-" hcl:",remain"`
// the type described by Type if it was specified
constraint *cty.Type
}
type variableValidation struct {
Condition hcl.Expression `json:"condition" hcl:"condition"`
ErrorMessage hcl.Expression `json:"error_message" hcl:"error_message"`
}
type functionDef struct {
Name string `json:"-" hcl:"name,label"`
Params *hcl.Attribute `json:"params,omitempty" hcl:"params"`
Variadic *hcl.Attribute `json:"variadic_param,omitempty" hcl:"variadic_params"`
Variadic *hcl.Attribute `json:"variadic_params,omitempty" hcl:"variadic_params"`
Result *hcl.Attribute `json:"result,omitempty" hcl:"result"`
}
@ -73,7 +91,12 @@ type WithGetName interface {
GetName(ectx *hcl.EvalContext, block *hcl.Block, loadDeps func(hcl.Expression) hcl.Diagnostics) (string, error)
}
var errUndefined = errors.New("undefined")
// errUndefined is returned when a variable or function is not defined.
type errUndefined struct{}
func (errUndefined) Error() string {
return "undefined"
}
func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map[string]struct{}, allowMissing bool) hcl.Diagnostics {
fns, hcldiags := funcCalls(exp)
@ -83,7 +106,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
for _, fn := range fns {
if err := p.resolveFunction(ectx, fn); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
if allowMissing && errors.Is(err, errUndefined{}) {
continue
}
return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@ -137,7 +160,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
}
for _, block := range blocks {
if err := p.resolveBlock(block, target); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
if allowMissing && errors.Is(err, errUndefined{}) {
continue
}
return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@ -145,7 +168,7 @@ func (p *parser) loadDeps(ectx *hcl.EvalContext, exp hcl.Expression, exclude map
}
} else {
if err := p.resolveValue(ectx, v.RootName()); err != nil {
if allowMissing && errors.Is(err, errUndefined) {
if allowMissing && errors.Is(err, errUndefined{}) {
continue
}
return wrapErrorDiagnostic("Invalid expression", err, exp.Range().Ptr(), exp.Range().Ptr())
@ -167,7 +190,7 @@ func (p *parser) resolveFunction(ectx *hcl.EvalContext, name string) error {
}
f, ok := p.funcs[name]
if !ok {
return errors.Wrapf(errUndefined, "function %q does not exist", name)
return errors.Wrapf(errUndefined{}, "function %q does not exist", name)
}
if _, ok := p.progressF[key(ectx, name)]; ok {
return errors.Errorf("function cycle not allowed for %s", name)
@ -253,57 +276,92 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
}
}()
def, ok := p.attrs[name]
if _, builtin := p.opt.Vars[name]; !ok && !builtin {
vr, ok := p.vars[name]
if !ok {
return errors.Wrapf(errUndefined, "variable %q does not exist", name)
}
def = vr.Default
ectx = p.ectx
}
if def == nil {
val, ok := p.opt.Vars[name]
if !ok {
val, _ = p.opt.LookupVar(name)
}
// built-in vars aren't intended to be overridden and are statically typed as
// strings; there is no point in sending them through type checks or delaying
// their return
if val, ok := p.opt.Vars[name]; ok {
vv := cty.StringVal(val)
v = &vv
return
}
if diags := p.loadDeps(ectx, def.Expr, nil, true); diags.HasErrors() {
return diags
}
vv, diags := def.Expr.Value(ectx)
if diags.HasErrors() {
return diags
var diags hcl.Diagnostics
varType, typeSpecified := cty.DynamicPseudoType, false
def, ok := p.attrs[name]
if !ok {
vr, ok := p.vars[name]
if !ok {
return errors.Wrapf(errUndefined{}, "variable %q does not exist", name)
}
def = vr.Default
ectx = p.ectx
varType, diags = typeConstraint(vr.Type)
if diags.HasErrors() {
return diags
}
typeSpecified = !varType.Equals(cty.DynamicPseudoType) || hcl.ExprAsKeyword(vr.Type) == "any"
if typeSpecified {
vr.constraint = &varType
}
}
if def == nil {
// An untyped variable lacking a specified value is considered to have an empty
// string value. A typed variable with no value will result in a (typed) nil.
if _, ok, _ := p.valueHasOverride(name, false); !ok && !typeSpecified {
vv := cty.StringVal("")
v = &vv
return
}
}
var vv cty.Value
if def != nil {
if diags := p.loadDeps(ectx, def.Expr, nil, true); diags.HasErrors() {
return diags
}
vv, diags = def.Expr.Value(ectx)
if diags.HasErrors() {
return diags
}
vv, err = convert.Convert(vv, varType)
if err != nil {
return errors.Wrapf(err, "invalid type %s for variable %s default value", varType.FriendlyName(), name)
}
}
envv, hasEnv, jsonEnv := p.valueHasOverride(name, typeSpecified)
_, isVar := p.vars[name]
if envv, ok := p.opt.LookupVar(name); ok && isVar {
if hasEnv && isVar {
switch {
case vv.Type().Equals(cty.Bool):
b, err := strconv.ParseBool(envv)
case typeSpecified && jsonEnv:
vv, err = ctyjson.Unmarshal([]byte(envv), varType)
if err != nil {
return errors.Wrapf(err, "failed to parse %s as bool", name)
return errors.Wrapf(err, "failed to convert variable %s from JSON", name)
}
vv = cty.BoolVal(b)
case vv.Type().Equals(cty.String), vv.Type().Equals(cty.DynamicPseudoType):
case supportedCSVType(varType): // typing explicitly specified for selected complex types
vv, err = valueFromCSV(name, envv, varType)
if err != nil {
return errors.Wrapf(err, "failed to convert variable %s from CSV", name)
}
case typeSpecified && varType.IsPrimitiveType():
vv, err = convertPrimitive(name, envv, varType)
if err != nil {
return err
}
case typeSpecified:
// e.g., an 'object' not provided as JSON (which can't be expressed in the default CSV format)
return errors.Errorf("unsupported type %s for variable %s", varType.FriendlyName(), name)
case def == nil: // no default from which to infer typing
vv = cty.StringVal(envv)
case vv.Type().Equals(cty.Number):
n, err := strconv.ParseFloat(envv, 64)
if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
err = errors.Errorf("invalid number value")
}
case vv.Type().Equals(cty.DynamicPseudoType):
vv = cty.StringVal(envv)
case vv.Type().IsPrimitiveType():
vv, err = convertPrimitive(name, envv, vv.Type())
if err != nil {
return errors.Wrapf(err, "failed to parse %s as number", name)
return err
}
vv = cty.NumberVal(big.NewFloat(n))
default:
// TODO: support lists with csv values
return errors.Errorf("unsupported type %s for variable %s", vv.Type().FriendlyName(), name)
}
}
@ -311,6 +369,29 @@ func (p *parser) resolveValue(ectx *hcl.EvalContext, name string) (err error) {
return nil
}
// valueHasOverride returns a possible override value, whether one was
// specified, and whether it should be treated as a JSON value.
//
// A plain/CSV override is the default; this consolidates the logic around how
// a JSON-specific override is specified and when it is honored in the
// presence of naming conflicts or ambiguity.
func (p *parser) valueHasOverride(name string, favorJSON bool) (string, bool, bool) {
jsonEnv := false
envv, hasEnv := p.opt.LookupVar(name)
// If no plain override exists (!hasEnv) or JSON overrides are explicitly favored (favorJSON),
// check for a JSON-specific override with the "_JSON" suffix.
if !hasEnv || favorJSON {
jsonVarName := name + jsonEnvOverrideSuffix
_, builtin := p.opt.Vars[jsonVarName]
if _, ok := p.vars[jsonVarName]; !ok && !builtin {
if j, ok := p.opt.LookupVar(jsonVarName); ok {
envv = j
hasEnv, jsonEnv = true, true
}
}
}
return envv, hasEnv, jsonEnv
}
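// Illustration (hypothetical names and values): for a declared variable
//
//	variable "TAGS" { type = list(string) }
//
// an override may arrive either in plain/CSV form as TAGS=dev,latest or in
// JSON form as TAGS_JSON=["dev","latest"]. The JSON form is consulted only
// when no plain override exists or when favorJSON is set, and only if
// TAGS_JSON is not itself a declared variable or a built-in.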
// resolveBlock force evaluates a block, storing the result in the parser. If a
// target schema is provided, only the attributes and blocks present in the
// schema will be evaluated.
@ -441,7 +522,7 @@ func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err err
}
// decode!
diag = gohcl.DecodeBody(body(), ectx, output.Interface())
diag = decodeBody(body(), ectx, output.Interface())
if diag.HasErrors() {
return diag
}
@ -463,11 +544,11 @@ func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err err
}
// store the result into the evaluation context (so it can be referenced)
outputType, err := gocty.ImpliedType(output.Interface())
outputType, err := ImpliedType(output.Interface())
if err != nil {
return err
}
outputValue, err := gocty.ToCtyValue(output.Interface(), outputType)
outputValue, err := ToCtyValue(output.Interface(), outputType)
if err != nil {
return err
}
@ -479,7 +560,12 @@ func (p *parser) resolveBlock(block *hcl.Block, target *hcl.BodySchema) (err err
m = map[string]cty.Value{}
}
m[name] = outputValue
p.ectx.Variables[block.Type] = cty.MapVal(m)
// The logical contents of this structure is similar to a map,
// but it's possible for some attributes to be different in a way that's
// illegal for a map so we use an object here instead which is structurally
// equivalent but allows disparate types for different keys.
p.ectx.Variables[block.Type] = cty.ObjectVal(m)
}
return nil
@ -534,7 +620,76 @@ func (p *parser) resolveBlockNames(block *hcl.Block) ([]string, error) {
return names, nil
}
func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string, hcl.Diagnostics) {
func (p *parser) validateVariables(vars map[string]*variable, ectx *hcl.EvalContext) hcl.Diagnostics {
var diags hcl.Diagnostics
for _, v := range vars {
for _, rule := range v.Validations {
resultVal, condDiags := rule.Condition.Value(ectx)
if condDiags.HasErrors() {
diags = append(diags, condDiags...)
continue
}
if resultVal.IsNull() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid condition result",
Detail: "Condition expression must return either true or false, not null.",
Subject: rule.Condition.Range().Ptr(),
Expression: rule.Condition,
})
continue
}
var err error
resultVal, err = convert.Convert(resultVal, cty.Bool)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid condition result",
Detail: fmt.Sprintf("Invalid condition result value: %s", err),
Subject: rule.Condition.Range().Ptr(),
Expression: rule.Condition,
})
continue
}
if !resultVal.True() {
message, msgDiags := rule.ErrorMessage.Value(ectx)
if msgDiags.HasErrors() {
diags = append(diags, msgDiags...)
continue
}
errorMessage := "This check failed, but has an invalid error message."
if !message.IsNull() {
errorMessage = message.AsString()
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Validation failed",
Detail: errorMessage,
Subject: rule.Condition.Range().Ptr(),
})
}
}
}
return diags
}
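// For example, a validation rule like the following (HCL source, illustrative)
// yields a "Validation failed" diagnostic carrying the error_message when the
// condition evaluates to false:
//
//	variable "PORT" {
//	  default = 8080
//	  validation {
//	    condition     = PORT > 1024
//	    error_message = "PORT must be above 1024."
//	  }
//	}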
type Variable struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
Type string `json:"type,omitempty"`
Value *string `json:"value,omitempty"`
}
type ParseMeta struct {
Renamed map[string]map[string][]string
AllVariables []*Variable
}
func Parse(b hcl.Body, opt Opt, val any) (*ParseMeta, hcl.Diagnostics) {
reserved := map[string]struct{}{}
schema, _ := gohcl.ImpliedBodySchema(val)
@ -631,7 +786,6 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
}
for _, a := range content.Attributes {
a := a
return nil, hcl.Diagnostics{
&hcl.Diagnostic{
Severity: hcl.DiagError,
@ -643,6 +797,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
}
}
vars := make([]*Variable, 0, len(p.vars))
for k := range p.vars {
if err := p.resolveValue(p.ectx, k); err != nil {
if diags, ok := err.(hcl.Diagnostics); ok {
@ -651,6 +806,42 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
r := p.vars[k].Body.MissingItemRange()
return nil, wrapErrorDiagnostic("Invalid value", err, &r, &r)
}
v := &Variable{
Name: p.vars[k].Name,
Description: p.vars[k].Description,
}
tc := p.vars[k].constraint
if tc != nil {
v.Type = tc.FriendlyNameForConstraint()
}
if vv := p.ectx.Variables[k]; !vv.IsNull() {
var s string
switch {
case tc != nil:
if bs, err := ctyjson.Marshal(vv, *tc); err == nil {
s = string(bs)
// untyped strings were always unquoted, so be consistent with typed strings as well
if tc.Equals(cty.String) {
s = strings.Trim(s, "\"")
}
}
case vv.Type().IsPrimitiveType():
// all primitives can convert to string, so an error should never occur
if val, err := convert.Convert(vv, cty.String); err == nil {
s = val.AsString()
}
default:
// must be an (inferred) tuple or object
if bs, err := ctyjson.Marshal(vv, vv.Type()); err == nil {
s = string(bs)
}
}
v.Value = &s
}
vars = append(vars, v)
}
if diags := p.validateVariables(p.vars, p.ectx); diags.HasErrors() {
return nil, diags
}
for k := range p.funcs {
@ -665,7 +856,6 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
context = subject
} else {
for _, block := range blocks.Blocks {
block := block
if block.Type == "function" && len(block.Labels) == 1 && block.Labels[0] == k {
subject = block.LabelRanges[0].Ptr()
context = block.DefRange.Ptr()
@ -689,7 +879,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
types := map[string]field{}
renamed := map[string]map[string][]string{}
vt := reflect.ValueOf(val).Elem().Type()
for i := 0; i < vt.NumField(); i++ {
for i := range vt.NumField() {
tags := strings.Split(vt.Field(i).Tag.Get("hcl"), ",")
p.blockTypes[tags[0]] = vt.Field(i).Type.Elem().Elem()
@ -734,7 +924,6 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
diags = hcl.Diagnostics{}
for _, b := range content.Blocks {
b := b
v := reflect.ValueOf(val)
err := p.resolveBlock(b, nil)
@ -757,7 +946,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
oldValue, exists := t.values[lblName]
if !exists && lblExists {
if v.Elem().Field(t.idx).Type().Kind() == reflect.Slice {
for i := 0; i < v.Elem().Field(t.idx).Len(); i++ {
for i := range v.Elem().Field(t.idx).Len() {
if lblName == v.Elem().Field(t.idx).Index(i).Elem().Field(lblIndex).String() {
exists = true
oldValue = value{Value: v.Elem().Field(t.idx).Index(i), idx: i}
@ -767,7 +956,7 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
}
}
if exists {
if m := oldValue.Value.MethodByName("Merge"); m.IsValid() {
if m := oldValue.MethodByName("Merge"); m.IsValid() {
m.Call([]reflect.Value{vv})
} else {
v.Elem().Field(t.idx).Index(oldValue.idx).Set(vv)
@ -795,7 +984,145 @@ func Parse(b hcl.Body, opt Opt, val interface{}) (map[string]map[string][]string
}
}
return renamed, nil
return &ParseMeta{
Renamed: renamed,
AllVariables: vars,
}, nil
}
// typeConstraint wraps typeexpr.TypeConstraint to differentiate between errors in the
// specification and errors due to being cty.NullVal (not provided).
func typeConstraint(expr hcl.Expression) (cty.Type, hcl.Diagnostics) {
t, diag := typeexpr.TypeConstraint(expr)
if !diag.HasErrors() {
return t, diag
}
// if it had errors, it could be because the expression is 'nil', i.e., unspecified
if v, err := expr.Value(nil); err == nil {
if v.IsNull() {
return cty.DynamicPseudoType, nil
}
}
// even if the evaluation resulted in an error, the original (error) diagnostics are likely more useful
return t, diag
}
// convertPrimitive converts a single string primitive value to a given cty.Type.
func convertPrimitive(name, value string, target cty.Type) (cty.Value, error) {
switch {
case target.Equals(cty.String):
return cty.StringVal(value), nil
case target.Equals(cty.Bool):
b, err := strconv.ParseBool(value)
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse %s as bool", name)
}
return cty.BoolVal(b), nil
case target.Equals(cty.Number):
n, err := strconv.ParseFloat(value, 64)
if err == nil && (math.IsNaN(n) || math.IsInf(n, 0)) {
err = errors.Errorf("invalid number value")
}
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse %s as number", name)
}
return cty.NumberVal(big.NewFloat(n)), nil
default:
return cty.NilVal, errors.Errorf("%s of type %s is not a primitive", name, target.FriendlyName())
}
}
// supportedCSVType reports whether the given cty.Type might be convertible from a CSV string via valueFromCSV.
func supportedCSVType(t cty.Type) bool {
return t.IsListType() || t.IsSetType() || t.IsTupleType() || t.IsMapType()
}
// valueFromCSV takes a CSV value and converts it to the given cty.Type.
//
// This currently supports conversion to cty.List and cty.Set.
// It also contains preliminary support for cty.Map (the other collection type).
// While not considered a collection type, it also tentatively supports cty.Tuple.
func valueFromCSV(name, value string, target cty.Type) (cty.Value, error) {
fields, err := csvvalue.Fields(value, nil)
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse %s as CSV", value)
}
// used for lists and sets, which require identical processing and differ only in return type
singleTypeConvert := func(t cty.Type) ([]cty.Value, error) {
var elems []cty.Value
for _, f := range fields {
v, err := convertPrimitive(name, f, t)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse element of type %s", target.FriendlyName())
}
elems = append(elems, v)
}
return elems, nil
}
switch {
case target.IsListType():
if !target.ElementType().IsPrimitiveType() {
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
elems, err := singleTypeConvert(target.ElementType())
if err != nil {
return cty.NilVal, err
}
return cty.ListVal(elems), nil
case target.IsSetType():
if !target.ElementType().IsPrimitiveType() {
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
elems, err := singleTypeConvert(target.ElementType())
if err != nil {
return cty.NilVal, err
}
return cty.SetVal(elems), nil
case target.IsTupleType():
tupleTypes := target.TupleElementTypes()
if len(tupleTypes) != len(fields) {
return cty.NilVal, errors.Errorf("%s expects %d elements but only %d provided", target.FriendlyName(), len(tupleTypes), len(fields))
}
var elems []cty.Value
for i, f := range fields {
tt := tupleTypes[i]
if !tt.IsPrimitiveType() {
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
v, err := convertPrimitive(name, f, tt)
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse element of type %s", target.FriendlyName())
}
elems = append(elems, v)
}
return cty.TupleVal(elems), nil
case target.IsMapType():
if !target.ElementType().IsPrimitiveType() {
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
p := csvvalue.Parser{Comma: ':'}
var kvSlice []string
m := make(map[string]cty.Value)
for _, f := range fields {
kvSlice, err = p.Fields(f, kvSlice)
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse %s as k/v for variable %s", f, name)
}
if len(kvSlice) != 2 {
return cty.NilVal, errors.Errorf("expected one k/v pair but got %d pieces from %s", len(kvSlice), f)
}
v, err := convertPrimitive(name, kvSlice[1], target.ElementType())
if err != nil {
return cty.NilVal, errors.Wrapf(err, "failed to parse element from type %s", target.FriendlyName())
}
m[kvSlice[0]] = v
}
return cty.MapVal(m), nil
default:
return cty.NilVal, errors.Errorf("unsupported type %s for CSV specification", target.FriendlyName())
}
}
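// Examples of the accepted CSV forms (variable names and values illustrative):
//
//	list(string):            TAGS=dev,latest         -> ["dev", "latest"]
//	tuple([string, number]): PAIR=web,2              -> ["web", 2]
//	map(string):             LABELS=env:dev,tier:web -> {env = "dev", tier = "web"}
//
// Map entries reuse the CSV parser with ':' as the separator, so each field
// must split into exactly one key/value pair.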
// wrapErrorDiagnostic wraps an error into a hcl.Diagnostics object.
@ -821,7 +1148,7 @@ func wrapErrorDiagnostic(message string, err error, subject *hcl.Range, context
func setName(v reflect.Value, name string) {
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
for i := range numFields {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
if t == "label" {
@ -833,12 +1160,10 @@ func setName(v reflect.Value, name string) {
func getName(v reflect.Value) (string, bool) {
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
for i := range numFields {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
if t == "label" {
return v.Elem().Field(i).String(), true
}
if slices.Contains(parts[1:], "label") {
return v.Elem().Field(i).String(), true
}
}
return "", false
@ -846,12 +1171,10 @@ func getName(v reflect.Value) (string, bool) {
func getNameIndex(v reflect.Value) (int, bool) {
numFields := v.Elem().Type().NumField()
for i := 0; i < numFields; i++ {
for i := range numFields {
parts := strings.Split(v.Elem().Type().Field(i).Tag.Get("hcl"), ",")
for _, t := range parts[1:] {
if t == "label" {
return i, true
}
if slices.Contains(parts[1:], "label") {
return i, true
}
}
return 0, false
@ -910,3 +1233,8 @@ func key(ks ...any) uint64 {
}
return hash.Sum64()
}
func decodeBody(body hcl.Body, ctx *hcl.EvalContext, val any) hcl.Diagnostics {
dec := gohcl.DecodeOptions{ImpliedType: ImpliedType}
return dec.DecodeBody(body, ctx, val)
}
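
End to end, the reworked Parse now returns a *ParseMeta rather than only the renamed map. A minimal sketch (the destination struct, options, and variable names are illustrative; body is an hcl.Body parsed elsewhere):

var dest struct {
	Targets []*struct {
		Name string `hcl:"name,label"`
	} `hcl:"target,block"`
}

meta, diags := Parse(body, Opt{
	LookupVar: os.LookupEnv,
	Vars:      map[string]string{"BAKE_LOCAL_PLATFORM": "linux/amd64"},
}, &dest)
if diags.HasErrors() {
	// handle diagnostics
}
for _, v := range meta.AllVariables {
	if v.Value != nil {
		fmt.Println(v.Name, v.Type, *v.Value) // resolved variable metadata
	}
}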

View File

@ -1,8 +1,6 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Forked from https://github.com/hashicorp/hcl/blob/4679383728fe331fc8a6b46036a27b8f818d9bc0/merged.go
package hclparser
import (
@ -111,21 +109,19 @@ func (mb mergedBodies) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
diags = append(diags, thisDiags...)
}
if thisAttrs != nil {
for name, attr := range thisAttrs {
if existing := attrs[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisAttrs[name].NameRange.Ptr(),
})
}
attrs[name] = attr
for name, attr := range thisAttrs {
if existing := attrs[name]; existing != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf(
"Argument %q was already set at %s",
name, existing.NameRange.String(),
),
Subject: thisAttrs[name].NameRange.Ptr(),
})
}
attrs[name] = attr
}
}

View File

@ -0,0 +1,687 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package hclparser
import (
"fmt"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/hashicorp/hcl/v2"
)
func TestMergedBodiesContent(t *testing.T) {
tests := []struct {
Bodies []hcl.Body
Schema *hcl.BodySchema
Want *hcl.BodyContent
DiagCount int
}{
{
[]hcl.Body{},
&hcl.BodySchema{},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
0,
},
{
[]hcl.Body{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
0,
},
{
[]hcl.Body{},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
Required: true,
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
1,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
HasAttributes: []string{"name"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"name"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "second"},
},
},
},
1,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"age"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
{
Name: "age",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "first"},
},
"age": {
Name: "age",
NameRange: hcl.Range{Filename: "second"},
},
},
},
0,
},
{
[]hcl.Body{},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
HasBlocks: map[string]int{
"pizza": 1,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
HasBlocks: map[string]int{
"pizza": 2,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
},
{
Type: "pizza",
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasBlocks: map[string]int{
"pizza": 1,
},
},
&testMergedBodiesVictim{
Name: "second",
HasBlocks: map[string]int{
"pizza": 1,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
DefRange: hcl.Range{Filename: "first"},
},
{
Type: "pizza",
DefRange: hcl.Range{Filename: "second"},
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
},
&testMergedBodiesVictim{
Name: "second",
HasBlocks: map[string]int{
"pizza": 2,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
DefRange: hcl.Range{Filename: "second"},
},
{
Type: "pizza",
DefRange: hcl.Range{Filename: "second"},
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasBlocks: map[string]int{
"pizza": 2,
},
},
&testMergedBodiesVictim{
Name: "second",
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
DefRange: hcl.Range{Filename: "first"},
},
{
Type: "pizza",
DefRange: hcl.Range{Filename: "first"},
},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
},
&testMergedBodiesVictim{
Name: "second",
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
0,
},
}
for i, test := range tests {
t.Run(fmt.Sprintf("%02d", i), func(t *testing.T) {
merged := MergeBodies(test.Bodies)
got, diags := merged.Content(test.Schema)
if len(diags) != test.DiagCount {
t.Errorf("Wrong number of diagnostics %d; want %d", len(diags), test.DiagCount)
for _, diag := range diags {
t.Logf(" - %s", diag)
}
}
if !reflect.DeepEqual(got, test.Want) {
t.Errorf("wrong result\ngot: %s\nwant: %s", spew.Sdump(got), spew.Sdump(test.Want))
}
})
}
}
func TestMergeBodiesPartialContent(t *testing.T) {
tests := []struct {
Bodies []hcl.Body
Schema *hcl.BodySchema
WantContent *hcl.BodyContent
WantRemain hcl.Body
DiagCount int
}{
{
[]hcl.Body{},
&hcl.BodySchema{},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
},
mergedBodies{},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name", "age"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "first"},
},
},
},
mergedBodies{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"age"},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name", "age"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"name", "pizza"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "second"},
},
},
},
mergedBodies{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"age"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"pizza"},
},
},
1,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"name", "age"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"pizza", "soda"},
},
},
&hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "name",
},
{
Name: "soda",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{
"name": {
Name: "name",
NameRange: hcl.Range{Filename: "first"},
},
"soda": {
Name: "soda",
NameRange: hcl.Range{Filename: "second"},
},
},
},
mergedBodies{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{"age"},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{"pizza"},
},
},
0,
},
{
[]hcl.Body{
&testMergedBodiesVictim{
Name: "first",
HasBlocks: map[string]int{
"pizza": 1,
},
},
&testMergedBodiesVictim{
Name: "second",
HasBlocks: map[string]int{
"pizza": 1,
"soda": 2,
},
},
},
&hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "pizza",
},
},
},
&hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: hcl.Blocks{
{
Type: "pizza",
DefRange: hcl.Range{Filename: "first"},
},
{
Type: "pizza",
DefRange: hcl.Range{Filename: "second"},
},
},
},
mergedBodies{
&testMergedBodiesVictim{
Name: "first",
HasAttributes: []string{},
HasBlocks: map[string]int{},
},
&testMergedBodiesVictim{
Name: "second",
HasAttributes: []string{},
HasBlocks: map[string]int{
"soda": 2,
},
},
},
0,
},
}
for i, test := range tests {
t.Run(fmt.Sprintf("%02d", i), func(t *testing.T) {
merged := MergeBodies(test.Bodies)
got, gotRemain, diags := merged.PartialContent(test.Schema)
if len(diags) != test.DiagCount {
t.Errorf("Wrong number of diagnostics %d; want %d", len(diags), test.DiagCount)
for _, diag := range diags {
t.Logf(" - %s", diag)
}
}
if !reflect.DeepEqual(got, test.WantContent) {
t.Errorf("wrong content result\ngot: %s\nwant: %s", spew.Sdump(got), spew.Sdump(test.WantContent))
}
if !reflect.DeepEqual(gotRemain, test.WantRemain) {
t.Errorf("wrong remaining result\ngot: %s\nwant: %s", spew.Sdump(gotRemain), spew.Sdump(test.WantRemain))
}
})
}
}
type testMergedBodiesVictim struct {
Name string
HasAttributes []string
HasBlocks map[string]int
DiagCount int
}
func (v *testMergedBodiesVictim) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
c, _, d := v.PartialContent(schema)
return c, d
}
func (v *testMergedBodiesVictim) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
remain := &testMergedBodiesVictim{
Name: v.Name,
HasAttributes: []string{},
}
hasAttrs := map[string]struct{}{}
for _, n := range v.HasAttributes {
hasAttrs[n] = struct{}{}
var found bool
for _, attrS := range schema.Attributes {
if n == attrS.Name {
found = true
break
}
}
if !found {
remain.HasAttributes = append(remain.HasAttributes, n)
}
}
content := &hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
}
rng := hcl.Range{
Filename: v.Name,
}
for _, attrS := range schema.Attributes {
_, has := hasAttrs[attrS.Name]
if has {
content.Attributes[attrS.Name] = &hcl.Attribute{
Name: attrS.Name,
NameRange: rng,
}
}
}
if v.HasBlocks != nil {
for _, blockS := range schema.Blocks {
num := v.HasBlocks[blockS.Type]
for range num {
content.Blocks = append(content.Blocks, &hcl.Block{
Type: blockS.Type,
DefRange: rng,
})
}
}
remain.HasBlocks = map[string]int{}
for n := range v.HasBlocks {
var found bool
for _, blockS := range schema.Blocks {
if blockS.Type == n {
found = true
break
}
}
if !found {
remain.HasBlocks[n] = v.HasBlocks[n]
}
}
}
diags := make(hcl.Diagnostics, v.DiagCount)
for i := range diags {
diags[i] = &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Fake diagnostic %d", i),
Detail: "For testing only.",
Context: &rng,
}
}
return content, remain, diags
}
func (v *testMergedBodiesVictim) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
attrs := make(map[string]*hcl.Attribute)
rng := hcl.Range{
Filename: v.Name,
}
for _, name := range v.HasAttributes {
attrs[name] = &hcl.Attribute{
Name: name,
NameRange: rng,
}
}
diags := make(hcl.Diagnostics, v.DiagCount)
for i := range diags {
diags[i] = &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Fake diagnostic %d", i),
Detail: "For testing only.",
Context: &rng,
}
}
return attrs, diags
}
func (v *testMergedBodiesVictim) MissingItemRange() hcl.Range {
return hcl.Range{
Filename: v.Name,
}
}

View File

@ -1,8 +1,16 @@
package hclparser
import (
"errors"
"os"
"os/user"
"path"
"path/filepath"
"runtime"
"strings"
"time"
"github.com/docker/cli/cli/config"
"github.com/hashicorp/go-cty-funcs/cidr"
"github.com/hashicorp/go-cty-funcs/crypto"
"github.com/hashicorp/go-cty-funcs/encoding"
@ -14,122 +22,289 @@ import (
"github.com/zclconf/go-cty/cty/function/stdlib"
)
var stdlibFunctions = map[string]function.Function{
"absolute": stdlib.AbsoluteFunc,
"add": stdlib.AddFunc,
"and": stdlib.AndFunc,
"base64decode": encoding.Base64DecodeFunc,
"base64encode": encoding.Base64EncodeFunc,
"bcrypt": crypto.BcryptFunc,
"byteslen": stdlib.BytesLenFunc,
"bytesslice": stdlib.BytesSliceFunc,
"can": tryfunc.CanFunc,
"ceil": stdlib.CeilFunc,
"chomp": stdlib.ChompFunc,
"chunklist": stdlib.ChunklistFunc,
"cidrhost": cidr.HostFunc,
"cidrnetmask": cidr.NetmaskFunc,
"cidrsubnet": cidr.SubnetFunc,
"cidrsubnets": cidr.SubnetsFunc,
"coalesce": stdlib.CoalesceFunc,
"coalescelist": stdlib.CoalesceListFunc,
"compact": stdlib.CompactFunc,
"concat": stdlib.ConcatFunc,
"contains": stdlib.ContainsFunc,
"convert": typeexpr.ConvertFunc,
"csvdecode": stdlib.CSVDecodeFunc,
"distinct": stdlib.DistinctFunc,
"divide": stdlib.DivideFunc,
"element": stdlib.ElementFunc,
"equal": stdlib.EqualFunc,
"flatten": stdlib.FlattenFunc,
"floor": stdlib.FloorFunc,
"format": stdlib.FormatFunc,
"formatdate": stdlib.FormatDateFunc,
"formatlist": stdlib.FormatListFunc,
"greaterthan": stdlib.GreaterThanFunc,
"greaterthanorequalto": stdlib.GreaterThanOrEqualToFunc,
"hasindex": stdlib.HasIndexFunc,
"indent": stdlib.IndentFunc,
"index": stdlib.IndexFunc,
"int": stdlib.IntFunc,
"join": stdlib.JoinFunc,
"jsondecode": stdlib.JSONDecodeFunc,
"jsonencode": stdlib.JSONEncodeFunc,
"keys": stdlib.KeysFunc,
"length": stdlib.LengthFunc,
"lessthan": stdlib.LessThanFunc,
"lessthanorequalto": stdlib.LessThanOrEqualToFunc,
"log": stdlib.LogFunc,
"lookup": stdlib.LookupFunc,
"lower": stdlib.LowerFunc,
"max": stdlib.MaxFunc,
"md5": crypto.Md5Func,
"merge": stdlib.MergeFunc,
"min": stdlib.MinFunc,
"modulo": stdlib.ModuloFunc,
"multiply": stdlib.MultiplyFunc,
"negate": stdlib.NegateFunc,
"not": stdlib.NotFunc,
"notequal": stdlib.NotEqualFunc,
"or": stdlib.OrFunc,
"parseint": stdlib.ParseIntFunc,
"pow": stdlib.PowFunc,
"range": stdlib.RangeFunc,
"regex_replace": stdlib.RegexReplaceFunc,
"regex": stdlib.RegexFunc,
"regexall": stdlib.RegexAllFunc,
"replace": stdlib.ReplaceFunc,
"reverse": stdlib.ReverseFunc,
"reverselist": stdlib.ReverseListFunc,
"rsadecrypt": crypto.RsaDecryptFunc,
"sethaselement": stdlib.SetHasElementFunc,
"setintersection": stdlib.SetIntersectionFunc,
"setproduct": stdlib.SetProductFunc,
"setsubtract": stdlib.SetSubtractFunc,
"setsymmetricdifference": stdlib.SetSymmetricDifferenceFunc,
"setunion": stdlib.SetUnionFunc,
"sha1": crypto.Sha1Func,
"sha256": crypto.Sha256Func,
"sha512": crypto.Sha512Func,
"signum": stdlib.SignumFunc,
"slice": stdlib.SliceFunc,
"sort": stdlib.SortFunc,
"split": stdlib.SplitFunc,
"strlen": stdlib.StrlenFunc,
"substr": stdlib.SubstrFunc,
"subtract": stdlib.SubtractFunc,
"timeadd": stdlib.TimeAddFunc,
"timestamp": timestampFunc,
"title": stdlib.TitleFunc,
"trim": stdlib.TrimFunc,
"trimprefix": stdlib.TrimPrefixFunc,
"trimspace": stdlib.TrimSpaceFunc,
"trimsuffix": stdlib.TrimSuffixFunc,
"try": tryfunc.TryFunc,
"upper": stdlib.UpperFunc,
"urlencode": encoding.URLEncodeFunc,
"uuidv4": uuid.V4Func,
"uuidv5": uuid.V5Func,
"values": stdlib.ValuesFunc,
"zipmap": stdlib.ZipmapFunc,
type funcDef struct {
name string
descriptionAlt string
fn function.Function
factory func() function.Function
}
var stdlibFunctions = []funcDef{
{name: "absolute", fn: stdlib.AbsoluteFunc},
{name: "add", fn: stdlib.AddFunc},
{name: "and", fn: stdlib.AndFunc},
{name: "base64decode", fn: encoding.Base64DecodeFunc, descriptionAlt: `Decodes a string containing a base64 sequence.`},
{name: "base64encode", fn: encoding.Base64EncodeFunc, descriptionAlt: `Encodes a string to a base64 sequence.`},
{name: "basename", factory: basenameFunc},
{name: "bcrypt", fn: crypto.BcryptFunc, descriptionAlt: `Computes a hash of the given string using the Blowfish cipher.`},
{name: "byteslen", fn: stdlib.BytesLenFunc},
{name: "bytesslice", fn: stdlib.BytesSliceFunc},
{name: "can", fn: tryfunc.CanFunc, descriptionAlt: `Tries to evaluate the expression given in its first argument.`},
{name: "ceil", fn: stdlib.CeilFunc},
{name: "chomp", fn: stdlib.ChompFunc},
{name: "chunklist", fn: stdlib.ChunklistFunc},
{name: "cidrhost", fn: cidr.HostFunc, descriptionAlt: `Calculates a full host IP address within a given IP network address prefix.`},
{name: "cidrnetmask", fn: cidr.NetmaskFunc, descriptionAlt: `Converts an IPv4 address prefix given in CIDR notation into a subnet mask address.`},
{name: "cidrsubnet", fn: cidr.SubnetFunc, descriptionAlt: `Calculates a subnet address within a given IP network address prefix.`},
{name: "cidrsubnets", fn: cidr.SubnetsFunc, descriptionAlt: `Calculates many consecutive subnet addresses at once, rather than just a single subnet extension.`},
{name: "coalesce", fn: stdlib.CoalesceFunc},
{name: "coalescelist", fn: stdlib.CoalesceListFunc},
{name: "compact", fn: stdlib.CompactFunc},
{name: "concat", fn: stdlib.ConcatFunc},
{name: "contains", fn: stdlib.ContainsFunc},
{name: "convert", fn: typeexpr.ConvertFunc, descriptionAlt: `Converts a value to a specified type constraint, using HCL's customdecode extension for type expression support.`},
{name: "csvdecode", fn: stdlib.CSVDecodeFunc},
{name: "dirname", factory: dirnameFunc},
{name: "distinct", fn: stdlib.DistinctFunc},
{name: "divide", fn: stdlib.DivideFunc},
{name: "element", fn: stdlib.ElementFunc},
{name: "equal", fn: stdlib.EqualFunc},
{name: "flatten", fn: stdlib.FlattenFunc},
{name: "floor", fn: stdlib.FloorFunc},
{name: "format", fn: stdlib.FormatFunc},
{name: "formatdate", fn: stdlib.FormatDateFunc},
{name: "formatlist", fn: stdlib.FormatListFunc},
{name: "greaterthan", fn: stdlib.GreaterThanFunc},
{name: "greaterthanorequalto", fn: stdlib.GreaterThanOrEqualToFunc},
{name: "hasindex", fn: stdlib.HasIndexFunc},
{name: "homedir", factory: homedirFunc},
{name: "indent", fn: stdlib.IndentFunc},
{name: "index", fn: stdlib.IndexFunc},
{name: "indexof", factory: indexOfFunc},
{name: "int", fn: stdlib.IntFunc},
{name: "join", fn: stdlib.JoinFunc},
{name: "jsondecode", fn: stdlib.JSONDecodeFunc},
{name: "jsonencode", fn: stdlib.JSONEncodeFunc},
{name: "keys", fn: stdlib.KeysFunc},
{name: "length", fn: stdlib.LengthFunc},
{name: "lessthan", fn: stdlib.LessThanFunc},
{name: "lessthanorequalto", fn: stdlib.LessThanOrEqualToFunc},
{name: "log", fn: stdlib.LogFunc},
{name: "lookup", fn: stdlib.LookupFunc},
{name: "lower", fn: stdlib.LowerFunc},
{name: "max", fn: stdlib.MaxFunc},
{name: "md5", fn: crypto.Md5Func, descriptionAlt: `Computes the MD5 hash of a given string and encodes it with hexadecimal digits.`},
{name: "merge", fn: stdlib.MergeFunc},
{name: "min", fn: stdlib.MinFunc},
{name: "modulo", fn: stdlib.ModuloFunc},
{name: "multiply", fn: stdlib.MultiplyFunc},
{name: "negate", fn: stdlib.NegateFunc},
{name: "not", fn: stdlib.NotFunc},
{name: "notequal", fn: stdlib.NotEqualFunc},
{name: "or", fn: stdlib.OrFunc},
{name: "parseint", fn: stdlib.ParseIntFunc},
{name: "pow", fn: stdlib.PowFunc},
{name: "range", fn: stdlib.RangeFunc},
{name: "regex_replace", fn: stdlib.RegexReplaceFunc},
{name: "regex", fn: stdlib.RegexFunc},
{name: "regexall", fn: stdlib.RegexAllFunc},
{name: "replace", fn: stdlib.ReplaceFunc},
{name: "reverse", fn: stdlib.ReverseFunc},
{name: "reverselist", fn: stdlib.ReverseListFunc},
{name: "rsadecrypt", fn: crypto.RsaDecryptFunc, descriptionAlt: `Decrypts an RSA-encrypted ciphertext.`},
{name: "sanitize", factory: sanitizeFunc},
{name: "sethaselement", fn: stdlib.SetHasElementFunc},
{name: "setintersection", fn: stdlib.SetIntersectionFunc},
{name: "setproduct", fn: stdlib.SetProductFunc},
{name: "setsubtract", fn: stdlib.SetSubtractFunc},
{name: "setsymmetricdifference", fn: stdlib.SetSymmetricDifferenceFunc},
{name: "setunion", fn: stdlib.SetUnionFunc},
{name: "sha1", fn: crypto.Sha1Func, descriptionAlt: `Computes the SHA1 hash of a given string and encodes it with hexadecimal digits.`},
{name: "sha256", fn: crypto.Sha256Func, descriptionAlt: `Computes the SHA256 hash of a given string and encodes it with hexadecimal digits.`},
{name: "sha512", fn: crypto.Sha512Func, descriptionAlt: `Computes the SHA512 hash of a given string and encodes it with hexadecimal digits.`},
{name: "signum", fn: stdlib.SignumFunc},
{name: "slice", fn: stdlib.SliceFunc},
{name: "sort", fn: stdlib.SortFunc},
{name: "split", fn: stdlib.SplitFunc},
{name: "strlen", fn: stdlib.StrlenFunc},
{name: "substr", fn: stdlib.SubstrFunc},
{name: "subtract", fn: stdlib.SubtractFunc},
{name: "timeadd", fn: stdlib.TimeAddFunc},
{name: "timestamp", factory: timestampFunc},
{name: "title", fn: stdlib.TitleFunc},
{name: "trim", fn: stdlib.TrimFunc},
{name: "trimprefix", fn: stdlib.TrimPrefixFunc},
{name: "trimspace", fn: stdlib.TrimSpaceFunc},
{name: "trimsuffix", fn: stdlib.TrimSuffixFunc},
{name: "try", fn: tryfunc.TryFunc, descriptionAlt: `Variadic function that tries to evaluate all of is arguments in sequence until one succeeds, in which case it returns that result, or returns an error if none of them succeed.`},
{name: "upper", fn: stdlib.UpperFunc},
{name: "urlencode", fn: encoding.URLEncodeFunc, descriptionAlt: `Applies URL encoding to a given string.`},
{name: "uuidv4", fn: uuid.V4Func, descriptionAlt: `Generates and returns a Type-4 UUID in the standard hexadecimal string format.`},
{name: "uuidv5", fn: uuid.V5Func, descriptionAlt: `Generates and returns a Type-5 UUID in the standard hexadecimal string format.`},
{name: "values", fn: stdlib.ValuesFunc},
{name: "zipmap", fn: stdlib.ZipmapFunc},
}
// indexOfFunc constructs a function that finds the element index for a given
// value in a list.
func indexOfFunc() function.Function {
return function.New(&function.Spec{
Description: `Finds the element index for a given value in a list.`,
Params: []function.Parameter{
{
Name: "list",
Type: cty.DynamicPseudoType,
},
{
Name: "value",
Type: cty.DynamicPseudoType,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
if !args[0].Type().IsListType() && !args[0].Type().IsTupleType() {
return cty.NilVal, errors.New("argument must be a list or tuple")
}
if !args[0].IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if args[0].LengthInt() == 0 { // Easy path
return cty.NilVal, errors.New("cannot search an empty list")
}
for it := args[0].ElementIterator(); it.Next(); {
i, v := it.Element()
eq, err := stdlib.Equal(v, args[1])
if err != nil {
return cty.NilVal, err
}
if !eq.IsKnown() {
return cty.UnknownVal(cty.Number), nil
}
if eq.True() {
return i, nil
}
}
return cty.NilVal, errors.New("item not found")
},
})
}
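A minimal usage sketch of the constructed function (illustrative values):
idx, err := indexOfFunc().Call([]cty.Value{
cty.TupleVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")}),
cty.StringVal("b"),
})
// on success idx is cty.NumberIntVal(1); a missing value yields the "item not found" error
_, _ = idx, err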
// basenameFunc constructs a function that returns the last element of a path.
func basenameFunc() function.Function {
return function.New(&function.Spec{
Description: `Returns the last element of a path.`,
Params: []function.Parameter{
{
Name: "path",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
return cty.StringVal(path.Base(in)), nil
},
})
}
// dirnameFunc constructs a function that returns the directory of a path.
func dirnameFunc() function.Function {
return function.New(&function.Spec{
Description: `Returns the directory of a path.`,
Params: []function.Parameter{
{
Name: "path",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
return cty.StringVal(path.Dir(in)), nil
},
})
}
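Both helpers wrap the slash-separated path package rather than filepath, so they behave identically across host platforms; a quick sketch:
b, _ := basenameFunc().Call([]cty.Value{cty.StringVal("/foo/bar")}) // cty.StringVal("bar")
d, _ := dirnameFunc().Call([]cty.Value{cty.StringVal("/foo/bar")}) // cty.StringVal("/foo")
_, _ = b, d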
// sanitizeFunc constructs a function that replaces all non-alphanumeric characters with an underscore,
// leaving only characters that are valid for a Bake target name.
func sanitizeFunc() function.Function {
return function.New(&function.Spec{
Description: `Replaces all non-alphanumeric characters with an underscore, leaving only characters that are valid for a Bake target name.`,
Params: []function.Parameter{
{
Name: "name",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
in := args[0].AsString()
// only [a-zA-Z0-9_-]+ is allowed
var b strings.Builder
for _, r := range in {
if r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' || r == '_' || r == '-' {
b.WriteRune(r)
} else {
b.WriteRune('_')
}
}
return cty.StringVal(b.String()), nil
},
})
}
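Because the loop ranges over runes, every disallowed rune collapses to a single underscore, multi-byte ones included; a quick sketch (see also TestSanitize below):
out, _ := sanitizeFunc().Call([]cty.Value{cty.StringVal("foo/🍕bar!")})
// out is cty.StringVal("foo__bar_")
_ = out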
// timestampFunc constructs a function that returns a string representation of the current date and time.
//
// This function was imported from terraform's datetime utilities.
func timestampFunc() function.Function {
return function.New(&function.Spec{
Description: `Returns a string representation of the current date and time.`,
Params: []function.Parameter{},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
},
})
}
// homedirFunc constructs a function that returns the current user's home directory.
func homedirFunc() function.Function {
return function.New(&function.Spec{
Description: `Returns the current user's home directory.`,
Params: []function.Parameter{},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
home, err := os.UserHomeDir()
if err != nil {
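// os.UserHomeDir failed: fall back to the user database, then to the parent of the Docker CLI config directory.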
if home == "" && runtime.GOOS != "windows" {
if u, err := user.Current(); err == nil {
return cty.StringVal(u.HomeDir), nil
}
}
return cty.StringVal(filepath.Dir(config.Dir())), nil
}
return cty.StringVal(home), nil
},
})
}
func Stdlib() map[string]function.Function {
funcs := make(map[string]function.Function, len(stdlibFunctions))
for _, v := range stdlibFunctions {
if v.factory != nil {
funcs[v.name] = v.factory()
} else {
funcs[v.name] = v.fn
}
}
return funcs
}
func StdlibFuncDescription(name string) string {
for _, v := range stdlibFunctions {
if v.name != name {
continue
}
if v.descriptionAlt != "" {
return v.descriptionAlt
}
if v.factory != nil {
return v.factory().Description()
}
return v.fn.Description()
}
return ""
}
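A usage sketch, assuming the table registers indexOfFunc under "indexof" earlier in the list:
funcs := Stdlib()
v, err := funcs["indexof"].Call([]cty.Value{
cty.ListVal([]cty.Value{cty.StringVal("x"), cty.StringVal("y")}),
cty.StringVal("y"),
})
// v is cty.NumberIntVal(1) on success
_ = StdlibFuncDescription("indexof") // resolves via factory().Description()
_, _ = v, err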

View File

@ -0,0 +1,207 @@
package hclparser
import (
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
"github.com/zclconf/go-cty/cty"
)
func TestIndexOf(t *testing.T) {
type testCase struct {
input cty.Value
key cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"index 0": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("one"),
want: cty.NumberIntVal(0),
},
"index 3": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("four"),
want: cty.NumberIntVal(3),
},
"index -1": {
input: cty.TupleVal([]cty.Value{cty.StringVal("one"), cty.NumberIntVal(2.0), cty.NumberIntVal(3), cty.StringVal("four")}),
key: cty.StringVal("3"),
wantErr: true,
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := indexOfFunc().Call([]cty.Value{test.input, test.key})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestBasename(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal("."),
},
"slash": {
input: cty.StringVal("/"),
want: cty.StringVal("/"),
},
"simple": {
input: cty.StringVal("/foo/bar"),
want: cty.StringVal("bar"),
},
"simple no slash": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("bar"),
},
"dot": {
input: cty.StringVal("/foo/bar."),
want: cty.StringVal("bar."),
},
"dotdot": {
input: cty.StringVal("/foo/bar.."),
want: cty.StringVal("bar.."),
},
"dotdotdot": {
input: cty.StringVal("/foo/bar..."),
want: cty.StringVal("bar..."),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := basenameFunc().Call([]cty.Value{test.input})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestDirname(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
wantErr bool
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal("."),
},
"slash": {
input: cty.StringVal("/"),
want: cty.StringVal("/"),
},
"simple": {
input: cty.StringVal("/foo/bar"),
want: cty.StringVal("/foo"),
},
"simple no slash": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("foo"),
},
"dot": {
input: cty.StringVal("/foo/bar."),
want: cty.StringVal("/foo"),
},
"dotdot": {
input: cty.StringVal("/foo/bar.."),
want: cty.StringVal("/foo"),
},
"dotdotdot": {
input: cty.StringVal("/foo/bar..."),
want: cty.StringVal("/foo"),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := dirnameFunc().Call([]cty.Value{test.input})
if test.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
require.Equal(t, test.want, got)
}
})
}
}
func TestSanitize(t *testing.T) {
type testCase struct {
input cty.Value
want cty.Value
}
tests := map[string]testCase{
"empty": {
input: cty.StringVal(""),
want: cty.StringVal(""),
},
"simple": {
input: cty.StringVal("foo/bar"),
want: cty.StringVal("foo_bar"),
},
"simple no slash": {
input: cty.StringVal("foobar"),
want: cty.StringVal("foobar"),
},
"dot": {
input: cty.StringVal("foo/bar."),
want: cty.StringVal("foo_bar_"),
},
"dotdot": {
input: cty.StringVal("foo/bar.."),
want: cty.StringVal("foo_bar__"),
},
"dotdotdot": {
input: cty.StringVal("foo/bar..."),
want: cty.StringVal("foo_bar___"),
},
"utf8": {
input: cty.StringVal("foo/🍕bar"),
want: cty.StringVal("foo__bar"),
},
"symbols": {
input: cty.StringVal("foo/bar!@(ba+z)"),
want: cty.StringVal("foo_bar___ba_z_"),
},
}
for name, test := range tests {
name, test := name, test
t.Run(name, func(t *testing.T) {
got, err := sanitizeFunc().Call([]cty.Value{test.input})
require.NoError(t, err)
require.Equal(t, test.want, got)
})
}
}
func TestHomedir(t *testing.T) {
home, err := homedirFunc().Call(nil)
require.NoError(t, err)
require.NotEmpty(t, home.AsString())
require.True(t, filepath.IsAbs(home.AsString()))
}

View File

@ -0,0 +1,160 @@
// MIT License
//
// Copyright (c) 2017-2018 Martin Atkins
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
package hclparser
import (
"reflect"
"github.com/zclconf/go-cty/cty"
)
// ImpliedType takes an arbitrary Go value (as an interface{}) and attempts
// to find a suitable cty.Type instance that could be used for a conversion
// with ToCtyValue.
//
// This allows -- for simple situations at least -- types to be defined just
// once in Go and the cty types derived from the Go types, but in the process
// it makes some assumptions that may be undesirable so applications are
// encouraged to build their cty types directly if exacting control is
// required.
//
// Not all Go types can be represented as cty types, so an error may be
// returned which is usually considered to be a bug in the calling program.
// In particular, ImpliedType will never use capsule types in its returned
// type, because it cannot know the capsule types supported by the calling
// program.
func ImpliedType(gv any) (cty.Type, error) {
rt := reflect.TypeOf(gv)
var path cty.Path
return impliedType(rt, path)
}
func impliedType(rt reflect.Type, path cty.Path) (cty.Type, error) {
if ety, err := impliedTypeExt(rt, path); err == nil {
return ety, nil
}
switch rt.Kind() {
case reflect.Ptr:
return impliedType(rt.Elem(), path)
// Primitive types
case reflect.Bool:
return cty.Bool, nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return cty.Number, nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return cty.Number, nil
case reflect.Float32, reflect.Float64:
return cty.Number, nil
case reflect.String:
return cty.String, nil
// Collection types
case reflect.Slice:
path := append(path, cty.IndexStep{Key: cty.UnknownVal(cty.Number)})
ety, err := impliedType(rt.Elem(), path)
if err != nil {
return cty.NilType, err
}
return cty.List(ety), nil
case reflect.Map:
if !stringType.AssignableTo(rt.Key()) {
return cty.NilType, path.NewErrorf("no cty.Type for %s (must have string keys)", rt)
}
path := append(path, cty.IndexStep{Key: cty.UnknownVal(cty.String)})
ety, err := impliedType(rt.Elem(), path)
if err != nil {
return cty.NilType, err
}
return cty.Map(ety), nil
// Structural types
case reflect.Struct:
return impliedStructType(rt, path)
default:
return cty.NilType, path.NewErrorf("no cty.Type for %s", rt)
}
}
func impliedStructType(rt reflect.Type, path cty.Path) (cty.Type, error) {
if valueType.AssignableTo(rt) {
// Special case: cty.Value represents cty.DynamicPseudoType, for
// type conformance checking.
return cty.DynamicPseudoType, nil
}
fieldIdxs := structTagIndices(rt)
if len(fieldIdxs) == 0 {
return cty.NilType, path.NewErrorf("no cty.Type for %s (no cty field tags)", rt)
}
atys := make(map[string]cty.Type, len(fieldIdxs))
{
// Temporary extension of path for attributes
path := append(path, nil)
for k, fi := range fieldIdxs {
path[len(path)-1] = cty.GetAttrStep{Name: k}
ft := rt.Field(fi).Type
aty, err := impliedType(ft, path)
if err != nil {
return cty.NilType, err
}
atys[k] = aty
}
}
return cty.Object(atys), nil
}
var (
valueType = reflect.TypeOf(cty.Value{})
stringType = reflect.TypeOf("")
)
// structTagIndices interrogates the fields of the given type (which must
// be a struct type, or we'll panic) and returns a map from the cty
// attribute names declared via struct tags to the indices of the
// fields holding those tags.
//
// This function will panic if two fields within the struct are tagged with
// the same cty attribute name.
func structTagIndices(st reflect.Type) map[string]int {
ct := st.NumField()
ret := make(map[string]int, ct)
for i := range ct {
field := st.Field(i)
attrName := field.Tag.Get("cty")
if attrName != "" {
ret[attrName] = i
}
}
return ret
}
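A sketch of the struct-tag mapping (the type and tags below are hypothetical, not part of this package):
type hypoTarget struct {
Name string `cty:"name"`
Tags []string `cty:"tags"`
}
// ImpliedType(hypoTarget{}) would yield
// cty.Object(map[string]cty.Type{"name": cty.String, "tags": cty.List(cty.String)})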

View File

@ -0,0 +1,166 @@
package hclparser
import (
"reflect"
"sync"
"github.com/containerd/errdefs"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/gocty"
)
type ToCtyValueConverter interface {
// ToCtyValue will convert this capsule value into a native
// cty.Value. This should not return a capsule type.
ToCtyValue() cty.Value
}
type FromCtyValueConverter interface {
// FromCtyValue will initialize this value using a cty.Value.
FromCtyValue(in cty.Value, path cty.Path) error
}
type extensionType int
const (
unwrapCapsuleValueExtension extensionType = iota
)
func impliedTypeExt(rt reflect.Type, _ cty.Path) (cty.Type, error) {
if rt.Kind() != reflect.Pointer {
rt = reflect.PointerTo(rt)
}
if isCapsuleType(rt) {
return capsuleValueCapsuleType(rt), nil
}
return cty.NilType, errdefs.ErrNotImplemented
}
func isCapsuleType(rt reflect.Type) bool {
fromCtyValueType := reflect.TypeFor[FromCtyValueConverter]()
toCtyValueType := reflect.TypeFor[ToCtyValueConverter]()
return rt.Implements(fromCtyValueType) && rt.Implements(toCtyValueType)
}
var capsuleValueTypes sync.Map
func capsuleValueCapsuleType(rt reflect.Type) cty.Type {
if rt.Kind() != reflect.Pointer {
panic("capsule value must be a pointer")
}
elem := rt.Elem()
if val, loaded := capsuleValueTypes.Load(elem); loaded {
return val.(cty.Type)
}
toCtyValueType := reflect.TypeFor[ToCtyValueConverter]()
// First time used. Initialize new capsule ops.
ops := &cty.CapsuleOps{
ConversionTo: func(_ cty.Type) func(cty.Value, cty.Path) (any, error) {
return func(in cty.Value, p cty.Path) (any, error) {
rv := reflect.New(elem).Interface()
if err := rv.(FromCtyValueConverter).FromCtyValue(in, p); err != nil {
return nil, err
}
return rv, nil
}
},
ConversionFrom: func(want cty.Type) func(any, cty.Path) (cty.Value, error) {
return func(in any, _ cty.Path) (cty.Value, error) {
rv := reflect.ValueOf(in).Convert(toCtyValueType)
v := rv.Interface().(ToCtyValueConverter).ToCtyValue()
return convert.Convert(v, want)
}
},
ExtensionData: func(key any) any {
switch key {
case unwrapCapsuleValueExtension:
zero := reflect.Zero(elem).Interface()
if conv, ok := zero.(ToCtyValueConverter); ok {
return conv.ToCtyValue().Type()
}
zero = reflect.Zero(rt).Interface()
if conv, ok := zero.(ToCtyValueConverter); ok {
return conv.ToCtyValue().Type()
}
}
return nil
},
}
// Attempt to store the new type. Use whichever was loaded first in the case
// of a race condition.
ety := cty.CapsuleWithOps(elem.Name(), elem, ops)
val, _ := capsuleValueTypes.LoadOrStore(elem, ety)
return val.(cty.Type)
}
// UnwrapCtyValue will unwrap capsule type values into their native cty value
// equivalents if possible.
func UnwrapCtyValue(in cty.Value) cty.Value {
want := toCtyValueType(in.Type())
if in.Type().Equals(want) {
return in
} else if out, err := convert.Convert(in, want); err == nil {
return out
}
return cty.NullVal(want)
}
func toCtyValueType(in cty.Type) cty.Type {
if et := in.MapElementType(); et != nil {
return cty.Map(toCtyValueType(*et))
}
if et := in.SetElementType(); et != nil {
return cty.Set(toCtyValueType(*et))
}
if et := in.ListElementType(); et != nil {
return cty.List(toCtyValueType(*et))
}
if in.IsObjectType() {
var optional []string
inAttrTypes := in.AttributeTypes()
outAttrTypes := make(map[string]cty.Type, len(inAttrTypes))
for name, typ := range inAttrTypes {
outAttrTypes[name] = toCtyValueType(typ)
if in.AttributeOptional(name) {
optional = append(optional, name)
}
}
return cty.ObjectWithOptionalAttrs(outAttrTypes, optional)
}
if in.IsTupleType() {
inTypes := in.TupleElementTypes()
outTypes := make([]cty.Type, len(inTypes))
for i, typ := range inTypes {
outTypes[i] = toCtyValueType(typ)
}
return cty.Tuple(outTypes)
}
if in.IsCapsuleType() {
if out := in.CapsuleExtensionData(unwrapCapsuleValueExtension); out != nil {
return out.(cty.Type)
}
return cty.DynamicPseudoType
}
return in
}
func ToCtyValue(val any, ty cty.Type) (cty.Value, error) {
out, err := gocty.ToCtyValue(val, ty)
if err != nil {
return out, err
}
return UnwrapCtyValue(out), nil
}
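A sketch of a type that opts in to the capsule machinery (hypothetical; assumes time is imported): implementing FromCtyValueConverter with a pointer receiver and ToCtyValueConverter with a value receiver lets impliedTypeExt derive a capsule type, and lets UnwrapCtyValue flatten it back to a plain cty.String via the extension data.
type hypoDuration struct{ d time.Duration }
func (v *hypoDuration) FromCtyValue(in cty.Value, p cty.Path) error {
d, err := time.ParseDuration(in.AsString())
if err != nil {
return p.NewError(err)
}
v.d = d
return nil
}
func (v hypoDuration) ToCtyValue() cty.Value {
return cty.StringVal(v.d.String())
}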

View File

@ -4,11 +4,15 @@ import (
"archive/tar"
"bytes"
"context"
"os"
"strings"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/progress"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/frontend/dockerui"
@ -17,19 +21,46 @@ import (
"github.com/pkg/errors"
)
const maxBakeDefinitionSize = 2 * 1024 * 1024 // 2 MB
type Input struct {
State *llb.State
URL string
}
func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, names []string, pw progress.Writer) ([]File, *Input, error) {
var session []session.Attachable
var sessions []session.Attachable
var filename string
st, ok := dockerui.DetectGitContext(url, false)
keepGitDir := false
st, ok, err := dockerui.DetectGitContext(url, &keepGitDir)
if ok {
ssh, err := controllerapi.CreateSSH([]*controllerapi.SSH{{ID: "default"}})
if err == nil {
session = append(session, ssh)
if err != nil {
return nil, nil, err
}
if ssh, err := build.CreateSSH([]*buildflags.SSH{{
ID: "default",
Paths: strings.Split(os.Getenv("BUILDX_BAKE_GIT_SSH"), ","),
}}); err == nil {
sessions = append(sessions, ssh)
}
var gitAuthSecrets []*buildflags.Secret
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_TOKEN"); ok {
gitAuthSecrets = append(gitAuthSecrets, &buildflags.Secret{
ID: llb.GitAuthTokenKey,
Env: "BUILDX_BAKE_GIT_AUTH_TOKEN",
})
}
if _, ok := os.LookupEnv("BUILDX_BAKE_GIT_AUTH_HEADER"); ok {
gitAuthSecrets = append(gitAuthSecrets, &buildflags.Secret{
ID: llb.GitAuthHeaderKey,
Env: "BUILDX_BAKE_GIT_AUTH_HEADER",
})
}
if len(gitAuthSecrets) > 0 {
if secrets, err := build.CreateSecrets(gitAuthSecrets); err == nil {
sessions = append(sessions, secrets)
}
}
} else {
st, filename, ok = dockerui.DetectHTTPContext(url)
@ -59,7 +90,7 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
ch, done := progress.NewChannel(pw)
defer func() { <-done }()
_, err = c.Build(ctx, client.SolveOpt{Session: session, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
_, err = c.Build(ctx, client.SolveOpt{Session: sessions, Internal: true}, "buildx", func(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
@ -83,7 +114,6 @@ func ReadRemoteFiles(ctx context.Context, nodes []builder.Node, url string, name
}
return nil, err
}, ch)
if err != nil {
return nil, nil, err
}
@ -155,9 +185,9 @@ func filesFromURLRef(ctx context.Context, c gwclient.Client, ref gwclient.Refere
name := inp.URL
inp.URL = ""
if len(dt) > stat.Size() {
if stat.Size() > 1024*512 {
return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size")
if int64(len(dt)) > stat.Size {
if stat.Size > maxBakeDefinitionSize {
return nil, errors.Errorf("non-archive definition URL bigger than maximum allowed size (%s)", units.HumanSize(maxBakeDefinitionSize))
}
dt, err = ref.ReadFile(ctx, gwclient.ReadRequest{

File diff suppressed because it is too large

View File

@ -4,15 +4,16 @@ import (
"context"
stderrors "errors"
"net"
"slices"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *v1.Platform) (net.Conn, error) {
func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platform *ocispecs.Platform) (net.Conn, error) {
nodes, err := filterAvailableNodes(nodes)
if err != nil {
return nil, err
@ -22,9 +23,9 @@ func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platfor
return nil, errors.New("no nodes available")
}
var pls []v1.Platform
var pls []ocispecs.Platform
if platform != nil {
pls = []v1.Platform{*platform}
pls = []ocispecs.Platform{*platform}
}
opts := map[string]Options{"default": {Platforms: pls}}
@ -37,15 +38,7 @@ func Dial(ctx context.Context, nodes []builder.Node, pw progress.Writer, platfor
for _, ls := range resolved {
for _, rn := range ls {
if platform != nil {
p := *platform
var found bool
for _, pp := range rn.platforms {
if platforms.Only(p).Match(pp) {
found = true
break
}
}
if !found {
if !slices.ContainsFunc(rn.platforms, platforms.Only(*platform).Match) {
continue
}
}

View File

@ -3,8 +3,10 @@ package build
import (
"context"
"fmt"
"slices"
"sync"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
@ -12,7 +14,7 @@ import (
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/util/flightcontrol"
"github.com/moby/buildkit/util/tracing"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"go.opentelemetry.io/otel/trace"
"golang.org/x/sync/errgroup"
@ -21,7 +23,7 @@ import (
type resolvedNode struct {
resolver *nodeResolver
driverIndex int
platforms []specs.Platform
platforms []ocispecs.Platform
}
func (dp resolvedNode) Node() builder.Node {
@ -44,12 +46,24 @@ func (dp resolvedNode) BuildOpts(ctx context.Context) (gateway.BuildOpts, error)
return opts[0], nil
}
type matchMaker func(specs.Platform) platforms.MatchComparer
type matchMaker func(ocispecs.Platform) platforms.MatchComparer
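// cachedGroup pairs a flightcontrol.Group with a per-index cache so that
// expensive per-node results (booted clients, build opts) are computed once
// and reused across resolves.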
type cachedGroup[T any] struct {
g flightcontrol.Group[T]
cache map[int]T
cacheMu sync.Mutex
}
func newCachedGroup[T any]() cachedGroup[T] {
return cachedGroup[T]{
cache: map[int]T{},
}
}
type nodeResolver struct {
nodes []builder.Node
clients flightcontrol.Group[*client.Client]
opt flightcontrol.Group[gateway.BuildOpts]
nodes []builder.Node
clients cachedGroup[*client.Client]
buildOpts cachedGroup[gateway.BuildOpts]
}
func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Options, pw progress.Writer) (map[string][]*resolvedNode, error) {
@ -63,7 +77,9 @@ func resolveDrivers(ctx context.Context, nodes []builder.Node, opt map[string]Op
func newDriverResolver(nodes []builder.Node) *nodeResolver {
r := &nodeResolver{
nodes: nodes,
nodes: nodes,
clients: newCachedGroup[*client.Client](),
buildOpts: newCachedGroup[gateway.BuildOpts](),
}
return r
}
@ -96,7 +112,7 @@ func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw p
return nil, err
}
eg, egCtx := errgroup.WithContext(ctx)
workers := make([][]specs.Platform, len(clients))
workers := make([][]ocispecs.Platform, len(clients))
for i, c := range clients {
i, c := i, c
if c == nil {
@ -108,7 +124,7 @@ func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw p
return errors.Wrap(err, "listing workers")
}
ps := make(map[string]specs.Platform, len(ww))
ps := make(map[string]ocispecs.Platform, len(ww))
for _, w := range ww {
for _, p := range w.Platforms {
pk := platforms.Format(platforms.Normalize(p))
@ -129,7 +145,7 @@ func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw p
// (this time we don't care about imperfect matches)
nodes = map[string][]*resolvedNode{}
for k, opt := range opt {
node, _, err := r.resolve(ctx, opt.Platforms, pw, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
node, _, err := r.resolve(ctx, opt.Platforms, pw, platforms.Only, func(idx int, n builder.Node) []ocispecs.Platform {
return workers[idx]
})
if err != nil {
@ -157,7 +173,7 @@ func (r *nodeResolver) Resolve(ctx context.Context, opt map[string]Options, pw p
return nodes, nil
}
func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw progress.Writer, matcher matchMaker, additional func(idx int, n builder.Node) []specs.Platform) ([]*resolvedNode, bool, error) {
func (r *nodeResolver) resolve(ctx context.Context, ps []ocispecs.Platform, pw progress.Writer, matcher matchMaker, additional func(idx int, n builder.Node) []ocispecs.Platform) ([]*resolvedNode, bool, error) {
if len(r.nodes) == 0 {
return nil, true, nil
}
@ -179,6 +195,7 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
resolver: r,
driverIndex: 0,
})
nodeIdxs = append(nodeIdxs, 0)
} else {
for i, idx := range nodeIdxs {
node := &resolvedNode{
@ -186,7 +203,7 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
driverIndex: idx,
}
if len(ps) > 0 {
node.platforms = []specs.Platform{ps[i]}
node.platforms = []ocispecs.Platform{ps[i]}
}
nodes = append(nodes, node)
}
@ -199,13 +216,13 @@ func (r *nodeResolver) resolve(ctx context.Context, ps []specs.Platform, pw prog
return nodes, perfect, nil
}
func (r *nodeResolver) get(p specs.Platform, matcher matchMaker, additionalPlatforms func(int, builder.Node) []specs.Platform) int {
func (r *nodeResolver) get(p ocispecs.Platform, matcher matchMaker, additionalPlatforms func(int, builder.Node) []ocispecs.Platform) int {
best := -1
bestPlatform := specs.Platform{}
bestPlatform := ocispecs.Platform{}
for i, node := range r.nodes {
platforms := node.Platforms
if additionalPlatforms != nil {
platforms = append([]specs.Platform{}, platforms...)
platforms = slices.Clone(platforms)
platforms = append(platforms, additionalPlatforms(i, node)...)
}
for _, p2 := range platforms {
@ -237,11 +254,24 @@ func (r *nodeResolver) boot(ctx context.Context, idxs []int, pw progress.Writer)
for i, idx := range idxs {
i, idx := i, idx
eg.Go(func() error {
c, err := r.clients.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
c, err := r.clients.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (*client.Client, error) {
if r.nodes[idx].Driver == nil {
return nil, nil
}
return driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
r.clients.cacheMu.Lock()
c, ok := r.clients.cache[idx]
r.clients.cacheMu.Unlock()
if ok {
return c, nil
}
c, err := driver.Boot(ctx, baseCtx, r.nodes[idx].Driver, pw)
if err != nil {
return nil, err
}
r.clients.cacheMu.Lock()
r.clients.cache[idx] = c
r.clients.cacheMu.Unlock()
return c, nil
})
if err != nil {
return err
@ -272,14 +302,25 @@ func (r *nodeResolver) opts(ctx context.Context, idxs []int, pw progress.Writer)
continue
}
eg.Go(func() error {
opt, err := r.opt.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
opt := gateway.BuildOpts{}
opt, err := r.buildOpts.g.Do(ctx, fmt.Sprint(idx), func(ctx context.Context) (gateway.BuildOpts, error) {
r.buildOpts.cacheMu.Lock()
opt, ok := r.buildOpts.cache[idx]
r.buildOpts.cacheMu.Unlock()
if ok {
return opt, nil
}
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
opt = c.BuildOpts()
return nil, nil
}, nil)
if err != nil {
return gateway.BuildOpts{}, err
}
r.buildOpts.cacheMu.Lock()
r.buildOpts.cache[idx] = opt
r.buildOpts.cacheMu.Unlock()
return opt, err
})
if err != nil {

View File

@ -5,43 +5,43 @@ import (
"sort"
"testing"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/require"
)
func TestFindDriverSanity(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.OnlyStrict, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.DefaultSpec()}, nil, platforms.OnlyStrict, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
require.Equal(t, []specs.Platform{platforms.DefaultSpec()}, res[0].platforms)
require.Equal(t, []ocispecs.Platform{platforms.DefaultSpec()}, res[0].platforms)
}
func TestFindDriverEmpty(t *testing.T) {
r := makeTestResolver(nil)
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.DefaultSpec()}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.DefaultSpec()}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Nil(t, res)
}
func TestFindDriverWeirdName(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/foobar")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/foobar")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/foobar")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -50,11 +50,11 @@ func TestFindDriverWeirdName(t *testing.T) {
}
func TestFindDriverUnknown(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
@ -63,13 +63,13 @@ func TestFindDriverUnknown(t *testing.T) {
}
func TestSelectNodeSinglePlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
// find first platform
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -77,7 +77,7 @@ func TestSelectNodeSinglePlatform(t *testing.T) {
require.Equal(t, "aaa", res[0].Node().Builder)
// find second platform
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -85,7 +85,7 @@ func TestSelectNodeSinglePlatform(t *testing.T) {
require.Equal(t, "bbb", res[0].Node().Builder)
// find an unknown platform, should match the first driver
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/s390x")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/s390x")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
@ -94,26 +94,26 @@ func TestSelectNodeSinglePlatform(t *testing.T) {
}
func TestSelectNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/amd64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, 0, res[0].driverIndex)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -122,27 +122,27 @@ func TestSelectNodeMultiPlatform(t *testing.T) {
}
func TestSelectNodeNonStrict(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
})
// arm64 should match itself
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
// arm64 may support arm/v7
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -150,19 +150,19 @@ func TestSelectNodeNonStrict(t *testing.T) {
}
func TestSelectNodeNonStrictARM(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm64")},
"ccc": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "ccc", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -170,20 +170,20 @@ func TestSelectNodeNonStrictARM(t *testing.T) {
}
func TestSelectNodeNonStrictLower(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
// v8 can't be built on v7 (so we should select the default)...
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v8")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.False(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
// ...but v6 can be built on v8
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v6")}, nil, platforms.Only, nil)
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v6")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -191,13 +191,13 @@ func TestSelectNodeNonStrictLower(t *testing.T) {
}
func TestSelectNodePreferStart(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/riscv64")},
"ccc": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/riscv64")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -205,12 +205,12 @@ func TestSelectNodePreferStart(t *testing.T) {
}
func TestSelectNodePreferExact(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/arm/v8")},
"bbb": {platforms.MustParse("linux/arm/v7")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -218,12 +218,12 @@ func TestSelectNodePreferExact(t *testing.T) {
}
func TestSelectNodeNoPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/foobar")},
"bbb": {platforms.DefaultSpec()},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
@ -232,20 +232,20 @@ func TestSelectNodeNoPlatform(t *testing.T) {
}
func TestSelectNodeAdditionalPlatforms(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/arm/v8")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, nil)
require.NoError(t, err)
require.True(t, perfect)
require.Len(t, res, 1)
require.Equal(t, "bbb", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, func(idx int, n builder.Node) []specs.Platform {
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}, nil, platforms.Only, func(idx int, n builder.Node) []ocispecs.Platform {
if n.Builder == "aaa" {
return []specs.Platform{platforms.MustParse("linux/arm/v7")}
return []ocispecs.Platform{platforms.MustParse("linux/arm/v7")}
}
return nil
})
@ -256,12 +256,12 @@ func TestSelectNodeAdditionalPlatforms(t *testing.T) {
}
func TestSplitNodeMultiPlatform(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/arm64")},
"bbb": {platforms.MustParse("linux/riscv64")},
})
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/arm64"),
}, nil, platforms.Only, nil)
@ -270,7 +270,7 @@ func TestSplitNodeMultiPlatform(t *testing.T) {
require.Len(t, res, 1)
require.Equal(t, "aaa", res[0].Node().Builder)
res, perfect, err = r.resolve(context.TODO(), []specs.Platform{
res, perfect, err = r.resolve(context.TODO(), []ocispecs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
@ -282,14 +282,14 @@ func TestSplitNodeMultiPlatform(t *testing.T) {
}
func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
r := makeTestResolver(map[string][]specs.Platform{
r := makeTestResolver(map[string][]ocispecs.Platform{
"aaa": {platforms.MustParse("linux/amd64")},
"bbb": {platforms.MustParse("linux/amd64"), platforms.MustParse("linux/riscv64")},
})
// the "best" choice would be the node with both platforms, but we're using
// a naive algorithm that doesn't try to unify the platforms
res, perfect, err := r.resolve(context.TODO(), []specs.Platform{
res, perfect, err := r.resolve(context.TODO(), []ocispecs.Platform{
platforms.MustParse("linux/amd64"),
platforms.MustParse("linux/riscv64"),
}, nil, platforms.Only, nil)
@ -300,7 +300,7 @@ func TestSplitNodeMultiPlatformNoUnify(t *testing.T) {
require.Equal(t, "bbb", res[1].Node().Builder)
}
func makeTestResolver(nodes map[string][]specs.Platform) *nodeResolver {
func makeTestResolver(nodes map[string][]ocispecs.Platform) *nodeResolver {
var ns []builder.Node
for name, platforms := range nodes {
ns = append(ns, builder.Node{

View File

@ -2,6 +2,7 @@ package build
import (
"context"
"maps"
"os"
"path"
"path/filepath"
@ -11,16 +12,25 @@ import (
"github.com/docker/buildx/util/gitutil"
"github.com/docker/buildx/util/osutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
const DockerfileLabel = "com.docker.image.source.entrypoint"
func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (map[string]string, func(*client.SolveOpt), error) {
res := make(map[string]string)
type gitAttrsAppendFunc func(so *client.SolveOpt)
func gitAppendNoneFunc(_ *client.SolveOpt) {}
func getGitAttributes(ctx context.Context, contextPath, dockerfilePath string) (f gitAttrsAppendFunc, err error) {
defer func() {
if f == nil {
f = gitAppendNoneFunc
}
}()
if contextPath == "" {
return nil, nil, nil
return nil, nil
}
setGitLabels := false
@ -39,7 +49,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
if !setGitLabels && !setGitInfo {
return nil, nil, nil
return nil, nil
}
// figure out in which directory the git command needs to run in
@ -54,25 +64,27 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
if err != nil {
if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
return res, nil, errors.Wrap(err, "git was not found in the system")
return nil, errors.Wrap(err, "git was not found in the system")
}
return nil, nil, nil
return nil, nil
}
if !gitc.IsInsideWorkTree() {
if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
return res, nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
return nil, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
}
return nil, nil, nil
return nil, nil
}
root, err := gitc.RootDir()
if err != nil {
return res, nil, errors.Wrap(err, "failed to get git root dir")
return nil, errors.Wrap(err, "failed to get git root dir")
}
res := make(map[string]string)
if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
return res, nil, errors.Wrap(err, "failed to get git commit")
return nil, errors.Wrap(err, "failed to get git commit")
} else if sha != "" {
checkDirty := false
if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
@ -84,7 +96,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
sha += "-dirty"
}
if setGitLabels {
res["label:"+specs.AnnotationRevision] = sha
res["label:"+ocispecs.AnnotationRevision] = sha
}
if setGitInfo {
res["vcs:revision"] = sha
@ -93,7 +105,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
if rurl, err := gitc.RemoteURL(); err == nil && rurl != "" {
if setGitLabels {
res["label:"+specs.AnnotationSource] = rurl
res["label:"+ocispecs.AnnotationSource] = rurl
}
if setGitInfo {
res["vcs:source"] = rurl
@ -112,12 +124,22 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
}
return res, func(so *client.SolveOpt) {
return func(so *client.SolveOpt) {
if so.FrontendAttrs == nil {
so.FrontendAttrs = make(map[string]string)
}
maps.Copy(so.FrontendAttrs, res)
if !setGitInfo || root == "" {
return
}
for k, dir := range so.LocalDirs {
dir, err = filepath.EvalSymlinks(dir)
for key, mount := range so.LocalMounts {
fs, ok := mount.(*fs)
if !ok {
continue
}
dir, err := filepath.EvalSymlinks(fs.dir) // keep same behavior as fsutil.NewFS
if err != nil {
continue
}
@ -130,7 +152,7 @@ func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath st
}
dir = osutil.SanitizePath(dir)
if r, err := filepath.Rel(root, dir); err == nil && !strings.HasPrefix(r, "..") {
so.FrontendAttrs["vcs:localdir:"+k] = r
so.FrontendAttrs["vcs:localdir:"+key] = r
}
}
}, nil

View File

@ -9,46 +9,49 @@ import (
"testing"
"github.com/docker/buildx/util/gitutil"
"github.com/docker/buildx/util/gitutil/gittestutil"
"github.com/moby/buildkit/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func setupTest(tb testing.TB) {
gitutil.Mktmp(tb)
gittestutil.Mktmp(tb)
c, err := gitutil.New()
require.NoError(tb, err)
gitutil.GitInit(c, tb)
gittestutil.GitInit(c, tb)
df := []byte("FROM alpine:latest\n")
assert.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
require.NoError(tb, os.WriteFile("Dockerfile", df, 0644))
gitutil.GitAdd(c, tb, "Dockerfile")
gitutil.GitCommit(c, tb, "initial commit")
gitutil.GitSetRemote(c, tb, "origin", "git@github.com:docker/buildx.git")
gittestutil.GitAdd(c, tb, "Dockerfile")
gittestutil.GitCommit(c, tb, "initial commit")
gittestutil.GitSetRemote(c, tb, "origin", "git@github.com:docker/buildx.git")
}
func TestGetGitAttributesNotGitRepo(t *testing.T) {
_, _, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
assert.NoError(t, err)
_, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
require.NoError(t, err)
}
func TestGetGitAttributesBadGitRepo(t *testing.T) {
tmp := t.TempDir()
require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))
_, _, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
_, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
assert.Error(t, err)
}
func TestGetGitAttributesNoContext(t *testing.T) {
setupTest(t)
gitattrs, _, err := getGitAttributes(context.Background(), "", "Dockerfile")
assert.NoError(t, err)
assert.Empty(t, gitattrs)
addGitAttrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
require.NoError(t, err)
var so client.SolveOpt
addGitAttrs(&so)
assert.Empty(t, so.FrontendAttrs)
}
func TestGetGitAttributes(t *testing.T) {
@ -88,8 +91,8 @@ func TestGetGitAttributes(t *testing.T) {
envGitInfo: "false",
expected: []string{
"label:" + DockerfileLabel,
"label:" + specs.AnnotationRevision,
"label:" + specs.AnnotationSource,
"label:" + ocispecs.AnnotationRevision,
"label:" + ocispecs.AnnotationSource,
},
},
{
@ -98,15 +101,14 @@ func TestGetGitAttributes(t *testing.T) {
envGitInfo: "",
expected: []string{
"label:" + DockerfileLabel,
"label:" + specs.AnnotationRevision,
"label:" + specs.AnnotationSource,
"label:" + ocispecs.AnnotationRevision,
"label:" + ocispecs.AnnotationSource,
"vcs:revision",
"vcs:source",
},
},
}
for _, tt := range cases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
setupTest(t)
if tt.envGitLabels != "" {
@ -115,15 +117,18 @@ func TestGetGitAttributes(t *testing.T) {
if tt.envGitInfo != "" {
t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
}
gitattrs, _, err := getGitAttributes(context.Background(), ".", "Dockerfile")
addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
var so client.SolveOpt
addGitAttrs(&so)
for _, e := range tt.expected {
assert.Contains(t, gitattrs, e)
assert.NotEmpty(t, gitattrs[e])
if e == "label:"+DockerfileLabel {
assert.Equal(t, "Dockerfile", gitattrs[e])
} else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs[e])
assert.Contains(t, so.FrontendAttrs, e)
assert.NotEmpty(t, so.FrontendAttrs[e])
switch e {
case "label:" + DockerfileLabel:
assert.Equal(t, "Dockerfile", so.FrontendAttrs[e])
case "label:" + ocispecs.AnnotationSource, "vcs:source":
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs[e])
}
}
})
@ -140,20 +145,25 @@ func TestGetGitAttributesDirty(t *testing.T) {
require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))
t.Setenv("BUILDX_GIT_LABELS", "true")
gitattrs, _, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
assert.Equal(t, 5, len(gitattrs))
addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
assert.Equal(t, "Dockerfile", gitattrs["label:"+DockerfileLabel])
assert.Contains(t, gitattrs, "label:"+specs.AnnotationSource)
assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["label:"+specs.AnnotationSource])
assert.Contains(t, gitattrs, "label:"+specs.AnnotationRevision)
assert.True(t, strings.HasSuffix(gitattrs["label:"+specs.AnnotationRevision], "-dirty"))
var so client.SolveOpt
addGitAttrs(&so)
assert.Contains(t, gitattrs, "vcs:source")
assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["vcs:source"])
assert.Contains(t, gitattrs, "vcs:revision")
assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
assert.Equal(t, 5, len(so.FrontendAttrs))
assert.Contains(t, so.FrontendAttrs, "label:"+DockerfileLabel)
assert.Equal(t, "Dockerfile", so.FrontendAttrs["label:"+DockerfileLabel])
assert.Contains(t, so.FrontendAttrs, "label:"+ocispecs.AnnotationSource)
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["label:"+ocispecs.AnnotationSource])
assert.Contains(t, so.FrontendAttrs, "label:"+ocispecs.AnnotationRevision)
assert.True(t, strings.HasSuffix(so.FrontendAttrs["label:"+ocispecs.AnnotationRevision], "-dirty"))
assert.Contains(t, so.FrontendAttrs, "vcs:source")
assert.Equal(t, "git@github.com:docker/buildx.git", so.FrontendAttrs["vcs:source"])
assert.Contains(t, so.FrontendAttrs, "vcs:revision")
assert.True(t, strings.HasSuffix(so.FrontendAttrs["vcs:revision"], "-dirty"))
}
func TestLocalDirs(t *testing.T) {
@ -161,53 +171,52 @@ func TestLocalDirs(t *testing.T) {
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{
"context": ".",
"dockerfile": ".",
},
}
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "Dockerfile")
addGitAttrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
require.NoError(t, err)
require.NotNil(t, addVCSLocalDir)
addVCSLocalDir(so)
require.NoError(t, setLocalMount("context", ".", so))
require.NoError(t, setLocalMount("dockerfile", ".", so))
addGitAttrs(so)
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:dockerfile"])
}
func TestLocalDirsSub(t *testing.T) {
gitutil.Mktmp(t)
gittestutil.Mktmp(t)
c, err := gitutil.New()
require.NoError(t, err)
gitutil.GitInit(c, t)
gittestutil.GitInit(c, t)
df := []byte("FROM alpine:latest\n")
assert.NoError(t, os.MkdirAll("app", 0755))
assert.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
require.NoError(t, os.MkdirAll("app", 0755))
require.NoError(t, os.WriteFile("app/Dockerfile", df, 0644))
gitutil.GitAdd(c, t, "app/Dockerfile")
gitutil.GitCommit(c, t, "initial commit")
gitutil.GitSetRemote(c, t, "origin", "git@github.com:docker/buildx.git")
gittestutil.GitAdd(c, t, "app/Dockerfile")
gittestutil.GitCommit(c, t, "initial commit")
gittestutil.GitSetRemote(c, t, "origin", "git@github.com:docker/buildx.git")
so := &client.SolveOpt{
FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{
"context": ".",
"dockerfile": "app",
},
}
require.NoError(t, setLocalMount("context", ".", so))
require.NoError(t, setLocalMount("dockerfile", "app", so))
_, addVCSLocalDir, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
addGitAttrs, err := getGitAttributes(context.Background(), ".", "app/Dockerfile")
require.NoError(t, err)
require.NotNil(t, addVCSLocalDir)
addVCSLocalDir(so)
addGitAttrs(so)
require.Contains(t, so.FrontendAttrs, "vcs:localdir:context")
assert.Equal(t, ".", so.FrontendAttrs["vcs:localdir:context"])
require.Contains(t, so.FrontendAttrs, "vcs:localdir:dockerfile")
assert.Equal(t, "app", so.FrontendAttrs["vcs:localdir:dockerfile"])
}

View File

@ -8,15 +8,56 @@ import (
"sync/atomic"
"syscall"
controllerapi "github.com/docker/buildx/controller/pb"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type InvokeConfig struct {
Entrypoint []string `json:"entrypoint,omitempty"`
Cmd []string `json:"cmd,omitempty"`
NoCmd bool `json:"noCmd,omitempty"`
Env []string `json:"env,omitempty"`
User string `json:"user,omitempty"`
NoUser bool `json:"noUser,omitempty"`
Cwd string `json:"cwd,omitempty"`
NoCwd bool `json:"noCwd,omitempty"`
Tty bool `json:"tty,omitempty"`
Rollback bool `json:"rollback,omitempty"`
Initial bool `json:"initial,omitempty"`
SuspendOn SuspendOn `json:"suspendOn,omitempty"`
}
func (cfg *InvokeConfig) NeedsDebug(err error) bool {
return cfg.SuspendOn.DebugEnabled(err)
}
type SuspendOn int
const (
SuspendError SuspendOn = iota
SuspendAlways
)
func (s SuspendOn) DebugEnabled(err error) bool {
return err != nil || s == SuspendAlways
}
func (s *SuspendOn) UnmarshalText(text []byte) error {
switch string(text) {
case "error":
*s = SuspendError
case "always":
*s = SuspendAlways
default:
return errors.Errorf("unknown suspend name: %s", string(text))
}
return nil
}
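Since SuspendOn implements encoding.TextUnmarshaler, it can be decoded straight from textual config; a small sketch:
var s SuspendOn
_ = s.UnmarshalText([]byte("always"))
enabled := s.DebugEnabled(nil) // true: suspend even when the build succeeded
_ = enabled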
type Container struct {
cancelOnce sync.Once
containerCancel func()
containerCancel func(error)
isUnavailable atomic.Bool
initStarted atomic.Bool
container gateway.Container
@ -24,29 +65,21 @@ type Container struct {
resultCtx *ResultHandle
}
func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig) (*Container, error) {
func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *InvokeConfig) (*Container, error) {
mainCtx := ctx
ctrCh := make(chan *Container)
errCh := make(chan error)
ctrCh := make(chan *Container, 1)
errCh := make(chan error, 1)
go func() {
-err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
-ctx, cancel := context.WithCancel(ctx)
-go func() {
-<-mainCtx.Done()
-cancel()
-}()
+err := func() error {
+containerCtx, containerCancel := context.WithCancelCause(ctx)
+defer containerCancel(errors.WithStack(context.Canceled))
-containerCfg, err := resultCtx.getContainerConfig(ctx, c, cfg)
+bkContainer, err := resultCtx.NewContainer(containerCtx, cfg)
if err != nil {
-return nil, err
-}
-containerCtx, containerCancel := context.WithCancel(ctx)
-defer containerCancel()
-bkContainer, err := c.NewContainer(containerCtx, containerCfg)
-if err != nil {
-return nil, err
+return err
}
releaseCh := make(chan struct{})
container := &Container{
containerCancel: containerCancel,
@ -63,8 +96,8 @@ func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllera
ctrCh <- container
<-container.releaseCh
-return nil, bkContainer.Release(ctx)
-})
+return bkContainer.Release(ctx)
+}()
if err != nil {
errCh <- err
}
@ -83,7 +116,7 @@ func (c *Container) Cancel() {
c.markUnavailable()
c.cancelOnce.Do(func() {
if c.containerCancel != nil {
-c.containerCancel()
+c.containerCancel(errors.WithStack(context.Canceled))
}
close(c.releaseCh)
})
@ -97,7 +130,7 @@ func (c *Container) markUnavailable() {
c.isUnavailable.Store(true)
}
-func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
+func (c *Container) Exec(ctx context.Context, cfg *InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
if isInit := c.initStarted.CompareAndSwap(false, true); isInit {
defer func() {
// container can't be used after init exits
@ -112,7 +145,7 @@ func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, s
return err
}
-func exec(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
+func exec(ctx context.Context, resultCtx *ResultHandle, cfg *InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
processCfg, err := resultCtx.getProcessConfig(cfg, stdin, stdout, stderr)
if err != nil {
return err
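
One note on the new InvokeConfig plumbing above: SuspendOn implements encoding.TextUnmarshaler, so the debugger's suspend setting can be parsed straight from text. A tiny usage sketch (illustrative, not part of the diff):

var s SuspendOn
if err := s.UnmarshalText([]byte("always")); err != nil {
    // only "error" and "always" parse; anything else is rejected
}
_ = s.DebugEnabled(nil)            // true: SuspendAlways suspends even without an error
_ = SuspendError.DebugEnabled(nil) // false: suspend only when err != nil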

View File

@ -5,39 +5,40 @@ import (
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/confutil"
"github.com/moby/buildkit/client"
)
-func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, configDir string) error {
+func saveLocalState(so *client.SolveOpt, target string, opts Options, node builder.Node, cfg *confutil.Config) error {
var err error
-if so.Ref == "" {
+if so.Ref == "" || opts.CallFunc != nil {
return nil
}
lp := opts.Inputs.ContextPath
dp := opts.Inputs.DockerfilePath
if lp != "" || dp != "" {
if lp != "" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if dp != "" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
}
l, err := localstate.New(configDir)
if dp != "" && !IsRemoteURL(lp) && lp != "-" && dp != "-" {
dp, err = filepath.Abs(dp)
if err != nil {
return err
}
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
}
return nil
if lp != "" && !IsRemoteURL(lp) && lp != "-" {
lp, err = filepath.Abs(lp)
if err != nil {
return err
}
}
if lp == "" && dp == "" {
return nil
}
l, err := localstate.New(cfg)
if err != nil {
return err
}
return l.SaveRef(node.Builder, node.Name, so.Ref, localstate.State{
Target: target,
LocalPath: lp,
DockerfilePath: dp,
GroupRef: opts.GroupRef,
})
}
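
For reference, a sketch of the record this function persists once the guards pass (values illustrative; the State fields are the ones shown above):

st := localstate.State{
    Target:         "release",                  // build target, if any
    LocalPath:      "/home/user/app",           // absolute context path; "-" and remote URLs are skipped above
    DockerfilePath: "/home/user/app/Dockerfile",
    GroupRef:       opts.GroupRef,              // set when the build belongs to a bake group
}
// l.SaveRef(node.Builder, node.Name, so.Ref, st) ties the build ref to these paths.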

929
build/opt.go Normal file
View File

@ -0,0 +1,929 @@
package build
import (
"bytes"
"context"
"io"
"maps"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"syscall"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/containerd/console"
"github.com/containerd/containerd/v2/core/content"
"github.com/containerd/containerd/v2/plugins/content/local"
"github.com/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/client/ociindex"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/secrets/secretsprovider"
"github.com/moby/buildkit/session/sshforward/sshprovider"
"github.com/moby/buildkit/session/upload/uploadprovider"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/moby/buildkit/util/entitlements"
"github.com/moby/buildkit/util/gitutil"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/tonistiigi/fsutil"
)
func toSolveOpt(ctx context.Context, node builder.Node, multiDriver bool, opt *Options, bopts gateway.BuildOpts, cfg *confutil.Config, pw progress.Writer, docker *dockerutil.Client) (_ *client.SolveOpt, release func(), err error) {
nodeDriver := node.Driver
defers := make([]func(), 0, 2)
releaseF := func() {
for _, f := range defers {
f()
}
}
defer func() {
if err != nil {
releaseF()
}
}()
// inline cache from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_CACHE"]; ok {
if v, _ := strconv.ParseBool(v); v {
opt.CacheTo = append(opt.CacheTo, client.CacheOptionsEntry{
Type: "inline",
Attrs: map[string]string{},
})
}
}
for _, e := range opt.CacheTo {
if e.Type != "inline" && !nodeDriver.Features(ctx)[driver.CacheExport] {
return nil, nil, notSupported(driver.CacheExport, nodeDriver, "https://docs.docker.com/go/build-cache-backends/")
}
}
cacheTo := make([]client.CacheOptionsEntry, 0, len(opt.CacheTo))
for _, e := range opt.CacheTo {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheTo = append(cacheTo, e)
}
cacheFrom := make([]client.CacheOptionsEntry, 0, len(opt.CacheFrom))
for _, e := range opt.CacheFrom {
if e.Type == "gha" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.gha")) {
continue
}
} else if e.Type == "s3" {
if !bopts.LLBCaps.Contains(apicaps.CapID("cache.s3")) {
continue
}
}
cacheFrom = append(cacheFrom, e)
}
so := client.SolveOpt{
Ref: opt.Ref,
Frontend: "dockerfile.v0",
FrontendAttrs: map[string]string{},
LocalMounts: map[string]fsutil.FS{},
CacheExports: cacheTo,
CacheImports: cacheFrom,
AllowedEntitlements: opt.Allow,
SourcePolicy: opt.SourcePolicy,
}
if opt.CgroupParent != "" {
so.FrontendAttrs["cgroup-parent"] = opt.CgroupParent
}
if v, ok := opt.BuildArgs["BUILDKIT_SYNTAX"]; ok {
p := strings.SplitN(strings.TrimSpace(v), " ", 2)
so.Frontend = "gateway.v0"
so.FrontendAttrs["source"] = p[0]
so.FrontendAttrs["cmdline"] = v
}
if v, ok := opt.BuildArgs["BUILDKIT_MULTI_PLATFORM"]; ok {
if v, _ := strconv.ParseBool(v); v {
so.FrontendAttrs["multi-platform"] = "true"
}
}
if multiDriver {
// force creation of manifest list
so.FrontendAttrs["multi-platform"] = "true"
}
attests := make(map[string]string)
for k, v := range opt.Attests {
if v != nil {
attests[k] = *v
}
}
supportAttestations := bopts.LLBCaps.Contains(apicaps.CapID("exporter.image.attestations")) && nodeDriver.Features(ctx)[driver.MultiPlatform]
if len(attests) > 0 {
if !supportAttestations {
if !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported("Attestation", nodeDriver, "https://docs.docker.com/go/attestations/")
}
return nil, nil, errors.Errorf("Attestations are not supported by the current BuildKit daemon")
}
for k, v := range attests {
so.FrontendAttrs["attest:"+k] = v
}
}
if _, ok := opt.Attests["provenance"]; !ok && supportAttestations {
const noAttestEnv = "BUILDX_NO_DEFAULT_ATTESTATIONS"
var noProv bool
if v, ok := os.LookupEnv(noAttestEnv); ok {
noProv, err = strconv.ParseBool(v)
if err != nil {
return nil, nil, errors.Wrap(err, "invalid "+noAttestEnv)
}
}
if !noProv {
so.FrontendAttrs["attest:provenance"] = "mode=min,inline-only=true"
}
}
switch len(opt.Exports) {
case 1:
// valid
case 0:
if !noDefaultLoad() && opt.CallFunc == nil {
if nodeDriver.IsMobyDriver() {
// backwards compat for docker driver only:
// this ensures the build results in a docker image.
opt.Exports = []client.ExportEntry{{Type: "image", Attrs: map[string]string{}}}
} else if nodeDriver.Features(ctx)[driver.DefaultLoad] {
opt.Exports = []client.ExportEntry{{Type: "docker", Attrs: map[string]string{}}}
}
}
default:
if err := bopts.LLBCaps.Supports(pb.CapMultipleExporters); err != nil {
return nil, nil, errors.Errorf("multiple outputs currently unsupported by the current BuildKit daemon, please upgrade to version v0.13+ or use a single output")
}
}
// check if index annotations are supported by docker driver
if len(opt.Exports) > 0 && opt.CallFunc == nil && len(opt.Annotations) > 0 && nodeDriver.IsMobyDriver() && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
for _, exp := range opt.Exports {
if exp.Type == "image" || exp.Type == "docker" {
for ak := range opt.Annotations {
switch ak.Type {
case exptypes.AnnotationIndex, exptypes.AnnotationIndexDescriptor:
return nil, nil, errors.New("index annotations not supported for single platform export")
}
}
}
}
}
// fill in image exporter names from tags
if len(opt.Tags) > 0 {
tags := make([]string, len(opt.Tags))
for i, tag := range opt.Tags {
ref, err := reference.Parse(tag)
if err != nil {
return nil, nil, errors.Wrapf(err, "invalid tag %q", tag)
}
tags[i] = ref.String()
}
for i, e := range opt.Exports {
switch e.Type {
case "image", "oci", "docker":
opt.Exports[i].Attrs["name"] = strings.Join(tags, ",")
}
}
} else {
for _, e := range opt.Exports {
if e.Type == "image" && e.Attrs["name"] == "" && e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
return nil, nil, errors.Errorf("tag is needed when pushing to registry")
}
}
}
}
// cacheonly is a fake exporter to opt out of default behaviors
exports := make([]client.ExportEntry, 0, len(opt.Exports))
for _, e := range opt.Exports {
if e.Type != "cacheonly" {
exports = append(exports, e)
}
}
opt.Exports = exports
// set up exporters
for i, e := range opt.Exports {
if e.Type == "oci" && !nodeDriver.Features(ctx)[driver.OCIExporter] {
return nil, nil, notSupported(driver.OCIExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
if e.Type == "docker" {
features := docker.Features(ctx, e.Attrs["context"])
if features[dockerutil.OCIImporter] && e.Output == nil {
// rely on oci importer if available (which supports
// multi-platform images), otherwise fall back to docker
opt.Exports[i].Type = "oci"
} else if len(opt.Platforms) > 1 || len(attests) > 0 {
if e.Output != nil {
return nil, nil, errors.Errorf("docker exporter does not support exporting manifest lists, use the oci exporter instead")
}
return nil, nil, errors.Errorf("docker exporter does not currently support exporting manifest lists")
}
if e.Output == nil {
if nodeDriver.IsMobyDriver() {
e.Type = "image"
} else {
w, cancel, err := docker.LoadImage(ctx, e.Attrs["context"], pw)
if err != nil {
return nil, nil, err
}
defers = append(defers, cancel)
opt.Exports[i].Output = func(_ map[string]string) (io.WriteCloser, error) {
return w, nil
}
// if docker is using the containerd snapshotter, prefer to export the image digest
// (rather than the image config digest). See https://github.com/moby/moby/issues/45458.
if features[dockerutil.OCIImporter] {
opt.Exports[i].Attrs["prefer-image-digest"] = "true"
}
}
} else if !nodeDriver.Features(ctx)[driver.DockerExporter] {
return nil, nil, notSupported(driver.DockerExporter, nodeDriver, "https://docs.docker.com/go/build-exporters/")
}
}
if e.Type == "image" && nodeDriver.IsMobyDriver() {
opt.Exports[i].Type = "moby"
if e.Attrs["push"] != "" {
if ok, _ := strconv.ParseBool(e.Attrs["push"]); ok {
if ok, _ := strconv.ParseBool(e.Attrs["push-by-digest"]); ok {
return nil, nil, errors.Errorf("push-by-digest is currently not implemented for docker driver, please create a new builder instance")
}
}
}
}
if e.Type == "docker" || e.Type == "image" || e.Type == "oci" {
// inline buildinfo attrs from build arg
if v, ok := opt.BuildArgs["BUILDKIT_INLINE_BUILDINFO_ATTRS"]; ok {
opt.Exports[i].Attrs["buildinfo-attrs"] = v
}
}
}
so.Exports = opt.Exports
so.Session = slices.Clone(opt.Session)
releaseLoad, err := loadInputs(ctx, nodeDriver, &opt.Inputs, pw, &so)
if err != nil {
return nil, nil, err
}
defers = append(defers, releaseLoad)
// add node identifier to shared key if one was specified
if so.SharedKey != "" {
so.SharedKey += ":" + cfg.TryNodeIdentifier()
}
if opt.Pull {
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModeForcePull
} else if nodeDriver.IsMobyDriver() {
// moby driver always resolves local images by default
so.FrontendAttrs["image-resolve-mode"] = pb.AttrImageResolveModePreferLocal
}
if opt.Target != "" {
so.FrontendAttrs["target"] = opt.Target
}
if len(opt.NoCacheFilter) > 0 {
so.FrontendAttrs["no-cache"] = strings.Join(opt.NoCacheFilter, ",")
}
if opt.NoCache {
so.FrontendAttrs["no-cache"] = ""
}
for k, v := range opt.BuildArgs {
so.FrontendAttrs["build-arg:"+k] = v
}
for k, v := range opt.Labels {
so.FrontendAttrs["label:"+k] = v
}
for k, v := range node.ProxyConfig {
if _, ok := opt.BuildArgs[k]; !ok {
so.FrontendAttrs["build-arg:"+k] = v
}
}
// set platforms
if len(opt.Platforms) != 0 {
pp := make([]string, len(opt.Platforms))
for i, p := range opt.Platforms {
pp[i] = platforms.Format(p)
}
if len(pp) > 1 && !nodeDriver.Features(ctx)[driver.MultiPlatform] {
return nil, nil, notSupported(driver.MultiPlatform, nodeDriver, "https://docs.docker.com/go/build-multi-platform/")
}
so.FrontendAttrs["platform"] = strings.Join(pp, ",")
}
// setup networkmode
switch opt.NetworkMode {
case "host":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost.String())
case "none":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
case "", "default":
default:
return nil, nil, errors.Errorf("network mode %q not supported by buildkit - you can define a custom network for your builder using the network driver-opt in buildx create", opt.NetworkMode)
}
// setup extrahosts
extraHosts, err := toBuildkitExtraHosts(ctx, opt.ExtraHosts, nodeDriver)
if err != nil {
return nil, nil, err
}
if len(extraHosts) > 0 {
so.FrontendAttrs["add-hosts"] = extraHosts
}
// setup shm size
if opt.ShmSize.Value() > 0 {
so.FrontendAttrs["shm-size"] = strconv.FormatInt(opt.ShmSize.Value(), 10)
}
// setup ulimits
ulimits, err := toBuildkitUlimits(opt.Ulimits)
if err != nil {
return nil, nil, err
} else if len(ulimits) > 0 {
so.FrontendAttrs["ulimit"] = ulimits
}
// mark call request as internal
if opt.CallFunc != nil {
so.Internal = true
}
return &so, releaseF, nil
}
func loadInputs(ctx context.Context, d *driver.DriverHandle, inp *Inputs, pw progress.Writer, target *client.SolveOpt) (func(), error) {
if inp.ContextPath == "" {
return nil, errors.New("please specify build context (e.g. \".\" for the current directory)")
}
// TODO: handle stdin, symlinks, remote contexts, check files exist
var (
err error
dockerfileReader io.ReadCloser
dockerfileDir string
dockerfileName = inp.DockerfilePath
dockerfileSrcName = inp.DockerfilePath
toRemove []string
caps = map[string]struct{}{}
)
switch {
case inp.ContextState != nil:
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs["context"] = *inp.ContextState
target.FrontendInputs["dockerfile"] = *inp.ContextState
case inp.ContextPath == "-":
if inp.DockerfilePath == "-" {
return nil, errors.Errorf("invalid argument: can't use stdin for both build context and dockerfile")
}
rc := inp.InStream.NewReadCloser()
magic, err := inp.InStream.Peek(archiveHeaderSize * 2)
if err != nil && err != io.EOF {
return nil, errors.Wrap(err, "failed to peek context header from STDIN")
}
if err != io.EOF || len(magic) != 0 {
if isArchive(magic) {
// stdin is context
up := uploadprovider.New()
target.FrontendAttrs["context"] = up.Add(rc)
target.Session = append(target.Session, up)
} else {
if inp.DockerfilePath != "" {
return nil, errors.Errorf("ambiguous Dockerfile source: both stdin and flag correspond to Dockerfiles")
}
// stdin is dockerfile
dockerfileReader = rc
inp.ContextPath, _ = os.MkdirTemp("", "empty-dir")
toRemove = append(toRemove, inp.ContextPath)
if err := setLocalMount("context", inp.ContextPath, target); err != nil {
return nil, err
}
}
}
case osutil.IsLocalDir(inp.ContextPath):
if err := setLocalMount("context", inp.ContextPath, target); err != nil {
return nil, err
}
sharedKey := inp.ContextPath
if p, err := filepath.Abs(sharedKey); err == nil {
sharedKey = filepath.Base(p)
}
target.SharedKey = sharedKey
switch inp.DockerfilePath {
case "-":
dockerfileReader = inp.InStream.NewReadCloser()
case "":
dockerfileDir = inp.ContextPath
default:
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
}
case IsRemoteURL(inp.ContextPath):
if inp.DockerfilePath == "-" {
dockerfileReader = inp.InStream.NewReadCloser()
} else if filepath.IsAbs(inp.DockerfilePath) {
dockerfileDir = filepath.Dir(inp.DockerfilePath)
dockerfileName = filepath.Base(inp.DockerfilePath)
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
target.FrontendAttrs["context"] = inp.ContextPath
gitRef, err := gitutil.ParseURL(inp.ContextPath)
if err == nil && len(gitRef.Query) > 0 {
caps["moby.buildkit.frontend.gitquerystring"] = struct{}{}
}
default:
return nil, errors.Errorf("unable to prepare context: path %q not found", inp.ContextPath)
}
if inp.DockerfileInline != "" {
dockerfileReader = io.NopCloser(strings.NewReader(inp.DockerfileInline))
dockerfileSrcName = "inline"
} else if inp.DockerfilePath == "-" {
dockerfileSrcName = "stdin"
} else if inp.DockerfilePath == "" {
dockerfileSrcName = filepath.Join(inp.ContextPath, "Dockerfile")
}
if dockerfileReader != nil {
dockerfileDir, err = createTempDockerfile(dockerfileReader, inp.InStream)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
}
if isHTTPURL(inp.DockerfilePath) {
dockerfileDir, err = createTempDockerfileFromURL(ctx, d, inp.DockerfilePath, pw)
if err != nil {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
target.FrontendAttrs["dockerfilekey"] = "dockerfile"
delete(target.FrontendInputs, "dockerfile")
}
if dockerfileName == "" {
dockerfileName = "Dockerfile"
}
if dockerfileDir != "" {
if err := setLocalMount("dockerfile", dockerfileDir, target); err != nil {
return nil, err
}
dockerfileName = handleLowercaseDockerfile(dockerfileDir, dockerfileName)
}
target.FrontendAttrs["filename"] = dockerfileName
for k, v := range inp.NamedContexts {
caps["moby.buildkit.frontend.contexts+forward"] = struct{}{}
if v.State != nil {
target.FrontendAttrs["context:"+k] = "input:" + k
if target.FrontendInputs == nil {
target.FrontendInputs = make(map[string]llb.State)
}
target.FrontendInputs[k] = *v.State
continue
}
if IsRemoteURL(v.Path) || strings.HasPrefix(v.Path, "docker-image://") || strings.HasPrefix(v.Path, "target:") {
target.FrontendAttrs["context:"+k] = v.Path
gitRef, err := gitutil.ParseURL(v.Path)
if err == nil && len(gitRef.Query) > 0 {
if _, ok := caps["moby.buildkit.frontend.gitquerystring"]; !ok {
caps["moby.buildkit.frontend.gitquerystring+forward"] = struct{}{}
}
}
continue
}
// handle OCI layout
if localPath, ok := strings.CutPrefix(v.Path, "oci-layout://"); ok {
localPath, dig, hasDigest := strings.Cut(localPath, "@")
localPath, tag, hasTag := strings.Cut(localPath, ":")
if !hasTag {
tag = "latest"
}
if !hasDigest {
dig, err = resolveDigest(localPath, tag)
if err != nil {
return nil, errors.Wrapf(err, "oci-layout reference %q could not be resolved", v.Path)
}
}
store, err := local.NewStore(localPath)
if err != nil {
return nil, errors.Wrapf(err, "invalid store at %s", localPath)
}
storeName := identity.NewID()
if target.OCIStores == nil {
target.OCIStores = map[string]content.Store{}
}
target.OCIStores[storeName] = store
target.FrontendAttrs["context:"+k] = "oci-layout://" + storeName + ":" + tag + "@" + dig
continue
}
st, err := os.Stat(v.Path)
if err != nil {
return nil, errors.Wrapf(err, "failed to get build context %v", k)
}
if !st.IsDir() {
return nil, errors.Wrapf(syscall.ENOTDIR, "failed to get build context path %v", v)
}
localName := k
if k == "context" || k == "dockerfile" {
localName = "_" + k // underscore to avoid collisions
}
if err := setLocalMount(localName, v.Path, target); err != nil {
return nil, err
}
target.FrontendAttrs["context:"+k] = "local:" + localName
}
release := func() {
for _, dir := range toRemove {
_ = os.RemoveAll(dir)
}
}
if len(caps) > 0 {
keys := slices.Collect(maps.Keys(caps))
slices.Sort(keys)
target.FrontendAttrs["frontend.caps"] = strings.Join(keys, ",")
}
inp.DockerfileMappingSrc = dockerfileSrcName
inp.DockerfileMappingDst = dockerfileName
return release, nil
}
func resolveDigest(localPath, tag string) (dig string, _ error) {
idx := ociindex.NewStoreIndex(localPath)
// lookup by name
desc, err := idx.Get(tag)
if err != nil {
return "", err
}
if desc == nil {
// lookup single
desc, err = idx.GetSingle()
if err != nil {
return "", err
}
}
if desc == nil {
return "", errors.New("failed to resolve digest")
}
dig = string(desc.Digest)
_, err = digest.Parse(dig)
if err != nil {
return "", errors.Wrapf(err, "invalid digest %s", dig)
}
return dig, nil
}
func setLocalMount(name, dir string, so *client.SolveOpt) error {
lm, err := fsutil.NewFS(dir)
if err != nil {
return err
}
if so.LocalMounts == nil {
so.LocalMounts = map[string]fsutil.FS{}
}
so.LocalMounts[name] = &fs{FS: lm, dir: dir}
return nil
}
func createTempDockerfile(r io.Reader, multiReader *SyncMultiReader) (string, error) {
dir, err := os.MkdirTemp("", "dockerfile")
if err != nil {
return "", err
}
f, err := os.Create(filepath.Join(dir, "Dockerfile"))
if err != nil {
return "", err
}
defer f.Close()
if multiReader != nil {
dt, err := io.ReadAll(r)
if err != nil {
return "", err
}
multiReader.Reset(dt)
r = bytes.NewReader(dt)
}
if _, err := io.Copy(f, r); err != nil {
return "", err
}
return dir, err
}
// handle https://github.com/moby/moby/pull/10858
func handleLowercaseDockerfile(dir, p string) string {
if filepath.Base(p) != "Dockerfile" {
return p
}
f, err := os.Open(filepath.Dir(filepath.Join(dir, p)))
if err != nil {
return p
}
names, err := f.Readdirnames(-1)
if err != nil {
return p
}
foundLowerCase := false
for _, n := range names {
if n == "Dockerfile" {
return p
}
if n == "dockerfile" {
foundLowerCase = true
}
}
if foundLowerCase {
return filepath.Join(filepath.Dir(p), "dockerfile")
}
return p
}
type fs struct {
fsutil.FS
dir string
}
var _ fsutil.FS = &fs{}
func CreateSSH(ssh []*buildflags.SSH) (session.Attachable, error) {
configs := make([]sshprovider.AgentConfig, 0, len(ssh))
for _, ssh := range ssh {
cfg := sshprovider.AgentConfig{
ID: ssh.ID,
Paths: slices.Clone(ssh.Paths),
}
configs = append(configs, cfg)
}
return sshprovider.NewSSHAgentProvider(configs)
}
func CreateSecrets(secrets []*buildflags.Secret) (session.Attachable, error) {
fs := make([]secretsprovider.Source, 0, len(secrets))
for _, secret := range secrets {
fs = append(fs, secretsprovider.Source{
ID: secret.ID,
FilePath: secret.FilePath,
Env: secret.Env,
})
}
store, err := secretsprovider.NewStore(fs)
if err != nil {
return nil, err
}
return secretsprovider.NewSecretProvider(store), nil
}
func CreateExports(entries []*buildflags.ExportEntry) ([]client.ExportEntry, []string, error) {
var outs []client.ExportEntry
var localPaths []string
if len(entries) == 0 {
return nil, nil, nil
}
var stdoutUsed bool
for _, entry := range entries {
if entry.Type == "" {
return nil, nil, errors.Errorf("type is required for output")
}
out := client.ExportEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
maps.Copy(out.Attrs, entry.Attrs)
supportFile := false
supportDir := false
switch out.Type {
case client.ExporterLocal:
supportDir = true
case client.ExporterTar:
supportFile = true
case client.ExporterOCI, client.ExporterDocker:
tar, err := strconv.ParseBool(out.Attrs["tar"])
if err != nil {
tar = true
}
supportFile = tar
supportDir = !tar
case "registry":
out.Type = client.ExporterImage
out.Attrs["push"] = "true"
}
if supportDir {
if entry.Destination == "" {
return nil, nil, errors.Errorf("dest is required for %s exporter", out.Type)
}
if entry.Destination == "-" {
return nil, nil, errors.Errorf("dest cannot be stdout for %s exporter", out.Type)
}
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
return nil, nil, errors.Wrapf(err, "invalid destination directory: %s", entry.Destination)
}
if err == nil && !fi.IsDir() {
return nil, nil, errors.Errorf("destination directory %s is a file", entry.Destination)
}
out.OutputDir = entry.Destination
localPaths = append(localPaths, entry.Destination)
}
if supportFile {
if entry.Destination == "" && out.Type != client.ExporterDocker {
entry.Destination = "-"
}
if entry.Destination == "-" {
if stdoutUsed {
return nil, nil, errors.Errorf("multiple outputs configured to write to stdout")
}
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
return nil, nil, errors.Errorf("dest file is required for %s exporter. refusing to write to console", out.Type)
}
out.Output = wrapWriteCloser(os.Stdout)
stdoutUsed = true
} else if entry.Destination != "" {
fi, err := os.Stat(entry.Destination)
if err != nil && !os.IsNotExist(err) {
return nil, nil, errors.Wrapf(err, "invalid destination file: %s", entry.Destination)
}
if err == nil && fi.IsDir() {
return nil, nil, errors.Errorf("destination file %s is a directory", entry.Destination)
}
f, err := os.Create(entry.Destination)
if err != nil {
return nil, nil, errors.Errorf("failed to open %s", err)
}
out.Output = wrapWriteCloser(f)
localPaths = append(localPaths, entry.Destination)
}
}
outs = append(outs, out)
}
return outs, localPaths, nil
}
func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
return func(map[string]string) (io.WriteCloser, error) {
return wc, nil
}
}
func CreateCaches(entries []*buildflags.CacheOptionsEntry) []client.CacheOptionsEntry {
var outs []client.CacheOptionsEntry
if len(entries) == 0 {
return nil
}
for _, entry := range entries {
out := client.CacheOptionsEntry{
Type: entry.Type,
Attrs: map[string]string{},
}
maps.Copy(out.Attrs, entry.Attrs)
addGithubToken(&out)
addAwsCredentials(&out)
if !isActive(&out) {
continue
}
outs = append(outs, out)
}
return outs
}
func addGithubToken(ci *client.CacheOptionsEntry) {
if ci.Type != "gha" {
return
}
version, ok := ci.Attrs["version"]
if !ok {
// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L19
if v, ok := os.LookupEnv("ACTIONS_CACHE_SERVICE_V2"); ok {
if b, err := strconv.ParseBool(v); err == nil && b {
version = "2"
}
}
}
if _, ok := ci.Attrs["token"]; !ok {
if v, ok := os.LookupEnv("ACTIONS_RUNTIME_TOKEN"); ok {
ci.Attrs["token"] = v
}
}
if _, ok := ci.Attrs["url_v2"]; !ok && version == "2" {
// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L34-L35
if v, ok := os.LookupEnv("ACTIONS_RESULTS_URL"); ok {
ci.Attrs["url_v2"] = v
}
}
if _, ok := ci.Attrs["url"]; !ok {
// https://github.com/actions/toolkit/blob/2b08dc18f261b9fdd978b70279b85cbef81af8bc/packages/cache/src/internal/config.ts#L28-L33
if v, ok := os.LookupEnv("ACTIONS_CACHE_URL"); ok {
ci.Attrs["url"] = v
} else if v, ok := os.LookupEnv("ACTIONS_RESULTS_URL"); ok {
ci.Attrs["url"] = v
}
}
}
func addAwsCredentials(ci *client.CacheOptionsEntry) {
if ci.Type != "s3" {
return
}
_, okAccessKeyID := ci.Attrs["access_key_id"]
_, okSecretAccessKey := ci.Attrs["secret_access_key"]
// If the user provides access_key_id, secret_access_key, do not override the session token.
if okAccessKeyID && okSecretAccessKey {
return
}
ctx := context.TODO()
awsConfig, err := awsconfig.LoadDefaultConfig(ctx)
if err != nil {
return
}
credentials, err := awsConfig.Credentials.Retrieve(ctx)
if err != nil {
return
}
if !okAccessKeyID && credentials.AccessKeyID != "" {
ci.Attrs["access_key_id"] = credentials.AccessKeyID
}
if !okSecretAccessKey && credentials.SecretAccessKey != "" {
ci.Attrs["secret_access_key"] = credentials.SecretAccessKey
}
if _, ok := ci.Attrs["session_token"]; !ok && credentials.SessionToken != "" {
ci.Attrs["session_token"] = credentials.SessionToken
}
}
func isActive(ce *client.CacheOptionsEntry) bool {
// Always active if not gha.
if ce.Type != "gha" {
return true
}
return ce.Attrs["token"] != "" && (ce.Attrs["url"] != "" || ce.Attrs["url_v2"] != "")
}
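
Since build/opt.go is new in this compare, a usage sketch may help. CreateExports turns parsed --output flags into BuildKit export entries; this hedged example assumes only the buildflags.ExportEntry fields used in the code above:

entries := []*buildflags.ExportEntry{
    {Type: "oci", Destination: "out.tar", Attrs: map[string]string{}},
    {Type: "registry", Attrs: map[string]string{}},
}
outs, localPaths, err := CreateExports(entries)
if err != nil {
    return err
}
// outs[0] streams an OCI tarball to out.tar; outs[1] is rewritten to
// type=image with push=true. localPaths lists on-disk destinations so the
// caller can validate them (e.g. against filesystem entitlements).
_, _ = outs, localPaths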

40
build/opt_test.go Normal file
View File

@ -0,0 +1,40 @@
package build
import (
"testing"
"github.com/docker/buildx/util/buildflags"
"github.com/moby/buildkit/client"
"github.com/stretchr/testify/require"
)
func TestCacheOptions_DerivedVars(t *testing.T) {
t.Setenv("ACTIONS_RUNTIME_TOKEN", "sensitive_token")
t.Setenv("ACTIONS_CACHE_URL", "https://cache.github.com")
t.Setenv("AWS_ACCESS_KEY_ID", "definitely_dont_look_here")
t.Setenv("AWS_SECRET_ACCESS_KEY", "hackers_please_dont_steal")
t.Setenv("AWS_SESSION_TOKEN", "not_a_mitm_attack")
cacheFrom, err := buildflags.ParseCacheEntry([]string{"type=gha", "type=s3,region=us-west-2,bucket=my_bucket,name=my_image"})
require.NoError(t, err)
require.Equal(t, []client.CacheOptionsEntry{
{
Type: "gha",
Attrs: map[string]string{
"token": "sensitive_token",
"url": "https://cache.github.com",
},
},
{
Type: "s3",
Attrs: map[string]string{
"region": "us-west-2",
"bucket": "my_bucket",
"name": "my_image",
"access_key_id": "definitely_dont_look_here",
"secret_access_key": "hackers_please_dont_steal",
"session_token": "not_a_mitm_attack",
},
},
}, CreateCaches(cacheFrom))
}

149
build/provenance.go Normal file
View File

@ -0,0 +1,149 @@
package build
import (
"context"
"encoding/base64"
"encoding/json"
"io"
"maps"
"strings"
"sync"
"github.com/containerd/containerd/v2/core/content"
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/progress"
slsa1 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v1"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
digest "github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
func setRecordProvenance(ctx context.Context, c *client.Client, sr *client.SolveResponse, ref string, mode confutil.MetadataProvenanceMode, pw progress.Writer) error {
if mode == confutil.MetadataProvenanceModeDisabled {
return nil
}
pw = progress.ResetTime(pw)
return progress.Wrap("resolving provenance for metadata file", pw.Write, func(l progress.SubLogger) error {
res, err := fetchProvenance(ctx, c, ref, mode)
if err != nil {
return err
}
maps.Copy(sr.ExporterResponse, res)
return nil
})
}
func fetchProvenance(ctx context.Context, c *client.Client, ref string, mode confutil.MetadataProvenanceMode) (out map[string]string, err error) {
cl, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
Ref: ref,
EarlyExit: true,
})
if err != nil {
return nil, err
}
var mu sync.Mutex
eg, ctx := errgroup.WithContext(ctx)
store := proxy.NewContentStore(c.ContentClient())
for {
ev, err := cl.Recv()
if errors.Is(err, io.EOF) {
break
} else if err != nil {
return nil, err
}
if ev.Record == nil {
continue
}
if ev.Record.Result != nil {
desc, predicateType := lookupProvenance(ev.Record.Result)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, predicateType, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance"] = prv
mu.Unlock()
return nil
})
} else if ev.Record.Results != nil {
for platform, res := range ev.Record.Results {
desc, predicateType := lookupProvenance(res)
if desc == nil {
continue
}
eg.Go(func() error {
dt, err := content.ReadBlob(ctx, store, *desc)
if err != nil {
return errors.Wrapf(err, "failed to load provenance blob from build record")
}
prv, err := encodeProvenance(dt, predicateType, mode)
if err != nil {
return err
}
mu.Lock()
if out == nil {
out = make(map[string]string)
}
out["buildx.build.provenance/"+platform] = prv
mu.Unlock()
return nil
})
}
}
}
return out, eg.Wait()
}
func lookupProvenance(res *controlapi.BuildResultInfo) (*ocispecs.Descriptor, string) {
for _, a := range res.Attestations {
if a.MediaType == "application/vnd.in-toto+json" && strings.HasPrefix(a.Annotations["in-toto.io/predicate-type"], "https://slsa.dev/provenance/") {
return &ocispecs.Descriptor{
Digest: digest.Digest(a.Digest),
Size: a.Size,
MediaType: a.MediaType,
Annotations: a.Annotations,
}, a.Annotations["in-toto.io/predicate-type"]
}
}
return nil, ""
}
func encodeProvenance(dt []byte, predicateType string, mode confutil.MetadataProvenanceMode) (string, error) {
var pred *provenancetypes.ProvenancePredicateSLSA02
if predicateType == slsa1.PredicateSLSAProvenance {
var pred1 *provenancetypes.ProvenancePredicateSLSA1
if err := json.Unmarshal(dt, &pred1); err != nil {
return "", errors.Wrapf(err, "failed to unmarshal provenance")
}
pred = pred1.ConvertToSLSA02()
} else if err := json.Unmarshal(dt, &pred); err != nil {
return "", errors.Wrapf(err, "failed to unmarshal provenance")
}
if mode == confutil.MetadataProvenanceModeMin {
// reset fields for minimal provenance
pred.BuildConfig = nil
pred.Metadata = nil
}
dtprv, err := json.Marshal(pred)
if err != nil {
return "", errors.Wrapf(err, "failed to marshal provenance")
}
return base64.StdEncoding.EncodeToString(dtprv), nil
}
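
Because encodeProvenance stores a base64-encoded JSON predicate (SLSA v1 input is converted to the SLSA v0.2 shape first), consumers can decode the metadata value with the standard library. A minimal sketch, assuming a SolveResponse sr populated by setRecordProvenance:

dt, err := base64.StdEncoding.DecodeString(sr.ExporterResponse["buildx.build.provenance"])
if err != nil {
    return err
}
var pred provenancetypes.ProvenancePredicateSLSA02
if err := json.Unmarshal(dt, &pred); err != nil {
    return err
}
// Under MetadataProvenanceModeMin, pred.BuildConfig and pred.Metadata are nil.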

164
build/replicatedstream.go Normal file
View File

@ -0,0 +1,164 @@
package build
import (
"bufio"
"bytes"
"io"
"sync"
)
type SyncMultiReader struct {
source *bufio.Reader
buffer []byte
static []byte
mu sync.Mutex
cond *sync.Cond
readers []*syncReader
err error
offset int
}
type syncReader struct {
mr *SyncMultiReader
offset int
closed bool
}
func NewSyncMultiReader(source io.Reader) *SyncMultiReader {
mr := &SyncMultiReader{
source: bufio.NewReader(source),
buffer: make([]byte, 0, 32*1024),
}
mr.cond = sync.NewCond(&mr.mu)
return mr
}
func (mr *SyncMultiReader) Peek(n int) ([]byte, error) {
mr.mu.Lock()
defer mr.mu.Unlock()
if mr.static != nil {
return mr.static[min(n, len(mr.static)):], nil
}
return mr.source.Peek(n)
}
func (mr *SyncMultiReader) Reset(dt []byte) {
mr.mu.Lock()
defer mr.mu.Unlock()
mr.static = dt
}
func (mr *SyncMultiReader) NewReadCloser() io.ReadCloser {
mr.mu.Lock()
defer mr.mu.Unlock()
if mr.static != nil {
return io.NopCloser(bytes.NewReader(mr.static))
}
reader := &syncReader{
mr: mr,
}
mr.readers = append(mr.readers, reader)
return reader
}
func (sr *syncReader) Read(p []byte) (int, error) {
sr.mr.mu.Lock()
defer sr.mr.mu.Unlock()
return sr.read(p)
}
func (sr *syncReader) read(p []byte) (int, error) {
end := sr.mr.offset + len(sr.mr.buffer)
loop0:
for {
if sr.closed {
return 0, io.EOF
}
end := sr.mr.offset + len(sr.mr.buffer)
if sr.mr.err != nil && sr.offset == end {
return 0, sr.mr.err
}
start := sr.offset - sr.mr.offset
dt := sr.mr.buffer[start:]
if len(dt) > 0 {
n := copy(p, dt)
sr.offset += n
sr.mr.cond.Broadcast()
return n, nil
}
// check for readers that have not caught up
hasOpen := false
for _, r := range sr.mr.readers {
if !r.closed {
hasOpen = true
} else {
continue
}
if r.offset < end {
sr.mr.cond.Wait()
continue loop0
}
}
if !hasOpen {
return 0, io.EOF
}
break
}
last := sr.mr.offset + len(sr.mr.buffer)
// another reader has already updated the buffer
if last > end || sr.mr.err != nil {
return sr.read(p)
}
sr.mr.offset += len(sr.mr.buffer)
sr.mr.buffer = sr.mr.buffer[:cap(sr.mr.buffer)]
n, err := sr.mr.source.Read(sr.mr.buffer)
if n >= 0 {
sr.mr.buffer = sr.mr.buffer[:n]
} else {
sr.mr.buffer = sr.mr.buffer[:0]
}
sr.mr.cond.Broadcast()
if err != nil {
sr.mr.err = err
return 0, err
}
nn := copy(p, sr.mr.buffer)
sr.offset += nn
return nn, nil
}
func (sr *syncReader) Close() error {
sr.mr.mu.Lock()
defer sr.mr.mu.Unlock()
if sr.closed {
return nil
}
sr.closed = true
sr.mr.cond.Broadcast()
return nil
}
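
A usage sketch for SyncMultiReader (illustrative): every handed-out ReadCloser must be drained or closed concurrently, because a lagging open reader parks the others in cond.Wait until it catches up:

mr := NewSyncMultiReader(strings.NewReader("FROM alpine\n"))
var wg sync.WaitGroup
for _, r := range []io.ReadCloser{mr.NewReadCloser(), mr.NewReadCloser()} {
    wg.Add(1)
    go func(r io.ReadCloser) {
        defer wg.Done()
        defer r.Close()
        dt, _ := io.ReadAll(r) // each reader observes the full byte stream
        _ = dt
    }(r)
}
wg.Wait()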

View File

@ -0,0 +1,77 @@
package build
import (
"bytes"
"crypto/rand"
"io"
mathrand "math/rand"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func generateRandomData(size int) []byte {
data := make([]byte, size)
rand.Read(data)
return data
}
func TestSyncMultiReaderParallel(t *testing.T) {
data := generateRandomData(1024 * 1024)
source := bytes.NewReader(data)
mr := NewSyncMultiReader(source)
var wg sync.WaitGroup
numReaders := 10
bufferSize := 4096 * 4
readers := make([]io.ReadCloser, numReaders)
for i := range numReaders {
readers[i] = mr.NewReadCloser()
}
for i := range numReaders {
wg.Add(1)
go func(readerId int) {
defer wg.Done()
reader := readers[readerId]
defer reader.Close()
totalRead := 0
buf := make([]byte, bufferSize)
for totalRead < len(data) {
// Simulate random read sizes
readSize := mathrand.Intn(bufferSize) // #nosec G404 -- ignore "Use of weak random number generator (math/rand instead of crypto/rand)"
n, err := reader.Read(buf[:readSize])
if n > 0 {
assert.Equal(t, data[totalRead:totalRead+n], buf[:n], "Reader %d mismatch", readerId)
totalRead += n
}
if err == io.EOF {
assert.Equal(t, len(data), totalRead, "Reader %d EOF mismatch", readerId)
return
}
assert.NoError(t, err, "Reader %d error", readerId)
// #nosec G404 -- ignore "Use of weak random number generator (math/rand instead of crypto/rand)"
if mathrand.Intn(1000) == 0 {
t.Logf("Reader %d closing", readerId)
// Simulate random close
return
}
// Simulate random timing between reads
time.Sleep(time.Millisecond * time.Duration(mathrand.Intn(5))) // #nosec G404 -- ignore "Use of weak random number generator (math/rand instead of crypto/rand)"
}
assert.Equal(t, len(data), totalRead, "Reader %d total read mismatch", readerId)
}(i)
}
wg.Wait()
}

View File

@ -1,266 +1,55 @@
package build
import (
"cmp"
"context"
_ "crypto/sha256" // ensure digests can be computed
"encoding/json"
"io"
iofs "io/fs"
"path/filepath"
"slices"
"strings"
"sync"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
gateway "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/solver/result"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
"github.com/tonistiigi/fsutil/types"
)
-// NewResultHandle makes a call to client.Build, additionally returning a
-// opaque ResultHandle alongside the standard response and error.
+// NewResultHandle stores a gateway client, gateway reference, and the error from
+// an evaluate call if it is present.
//
// This ResultHandle can be used to execute additional build steps in the same
// context as the build occurred, which can allow easy debugging of build
// failures and successes.
//
// If the returned ResultHandle is not nil, the caller must call Done() on it.
func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, product string, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) (*ResultHandle, *client.SolveResponse, error) {
// Create a new context to wrap the original, and cancel it when the
// caller-provided context is cancelled.
//
// We derive the context from the background context so that we can forbid
// cancellation of the build request after <-done is closed (which we do
// before returning the ResultHandle).
baseCtx := ctx
ctx, cancel := context.WithCancelCause(context.Background())
done := make(chan struct{})
go func() {
select {
case <-baseCtx.Done():
cancel(baseCtx.Err())
case <-done:
// Once done is closed, we've recorded a ResultHandle, so we
// shouldn't allow cancelling the underlying build request anymore.
}
}()
// Create a new channel to forward status messages to the original.
//
// We do this so that we can discard status messages after the main portion
// of the build is complete. This is necessary for the solve error case,
// where the original gateway is kept open until the ResultHandle is
// closed - we don't want progress messages from operations in that
// ResultHandle to display after this function exits.
//
// Additionally, callers should wait for the progress channel to be closed.
// If we keep the session open and never close the progress channel, the
// caller will likely hang.
baseCh := ch
ch = make(chan *client.SolveStatus)
go func() {
for {
s, ok := <-ch
if !ok {
return
}
select {
case <-baseCh:
// base channel is closed, discard status messages
default:
baseCh <- s
}
}
}()
defer close(baseCh)
var resp *client.SolveResponse
var respErr error
var respHandle *ResultHandle
go func() {
defer cancel(context.Canceled) // ensure no dangling processes
var res *gateway.Result
var err error
resp, err = cc.Build(ctx, opt, product, func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
var err error
res, err = buildFunc(ctx, c)
if res != nil && err == nil {
// Force evaluation of the build result (otherwise, we likely
// won't get a solve error)
def, err2 := getDefinition(ctx, res)
if err2 != nil {
return nil, err2
}
res, err = evalDefinition(ctx, c, def)
}
if err != nil {
// Scenario 1: we failed to evaluate a node somewhere in the
// build graph.
//
// In this case, we construct a ResultHandle from this
// original Build session, and return it alongside the original
// build error. We then need to keep the gateway session open
// until the caller explicitly closes the ResultHandle.
var se *errdefs.SolveError
if errors.As(err, &se) {
respHandle = &ResultHandle{
done: make(chan struct{}),
solveErr: se,
gwClient: c,
gwCtx: ctx,
}
respErr = err // return original error to preserve stacktrace
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
}
}
return res, err
}, ch)
if respHandle != nil {
return
}
if err != nil {
// Something unexpected failed during the build, we didn't succeed,
// but we also didn't make it far enough to create a ResultHandle.
respErr = err
close(done)
return
}
// Scenario 2: we successfully built the image with no errors.
//
// In this case, the original gateway session has now been closed
// since the Build has been completed. So, we need to create a new
// gateway session to populate the ResultHandle. To do this, we
// need to re-evaluate the target result, in this new session. This
// should be instantaneous since the result should be cached.
def, err := getDefinition(ctx, res)
if err != nil {
respErr = err
close(done)
return
}
// NOTE: ideally this second connection should be lazily opened
opt := opt
opt.Ref = ""
opt.Exports = nil
opt.CacheExports = nil
opt.Internal = true
_, respErr = cc.Build(ctx, opt, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
res, err := evalDefinition(ctx, c, def)
if err != nil {
// This should probably not happen, since we've previously
// successfully evaluated the same result with no issues.
return nil, errors.Wrap(err, "inconsistent solve result")
}
respHandle = &ResultHandle{
done: make(chan struct{}),
res: res,
gwClient: c,
gwCtx: ctx,
}
close(done)
// Block until the caller closes the ResultHandle.
select {
case <-respHandle.done:
case <-ctx.Done():
}
return nil, ctx.Err()
}, nil)
if respHandle != nil {
return
}
close(done)
}()
// Block until the other thread signals that it's completed the build.
select {
case <-done:
case <-baseCtx.Done():
if respErr == nil {
respErr = baseCtx.Err()
}
+func NewResultHandle(ctx context.Context, c gateway.Client, ref gateway.Reference, meta map[string][]byte, err error) *ResultHandle {
+rCtx := &ResultHandle{
+ref: ref,
+meta: meta,
+gwClient: c,
+}
-return respHandle, resp, respErr
}
// getDefinition converts a gateway result into a collection of definitions for
// each ref in the result.
func getDefinition(ctx context.Context, res *gateway.Result) (*result.Result[*pb.Definition], error) {
return result.ConvertResult(res, func(ref gateway.Reference) (*pb.Definition, error) {
st, err := ref.ToState()
if err != nil {
return nil, err
}
def, err := st.Marshal(ctx)
if err != nil {
return nil, err
}
return def.ToPB(), nil
})
}
// evalDefinition performs the reverse of getDefinition, converting a
// collection of definitions into a gateway result.
func evalDefinition(ctx context.Context, c gateway.Client, defs *result.Result[*pb.Definition]) (*gateway.Result, error) {
// force evaluation of all targets in parallel
results := make(map[*pb.Definition]*gateway.Result)
resultsMu := sync.Mutex{}
eg, egCtx := errgroup.WithContext(ctx)
defs.EachRef(func(def *pb.Definition) error {
eg.Go(func() error {
res, err := c.Solve(egCtx, gateway.SolveRequest{
Evaluate: true,
Definition: def,
})
if err != nil {
return err
}
resultsMu.Lock()
results[def] = res
resultsMu.Unlock()
return nil
})
if err != nil && !errors.As(err, &rCtx.solveErr) {
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
res, _ := result.ConvertResult(defs, func(def *pb.Definition) (gateway.Reference, error) {
if res, ok := results[def]; ok {
return res.Ref, nil
}
return nil, nil
})
return res, nil
return rCtx
}
// ResultHandle is a build result with the client that built it.
type ResultHandle struct {
-res *gateway.Result
+ref gateway.Reference
solveErr *errdefs.SolveError
-done chan struct{}
-doneOnce sync.Once
+meta map[string][]byte
gwClient gateway.Client
-gwCtx context.Context
+doneOnce sync.Once
cleanups []func()
cleanupsMu sync.Mutex
@ -275,9 +64,6 @@ func (r *ResultHandle) Done() {
for _, f := range cleanups {
f()
}
-close(r.done)
-<-r.gwCtx.Done()
})
}
@ -287,22 +73,59 @@ func (r *ResultHandle) registerCleanup(f func()) {
r.cleanupsMu.Unlock()
}
-func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
-_, err = buildFunc(r.gwCtx, r.gwClient)
-return err
+func (r *ResultHandle) NewContainer(ctx context.Context, cfg *InvokeConfig) (gateway.Container, error) {
+req, err := r.getContainerConfig(cfg)
+if err != nil {
+return nil, err
+}
+return r.gwClient.NewContainer(ctx, req)
}
-func (r *ResultHandle) getContainerConfig(ctx context.Context, c gateway.Client, cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
-if r.res != nil && r.solveErr == nil {
+func (r *ResultHandle) StatFile(ctx context.Context, fpath string, cfg *InvokeConfig) (*types.Stat, error) {
containerCfg, err := r.getContainerConfig(cfg)
if err != nil {
return nil, err
}
candidateMounts := make([]gateway.Mount, 0, len(containerCfg.Mounts))
for _, m := range containerCfg.Mounts {
if strings.HasPrefix(fpath, m.Dest) {
candidateMounts = append(candidateMounts, m)
}
}
if len(candidateMounts) == 0 {
return nil, iofs.ErrNotExist
}
slices.SortFunc(candidateMounts, func(a, b gateway.Mount) int {
return cmp.Compare(len(a.Dest), len(b.Dest))
})
m := candidateMounts[len(candidateMounts)-1]
relpath, err := filepath.Rel(m.Dest, fpath)
if err != nil {
return nil, err
}
if m.Ref == nil {
return nil, iofs.ErrNotExist
}
req := gateway.StatRequest{Path: filepath.ToSlash(relpath)}
return m.Ref.StatFile(ctx, req)
}
+func (r *ResultHandle) getContainerConfig(cfg *InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
+if r.ref != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
-ccfg, err := containerConfigFromResult(ctx, r.res, c, *cfg)
+ccfg, err := containerConfigFromResult(r.ref, cfg)
if err != nil {
return containerCfg, err
}
containerCfg = *ccfg
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
-ccfg, err := containerConfigFromError(r.solveErr, *cfg)
+ccfg, err := containerConfigFromError(r.solveErr, cfg)
if err != nil {
return containerCfg, errors.Wrapf(err, "no result nor error is available")
}
@ -311,36 +134,27 @@ func (r *ResultHandle) getContainerConfig(ctx context.Context, c gateway.Client,
return containerCfg, nil
}
-func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
+func (r *ResultHandle) getProcessConfig(cfg *InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
processCfg := newStartRequest(stdin, stdout, stderr)
-if r.res != nil && r.solveErr == nil {
+if r.ref != nil && r.solveErr == nil {
logrus.Debugf("creating container from successful build")
-if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
+if err := populateProcessConfigFromResult(&processCfg, r.meta, cfg); err != nil {
return processCfg, err
}
} else {
logrus.Debugf("creating container from failed build %+v", cfg)
-if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
+if err := populateProcessConfigFromError(&processCfg, r.solveErr, cfg); err != nil {
return processCfg, err
}
}
return processCfg, nil
}
-func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gateway.Client, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
+func containerConfigFromResult(ref gateway.Reference, cfg *InvokeConfig) (*gateway.NewContainerRequest, error) {
if cfg.Initial {
return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
}
-ps, err := exptypes.ParsePlatforms(res.Metadata)
-if err != nil {
-return nil, err
-}
-ref, ok := res.FindRef(ps.Platforms[0].ID)
-if !ok {
-return nil, errors.Errorf("no reference found")
-}
return &gateway.NewContainerRequest{
Mounts: []gateway.Mount{
{
@ -352,11 +166,11 @@ func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gatew
}, nil
}
-func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
-imgData := res.Metadata[exptypes.ExporterImageConfigKey]
-var img *specs.Image
+func populateProcessConfigFromResult(req *gateway.StartRequest, meta map[string][]byte, cfg *InvokeConfig) error {
+imgData := meta[exptypes.ExporterImageConfigKey]
+var img *ocispecs.Image
if len(imgData) > 0 {
-img = &specs.Image{}
+img = &ocispecs.Image{}
if err := json.Unmarshal(imgData, img); err != nil {
return err
}
@ -403,16 +217,16 @@ func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Res
return nil
}
-func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
+func containerConfigFromError(solveErr *errdefs.SolveError, cfg *InvokeConfig) (*gateway.NewContainerRequest, error) {
exec, err := execOpFromError(solveErr)
if err != nil {
return nil, err
}
var mounts []gateway.Mount
for i, mnt := range exec.Mounts {
-rid := solveErr.Solve.MountIDs[i]
+rid := solveErr.MountIDs[i]
if cfg.Initial {
-rid = solveErr.Solve.InputIDs[i]
+rid = solveErr.InputIDs[i]
}
mounts = append(mounts, gateway.Mount{
Selector: mnt.Selector,
@ -431,7 +245,7 @@ func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.In
}, nil
}
-func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
+func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg *InvokeConfig) error {
exec, err := execOpFromError(solveErr)
if err != nil {
return err
@ -477,7 +291,7 @@ func execOpFromError(solveErr *errdefs.SolveError) (*pb.ExecOp, error) {
if solveErr == nil {
return nil, errors.Errorf("no error is available")
}
-switch op := solveErr.Solve.Op.GetOp().(type) {
+switch op := solveErr.Op.GetOp().(type) {
case *pb.Op_Exec:
return op.Exec, nil
default:

View File

@ -7,12 +7,15 @@ import (
"github.com/docker/buildx/driver"
"github.com/docker/buildx/util/progress"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb"
gwclient "github.com/moby/buildkit/frontend/gateway/client"
"github.com/pkg/errors"
)
+const maxDockerfileSize = 2 * 1024 * 1024 // 2 MB
func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, url string, pw progress.Writer) (string, error) {
c, err := driver.Boot(ctx, ctx, d, pw)
if err != nil {
@ -43,8 +46,8 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
if err != nil {
return nil, err
}
-if stat.Size() > 512*1024 {
-return nil, errors.Errorf("Dockerfile %s bigger than allowed max size", url)
+if stat.Size > maxDockerfileSize {
+return nil, errors.Errorf("Dockerfile %s bigger than allowed max size (%s)", url, units.HumanSize(maxDockerfileSize))
}
dt, err := ref.ReadFile(ctx, gwclient.ReadRequest{
@ -63,7 +66,6 @@ func createTempDockerfileFromURL(ctx context.Context, d *driver.DriverHandle, ur
out = dir
return nil, nil
}, ch)
if err != nil {
return "", err
}

View File

@ -5,13 +5,15 @@ import (
"bytes"
"context"
"net"
"os"
"strconv"
"strings"
"github.com/docker/buildx/driver"
"github.com/docker/cli/opts"
"github.com/docker/docker/builder/remotecontext/urlutil"
"github.com/moby/buildkit/util/gitutil"
"github.com/moby/buildkit/frontend/dockerfile/dfgitutil"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
@ -23,11 +25,18 @@ const (
mobyHostGatewayName = "host-gateway"
)
// isHTTPURL returns true if the provided str is an HTTP(S) URL by checking if it
// has a http:// or https:// scheme. No validation is performed to verify if the
// URL is well-formed.
func isHTTPURL(str string) bool {
return strings.HasPrefix(str, "https://") || strings.HasPrefix(str, "http://")
}
func IsRemoteURL(c string) bool {
-if urlutil.IsURL(c) {
+if isHTTPURL(c) {
return true
}
-if _, err := gitutil.ParseGitRef(c); err == nil {
+if _, ok, _ := dfgitutil.ParseGitRef(c); ok {
return true
}
return false
@ -68,24 +77,30 @@ func toBuildkitExtraHosts(ctx context.Context, inp []string, nodeDriver *driver.
}
// If the IP Address is a "host-gateway", replace this value with the
// IP address provided by the worker's label.
+var ips []string
if ip == mobyHostGatewayName {
hgip, err := nodeDriver.HostGatewayIP(ctx)
if err != nil {
return "", errors.Wrap(err, "unable to derive the IP value for host-gateway")
}
-ip = hgip.String()
+ips = append(ips, hgip.String())
+} else {
-// If the address is enclosed in square brackets, extract it (for IPv6, but
-// permit it for IPv4 as well; we don't know the address family here, but it's
-// unambiguous).
-if len(ip) > 2 && ip[0] == '[' && ip[len(ip)-1] == ']' {
-ip = ip[1 : len(ip)-1]
-}
-if net.ParseIP(ip) == nil {
-return "", errors.Errorf("invalid host %s", h)
+for _, v := range strings.Split(ip, ",") {
+// If the address is enclosed in square brackets, extract it
+// (for IPv6, but permit it for IPv4 as well; we don't know the
+// address family here, but it's unambiguous).
+if len(v) > 2 && v[0] == '[' && v[len(v)-1] == ']' {
+v = v[1 : len(v)-1]
+}
+if net.ParseIP(v) == nil {
+return "", errors.Errorf("invalid host %s", h)
+}
+ips = append(ips, v)
+}
}
-hosts = append(hosts, host+"="+ip)
+for _, v := range ips {
+hosts = append(hosts, host+"="+v)
+}
}
return strings.Join(hosts, ","), nil
}
@ -101,3 +116,21 @@ func toBuildkitUlimits(inp *opts.UlimitOpt) (string, error) {
}
return strings.Join(ulimits, ","), nil
}
func notSupported(f driver.Feature, d *driver.DriverHandle, docs string) error {
return errors.Errorf(`%s is not supported for the %s driver.
Switch to a different driver, or turn on the containerd image store, and try again.
Learn more at %s`, f, d.Factory().Name(), docs)
}
func noDefaultLoad() bool {
v, ok := os.LookupEnv("BUILDX_NO_DEFAULT_LOAD")
if !ok {
return false
}
b, err := strconv.ParseBool(v)
if err != nil {
logrus.Warnf("invalid non-bool value for BUILDX_NO_DEFAULT_LOAD: %s", v)
}
return b
}
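
Putting these helpers together (illustrative inputs, not from the diff):

isHTTPURL("https://example.com/ctx.tar")        // true: plain prefix check
IsRemoteURL("git@github.com:docker/buildx.git") // true: parses as a git ref
IsRemoteURL("./app")                            // false: treated as a local path
// With BUILDX_NO_DEFAULT_LOAD=1, noDefaultLoad() returns true and toSolveOpt
// skips the implicit docker/image export for drivers that default-load.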

View File

@ -72,6 +72,11 @@ func TestToBuildkitExtraHosts(t *testing.T) {
doc: "IPv6 localhost, non-canonical, eq sep",
input: []string{`ipv6local=0:0:0:0:0:0:0:1`},
},
{
doc: "Multi IPs",
input: []string{`myhost=162.242.195.82,162.242.195.83`},
expectedOut: `myhost=162.242.195.82,myhost=162.242.195.83`,
},
{
doc: "IPv6 localhost, non-canonical, eq sep, brackets",
input: []string{`ipv6local=[0:0:0:0:0:0:0:1]`},
@ -130,7 +135,6 @@ func TestToBuildkitExtraHosts(t *testing.T) {
}
for _, tc := range tests {
tc := tc
if tc.expectedOut == "" {
tc.expectedOut = strings.Join(tc.input, ",")
}
@ -138,7 +142,7 @@ func TestToBuildkitExtraHosts(t *testing.T) {
actualOut, actualErr := toBuildkitExtraHosts(context.TODO(), tc.input, nil)
if tc.expectedErr == "" {
require.Equal(t, tc.expectedOut, actualOut)
require.Nil(t, actualErr)
require.NoError(t, actualErr)
} else {
require.Zero(t, actualOut)
require.Error(t, actualErr, tc.expectedErr)

View File

@ -2,10 +2,10 @@ package builder
import (
"context"
"encoding/csv"
"encoding/json"
"net/url"
"os"
"slices"
"sort"
"strings"
"sync"
@ -27,6 +27,7 @@ import (
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/pflag"
"github.com/tonistiigi/go-csvvalue"
"golang.org/x/sync/errgroup"
)
@ -121,7 +122,7 @@ func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
// Validate validates builder context
func (b *Builder) Validate() error {
if b.NodeGroup != nil && b.NodeGroup.DockerContext {
if b.NodeGroup != nil && b.DockerContext {
list, err := b.opts.dockerCli.ContextStore().List()
if err != nil {
return err
@ -143,7 +144,7 @@ func (b *Builder) ContextName() string {
return ""
}
for _, cb := range ctxbuilders {
if b.NodeGroup.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
if b.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
return cb.Name
}
}
@ -199,7 +200,7 @@ func (b *Builder) Boot(ctx context.Context) (bool, error) {
err = err1
}
if err == nil && len(errCh) == len(toBoot) {
if err == nil && len(errCh) > 0 {
return false, <-errCh
}
return true, err
@ -253,7 +254,7 @@ func (b *Builder) Factory(ctx context.Context, dialMeta map[string][]string) (_
if err != nil {
return
}
b.Driver = b.driverFactory.Factory.Name()
b.Driver = b.driverFactory.Name()
}
})
return b.driverFactory.Factory, err
@ -288,7 +289,15 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
return nil, err
}
builders := make([]*Builder, len(storeng))
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
sort.Slice(contexts, func(i, j int) bool {
return contexts[i].Name < contexts[j].Name
})
builders := make([]*Builder, len(storeng), len(storeng)+len(contexts))
seen := make(map[string]struct{})
for i, ng := range storeng {
b, err := New(dockerCli,
@ -300,17 +309,9 @@ func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
return nil, err
}
builders[i] = b
seen[b.NodeGroup.Name] = struct{}{}
seen[b.Name] = struct{}{}
}
contexts, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
sort.Slice(contexts, func(i, j int) bool {
return contexts[i].Name < contexts[j].Name
})
for _, c := range contexts {
// if a context has the same name as an instance from the store, do not
// add it to the builders list. An instance from the store takes
@ -435,7 +436,16 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
return nil, err
}
buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts)
buildkitdConfigFile := opts.BuildkitdConfigFile
if buildkitdConfigFile == "" {
// if buildkit daemon config is not provided, check if the default one
// is available and use it
if f, ok := confutil.NewConfig(dockerCli).BuildKitConfigFile(); ok {
buildkitdConfigFile = f
}
}
buildkitdFlags, err := parseBuildkitdFlags(opts.BuildkitdFlags, driverName, driverOpts, buildkitdConfigFile)
if err != nil {
return nil, err
}
@ -496,15 +506,6 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
setEp = false
}
buildkitdConfigFile := opts.BuildkitdConfigFile
if buildkitdConfigFile == "" {
// if buildkit daemon config is not provided, check if the default one
// is available and use it
if f, ok := confutil.DefaultConfigFile(dockerCli); ok {
buildkitdConfigFile = f
}
}
if err := ng.Update(opts.NodeName, ep, opts.Platforms, setEp, opts.Append, buildkitdFlags, buildkitdConfigFile, driverOpts); err != nil {
return nil, err
}
@ -522,8 +523,9 @@ func Create(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Cre
return nil, err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
cancelCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ := context.WithTimeoutCause(cancelCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
nodes, err := b.LoadNodes(timeoutCtx, WithData())
if err != nil {
@ -584,7 +586,7 @@ func Leave(ctx context.Context, txn *store.Txn, dockerCli command.Cli, opts Leav
return err
}
ls, err := localstate.New(confutil.ConfigDir(dockerCli))
ls, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
@ -601,8 +603,7 @@ func csvToMap(in []string) (map[string]string, error) {
}
m := make(map[string]string, len(in))
for _, s := range in {
csvReader := csv.NewReader(strings.NewReader(s))
fields, err := csvReader.Read()
fields, err := csvvalue.Fields(s, nil)
if err != nil {
return nil, err
}
@ -642,7 +643,7 @@ func validateBuildkitEndpoint(ep string) (string, error) {
}
// parseBuildkitdFlags parses buildkit flags
func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string) (res []string, err error) {
func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string, buildkitdConfigFile string) (res []string, err error) {
if inp != "" {
res, err = shlex.Split(inp)
if err != nil {
@ -656,18 +657,26 @@ func parseBuildkitdFlags(inp string, driver string, driverOpts map[string]string
flags.StringArrayVar(&allowInsecureEntitlements, "allow-insecure-entitlement", nil, "")
_ = flags.Parse(res)
var hasNetworkHostEntitlement bool
for _, e := range allowInsecureEntitlements {
if e == "network.host" {
hasNetworkHostEntitlement = true
break
hasNetworkHostEntitlement := slices.Contains(allowInsecureEntitlements, "network.host")
var hasNetworkHostEntitlementInConf bool
if buildkitdConfigFile != "" {
btoml, err := confutil.LoadConfigTree(buildkitdConfigFile)
if err != nil {
return nil, err
} else if btoml != nil {
if ies := btoml.GetArray("insecure-entitlements"); ies != nil {
if slices.Contains(ies.([]string), "network.host") {
hasNetworkHostEntitlementInConf = true
}
}
}
}
if v, ok := driverOpts["network"]; ok && v == "host" && !hasNetworkHostEntitlement && driver == "docker-container" {
// always set network.host entitlement if user has set network=host
res = append(res, "--allow-insecure-entitlement=network.host")
} else if len(allowInsecureEntitlements) == 0 && (driver == "kubernetes" || driver == "docker-container") {
} else if len(allowInsecureEntitlements) == 0 && !hasNetworkHostEntitlementInConf && (driver == "kubernetes" || driver == "docker-container") {
// set network.host entitlement if user does not provide any as
// network is isolated for container drivers.
res = append(res, "--allow-insecure-entitlement=network.host")
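
The net effect of the new config-file check is a small decision table. A simplified standalone sketch follows (names are illustrative; the real code also shlex-parses the flags and loads the TOML tree):

```go
package main

import "fmt"

// needsNetworkHostFlag approximates the branch logic above.
// explicitNetworkHost: the user already passed
// --allow-insecure-entitlement=network.host; anyExplicit: the user passed any
// entitlement flag; inConf: the buildkitd config file lists network.host
// under insecure-entitlements.
func needsNetworkHostFlag(driver string, driverOpts map[string]string, explicitNetworkHost, anyExplicit, inConf bool) bool {
	if driverOpts["network"] == "host" && !explicitNetworkHost && driver == "docker-container" {
		// always grant network.host when the builder container itself runs
		// with network=host
		return true
	}
	// otherwise default the entitlement for container-based drivers, unless
	// the user or the config file already configured entitlements
	return !anyExplicit && !inConf && (driver == "kubernetes" || driver == "docker-container")
}

func main() {
	fmt.Println(needsNetworkHostFlag("docker-container", nil, false, false, true))  // false: already set via config file
	fmt.Println(needsNetworkHostFlag("docker-container", nil, false, false, false)) // true: network is isolated by default
}
```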

View File

@ -1,6 +1,8 @@
package builder
import (
"os"
"path"
"testing"
"github.com/stretchr/testify/assert"
@ -17,29 +19,55 @@ func TestCsvToMap(t *testing.T) {
require.NoError(t, err)
require.Contains(t, r, "tolerations")
require.Equal(t, r["tolerations"], "key=foo,value=bar;key=foo2,value=bar2")
require.Equal(t, "key=foo,value=bar;key=foo2,value=bar2", r["tolerations"])
require.Contains(t, r, "replicas")
require.Equal(t, r["replicas"], "1")
require.Equal(t, "1", r["replicas"])
require.Contains(t, r, "namespace")
require.Equal(t, r["namespace"], "default")
require.Equal(t, "default", r["namespace"])
}
func TestParseBuildkitdFlags(t *testing.T) {
dirConf := t.TempDir()
buildkitdConfPath := path.Join(dirConf, "buildkitd-conf.toml")
require.NoError(t, os.WriteFile(buildkitdConfPath, []byte(`
# debug enables additional debug logging
debug = true
# insecure-entitlements allows insecure entitlements, disabled by default.
insecure-entitlements = [ "network.host", "security.insecure" ]
[log]
# log formatter: json or text
format = "text"
`), 0644))
buildkitdConfBrokenPath := path.Join(dirConf, "buildkitd-conf-broken.toml")
require.NoError(t, os.WriteFile(buildkitdConfBrokenPath, []byte(`
[worker.oci]
gc = "maybe"
`), 0644))
buildkitdConfUnknownFieldPath := path.Join(dirConf, "buildkitd-unknown-field.toml")
require.NoError(t, os.WriteFile(buildkitdConfUnknownFieldPath, []byte(`
foo = "bar"
`), 0644))
testCases := []struct {
name string
flags string
driver string
driverOpts map[string]string
expected []string
wantErr bool
name string
flags string
driver string
driverOpts map[string]string
buildkitdConfigFile string
expected []string
wantErr bool
}{
{
"docker-container no flags",
"",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
@ -50,6 +78,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"",
"kubernetes",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
@ -60,6 +89,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"",
"remote",
nil,
"",
nil,
false,
},
@ -68,6 +98,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"--allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=security.insecure",
},
@ -78,6 +109,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
nil,
"",
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
@ -89,6 +121,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
@ -99,6 +132,7 @@ func TestParseBuildkitdFlags(t *testing.T) {
"--allow-insecure-entitlement=network.host",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
},
@ -109,25 +143,55 @@ func TestParseBuildkitdFlags(t *testing.T) {
"--allow-insecure-entitlement=network.host --allow-insecure-entitlement=security.insecure",
"docker-container",
map[string]string{"network": "host"},
"",
[]string{
"--allow-insecure-entitlement=network.host",
"--allow-insecure-entitlement=security.insecure",
},
false,
},
{
"docker-container with buildkitd conf setting network.host entitlement",
"",
"docker-container",
nil,
buildkitdConfPath,
nil,
false,
},
{
"error parsing flags",
"foo'",
"docker-container",
nil,
"",
nil,
true,
},
{
"error parsing buildkit config",
"",
"docker-container",
nil,
buildkitdConfBrokenPath,
nil,
true,
},
{
"unknown field in buildkit config",
"",
"docker-container",
nil,
buildkitdConfUnknownFieldPath,
[]string{
"--allow-insecure-entitlement=network.host",
},
false,
},
}
for _, tt := range testCases {
tt := tt
t.Run(tt.name, func(t *testing.T) {
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts)
flags, err := parseBuildkitdFlags(tt.flags, tt.driver, tt.driverOpts, tt.buildkitdConfigFile)
if tt.wantErr {
require.Error(t, err)
return

View File

@ -6,9 +6,8 @@ import (
"sort"
"strings"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/driver"
ctxkube "github.com/docker/buildx/driver/kubernetes/context"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
"github.com/docker/buildx/util/dockerutil"
@ -18,7 +17,6 @@ import (
"github.com/moby/buildkit/util/grpcerrors"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc/codes"
)
@ -34,10 +32,11 @@ type Node struct {
Err error
// worker settings
IDs []string
Platforms []ocispecs.Platform
GCPolicy []client.PruneInfo
Labels map[string]string
IDs []string
Platforms []ocispecs.Platform
GCPolicy []client.PruneInfo
Labels map[string]string
CDIDevices []client.CDIDevice
}
// Nodes returns nodes for this builder.
@ -48,8 +47,9 @@ func (b *Builder) Nodes() []Node {
type LoadNodesOption func(*loadNodesOptions)
type loadNodesOptions struct {
data bool
dialMeta map[string][]string
data bool
dialMeta map[string][]string
clientOpt []client.ClientOpt
}
func WithData() LoadNodesOption {
@ -64,6 +64,12 @@ func WithDialMeta(dialMeta map[string][]string) LoadNodesOption {
}
}
func WithClientOpt(clientOpt ...client.ClientOpt) LoadNodesOption {
return func(o *loadNodesOptions) {
o.clientOpt = clientOpt
}
}
// LoadNodes loads and returns nodes for this builder.
// TODO: this should be a method on a Node object and lazy load data for each driver.
func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []Node, err error) {
@ -112,37 +118,20 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
return nil
}
contextStore := b.opts.dockerCli.ContextStore()
var kcc driver.KubeClientConfig
kcc, err = ctxkube.ConfigFromEndpoint(n.Endpoint, contextStore)
if err != nil {
// err is returned if n.Endpoint is non-context name like "unix:///var/run/docker.sock".
// try again with name="default".
// FIXME(@AkihiroSuda): n should retain real context name.
kcc, err = ctxkube.ConfigFromEndpoint("default", contextStore)
if err != nil {
logrus.Error(err)
}
}
tryToUseKubeConfigInCluster := false
if kcc == nil {
tryToUseKubeConfigInCluster = true
} else {
if _, err := kcc.ClientConfig(); err != nil {
tryToUseKubeConfigInCluster = true
}
}
if tryToUseKubeConfigInCluster {
kccInCluster := driver.KubeClientConfigInCluster{}
if _, err := kccInCluster.ClientConfig(); err == nil {
logrus.Debug("using kube config in cluster")
kcc = kccInCluster
}
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.BuildkitdFlags, n.Files, n.DriverOpts, n.Platforms, b.opts.contextPathHash, lno.dialMeta)
d, err := driver.GetDriver(ctx, factory, driver.InitConfig{
Name: driver.BuilderName(n.Name),
EndpointAddr: n.Endpoint,
DockerAPI: dockerapi,
DockerContext: b.opts.dockerCli.CurrentContext(),
ContextStore: b.opts.dockerCli.ContextStore(),
BuildkitdFlags: n.BuildkitdFlags,
Files: n.Files,
DriverOpts: n.DriverOpts,
Auth: imageopt.Auth,
Platforms: n.Platforms,
ContextPathHash: b.opts.contextPathHash,
DialMeta: lno.dialMeta,
})
if err != nil {
node.Err = err
return nil
@ -151,7 +140,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
node.ImageOpt = imageopt
if lno.data {
if err := node.loadData(ctx); err != nil {
if err := node.loadData(ctx, lno.clientOpt...); err != nil {
node.Err = err
}
}
@ -181,12 +170,12 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
// dynamic nodes are used in Kubernetes driver.
// Kubernetes' pods are dynamically mapped to BuildKit Nodes.
if di.DriverInfo != nil && len(di.DriverInfo.DynamicNodes) > 0 {
for i := 0; i < len(di.DriverInfo.DynamicNodes); i++ {
for i := range di.DriverInfo.DynamicNodes {
diClone := di
if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
diClone.Platforms = pl
}
nodes = append(nodes, di)
nodes = append(nodes, diClone)
}
dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
}
@ -195,7 +184,7 @@ func (b *Builder) LoadNodes(ctx context.Context, opts ...LoadNodesOption) (_ []N
// not append (remove the static nodes in the store)
b.NodeGroup.Nodes = dynamicNodes
b.nodes = nodes
b.NodeGroup.Dynamic = true
b.Dynamic = true
}
}
@ -247,7 +236,7 @@ func (n *Node) MarshalJSON() ([]byte, error) {
})
}
func (n *Node) loadData(ctx context.Context) error {
func (n *Node) loadData(ctx context.Context, clientOpt ...client.ClientOpt) error {
if n.Driver == nil {
return nil
}
@ -257,7 +246,7 @@ func (n *Node) loadData(ctx context.Context) error {
}
n.DriverInfo = info
if n.DriverInfo.Status == driver.Running {
driverClient, err := n.Driver.Client(ctx)
driverClient, err := n.Driver.Client(ctx, clientOpt...)
if err != nil {
return err
}
@ -272,6 +261,7 @@ func (n *Node) loadData(ctx context.Context) error {
n.GCPolicy = w.GCPolicy
n.Labels = w.Labels
}
n.CDIDevices = w.CDIDevices
}
sort.Strings(n.IDs)
n.Platforms = platformutil.Dedupe(n.Platforms)
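
WithClientOpt follows the same functional-options pattern already used by WithData and WithDialMeta. A minimal self-contained sketch of that pattern (illustrative types, not the buildx ones):

```go
package main

import "fmt"

// loadOptions stands in for buildx's loadNodesOptions.
type loadOptions struct {
	data     bool
	dialMeta map[string][]string
}

type LoadOption func(*loadOptions)

func WithData() LoadOption {
	return func(o *loadOptions) { o.data = true }
}

func WithDialMeta(m map[string][]string) LoadOption {
	return func(o *loadOptions) { o.dialMeta = m }
}

// load applies each option to a zero-valued config, so new options (like
// WithClientOpt above) can be added without changing existing call sites.
func load(opts ...LoadOption) loadOptions {
	var o loadOptions
	for _, fn := range opts {
		fn(&o)
	}
	return o
}

func main() {
	fmt.Printf("%+v\n", load(WithData())) // {data:true dialMeta:map[]}
}
```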

cmd/buildx/debug.go (new file, 75 lines)
View File

@ -0,0 +1,75 @@
package main
import (
"context"
"os"
"runtime"
"runtime/pprof"
"github.com/moby/buildkit/util/bklog"
"github.com/sirupsen/logrus"
)
func setupDebugProfiles(ctx context.Context) (stop func()) {
var stopFuncs []func()
if fn := setupCPUProfile(ctx); fn != nil {
stopFuncs = append(stopFuncs, fn)
}
if fn := setupHeapProfile(ctx); fn != nil {
stopFuncs = append(stopFuncs, fn)
}
return func() {
for _, fn := range stopFuncs {
fn()
}
}
}
func setupCPUProfile(ctx context.Context) (stop func()) {
if cpuProfile := os.Getenv("BUILDX_CPU_PROFILE"); cpuProfile != "" {
f, err := os.Create(cpuProfile)
if err != nil {
bklog.G(ctx).Warn("could not create cpu profile", logrus.WithError(err))
return nil
}
if err := pprof.StartCPUProfile(f); err != nil {
bklog.G(ctx).Warn("could not start cpu profile", logrus.WithError(err))
_ = f.Close()
return nil
}
return func() {
pprof.StopCPUProfile()
if err := f.Close(); err != nil {
bklog.G(ctx).Warn("could not close file for cpu profile", logrus.WithError(err))
}
}
}
return nil
}
func setupHeapProfile(ctx context.Context) (stop func()) {
if heapProfile := os.Getenv("BUILDX_MEM_PROFILE"); heapProfile != "" {
// Memory profile is only created on stop.
return func() {
f, err := os.Create(heapProfile)
if err != nil {
bklog.G(ctx).Warn("could not create memory profile", logrus.WithError(err))
return
}
// get up-to-date statistics
runtime.GC()
if err := pprof.WriteHeapProfile(f); err != nil {
bklog.G(ctx).Warn("could not write memory profile", logrus.WithError(err))
}
if err := f.Close(); err != nil {
bklog.G(ctx).Warn("could not close file for memory profile", logrus.WithError(err))
}
}
}
return nil
}
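
The profiling hooks above follow the standard runtime/pprof pattern: a CPU profile is started up front and stopped at exit, while the heap profile is a snapshot written only on stop. A minimal standalone sketch of the same pattern (generic env var names; buildx uses BUILDX_CPU_PROFILE and BUILDX_MEM_PROFILE as shown above):

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// CPU profile: started up front, recorded continuously until stopped.
	if path := os.Getenv("CPU_PROFILE"); path != "" {
		f, err := os.Create(path)
		if err != nil {
			log.Fatal(err)
		}
		if err := pprof.StartCPUProfile(f); err != nil {
			log.Fatal(err)
		}
		defer func() {
			pprof.StopCPUProfile()
			f.Close()
		}()
	}

	work()

	// Heap profile: a point-in-time snapshot, written at exit after a GC so
	// the allocation statistics are up to date.
	if path := os.Getenv("MEM_PROFILE"); path != "" {
		f, err := os.Create(path)
		if err != nil {
			log.Fatal(err)
		}
		runtime.GC()
		if err := pprof.WriteHeapProfile(f); err != nil {
			log.Fatal(err)
		}
		f.Close()
	}
}

func work() {
	s := make([]int, 0, 1)
	for i := 0; i < 1_000_000; i++ {
		s = append(s, i)
	}
	_ = s
}
```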

View File

@ -1,23 +1,26 @@
package main
import (
"context"
"fmt"
"os"
"path/filepath"
"github.com/docker/buildx/commands"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/manager"
"github.com/docker/cli/cli-plugins/metadata"
"github.com/docker/cli/cli-plugins/plugin"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
cliflags "github.com/docker/cli/cli/flags"
"github.com/moby/buildkit/solver/errdefs"
solvererrdefs "github.com/moby/buildkit/solver/errdefs"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/stack"
//nolint:staticcheck // vendored dependencies may still use this
"github.com/containerd/containerd/pkg/seed"
"github.com/pkg/errors"
"go.opentelemetry.io/otel"
"google.golang.org/grpc/codes"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
@ -25,32 +28,56 @@ import (
_ "github.com/docker/buildx/driver/docker-container"
_ "github.com/docker/buildx/driver/kubernetes"
_ "github.com/docker/buildx/driver/remote"
// Use custom grpc codec to utilize vtprotobuf
_ "github.com/moby/buildkit/util/grpcutil/encoding/proto"
)
func init() {
//nolint:staticcheck
seed.WithTimeAndRand()
stack.SetVersionInfo(version.Version, version.Revision)
}
func runStandalone(cmd *command.DockerCli) error {
if err := cmd.Initialize(cliflags.NewClientOptions()); err != nil {
return err
}
rootCmd := commands.NewRootCmd(os.Args[0], false, cmd)
defer flushMetrics(cmd)
executable := os.Args[0]
rootCmd := commands.NewRootCmd(filepath.Base(executable), false, cmd)
return rootCmd.Execute()
}
// flushMetrics will manually flush metrics from the configured
// meter provider. This is needed when running in standalone mode
// because the meter provider is initialized by the cli library,
// but the mechanism for forcing it to report is not presently
// exposed and not invoked when run in standalone mode.
// There are plans to fix that in the next release, but this is
// needed temporarily until the API for this is more thorough.
func flushMetrics(cmd *command.DockerCli) {
if mp, ok := cmd.MeterProvider().(command.MeterProvider); ok {
if err := mp.ForceFlush(context.Background()); err != nil {
otel.Handle(err)
}
}
}
func runPlugin(cmd *command.DockerCli) error {
rootCmd := commands.NewRootCmd("buildx", true, cmd)
return plugin.RunPlugin(cmd, rootCmd, manager.Metadata{
return plugin.RunPlugin(cmd, rootCmd, metadata.Metadata{
SchemaVersion: "0.1.0",
Vendor: "Docker Inc.",
Version: version.Version,
})
}
func run(cmd *command.DockerCli) error {
stopProfiles := setupDebugProfiles(context.TODO())
defer stopProfiles()
if plugin.RunningStandalone() {
return runStandalone(cmd)
}
return runPlugin(cmd)
}
func main() {
cmd, err := command.NewDockerCli()
if err != nil {
@ -58,15 +85,11 @@ func main() {
os.Exit(1)
}
if plugin.RunningStandalone() {
err = runStandalone(cmd)
} else {
err = runPlugin(cmd)
}
if err == nil {
if err = run(cmd); err == nil {
return
}
// Check the error from the run function above.
if sterr, ok := err.(cli.StatusError); ok {
if sterr.Status != "" {
fmt.Fprintln(cmd.Err(), sterr.Status)
@ -79,7 +102,14 @@ func main() {
os.Exit(sterr.StatusCode)
}
for _, s := range errdefs.Sources(err) {
// Check for ExitCodeError, which is used to exit with a specific code
// without printing an error message.
var exitCodeErr cobrautil.ExitCodeError
if errors.As(err, &exitCodeErr) {
os.Exit(int(exitCodeErr))
}
for _, s := range solvererrdefs.Sources(err) {
s.Print(cmd.Err())
}
if debug.IsEnabled() {
@ -87,9 +117,25 @@ func main() {
} else {
fmt.Fprintf(cmd.Err(), "ERROR: %v\n", err)
}
if ebr, ok := err.(*desktop.ErrorWithBuildRef); ok {
var ebr *desktop.ErrorWithBuildRef
if errors.As(err, &ebr) {
ebr.Print(cmd.Err())
}
os.Exit(1)
exitCode := 1
switch grpcerrors.Code(err) {
case codes.Internal:
exitCode = 100 // https://github.com/square/exit/blob/v1.3.0/exit.go#L70
case codes.ResourceExhausted:
exitCode = 102
case codes.Canceled:
exitCode = 130
default:
if errors.Is(err, context.Canceled) {
exitCode = 130
}
}
os.Exit(exitCode)
}
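
The exit-code mapping can be reproduced with the stock gRPC status package; buildx itself goes through buildkit's grpcerrors helper as shown above, so this is only an approximate sketch:

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// exitCodeFor mirrors the mapping introduced above: a few well-known gRPC
// codes get distinct process exit codes; everything else exits 1.
func exitCodeFor(err error) int {
	switch status.Code(err) {
	case codes.Internal:
		return 100 // https://github.com/square/exit/blob/v1.3.0/exit.go#L70
	case codes.ResourceExhausted:
		return 102
	case codes.Canceled:
		return 130
	}
	if errors.Is(err, context.Canceled) {
		return 130 // plain context cancellation, not surfaced as a gRPC status
	}
	return 1
}

func main() {
	fmt.Println(exitCodeFor(status.Error(codes.Internal, "boom"))) // 100
	fmt.Println(exitCodeFor(context.Canceled))                     // 130
	fmt.Println(exitCodeFor(errors.New("generic failure")))        // 1
}
```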

View File

@ -4,7 +4,6 @@ import (
"github.com/moby/buildkit/util/tracing/detect"
"go.opentelemetry.io/otel"
_ "github.com/moby/buildkit/util/tracing/detect/delegated"
_ "github.com/moby/buildkit/util/tracing/env"
)

View File

@ -1 +1,4 @@
comment: false
ignore:
- "**/*.pb.go"

View File

@ -1,24 +1,35 @@
package commands
import (
"bytes"
"cmp"
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"os"
"slices"
"sort"
"strings"
"sync"
"text/tabwriter"
"github.com/containerd/console"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/bake"
"github.com/docker/buildx/bake/hclparser"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/dockerutil"
"github.com/docker/buildx/util/osutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/buildx/util/tracing"
"github.com/docker/cli/cli/command"
@ -26,23 +37,45 @@ import (
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"github.com/tonistiigi/go-csvvalue"
"go.opentelemetry.io/otel/attribute"
)
const (
bakeEnvFileSeparator = "BUILDX_BAKE_PATH_SEPARATOR"
bakeEnvFilePath = "BUILDX_BAKE_FILE"
)
type bakeOptions struct {
files []string
overrides []string
printOnly bool
files []string
overrides []string
sbom string
provenance string
allow []string
builder string
metadataFile string
exportPush bool
exportLoad bool
callFunc string
print bool
list string
// TODO: remove deprecated flags
listTargets bool
listVars bool
}
func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags) (err error) {
ctx, end, err := tracing.TraceCurrentCommand(ctx, "bake")
func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in bakeOptions, cFlags commonFlags, filesFromEnv bool) (err error) {
mp := dockerCli.MeterProvider()
ctx, end, err := tracing.TraceCurrentCommand(ctx, append([]string{"bake"}, targets...),
attribute.String("builder", in.builder),
attribute.StringSlice("targets", targets),
attribute.StringSlice("files", in.files),
)
if err != nil {
return err
}
@ -50,34 +83,25 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
end(err)
}()
var url string
cmdContext := "cwd://"
if len(targets) > 0 {
if build.IsRemoteURL(targets[0]) {
url = targets[0]
targets = targets[1:]
if len(targets) > 0 {
if build.IsRemoteURL(targets[0]) {
cmdContext = targets[0]
targets = targets[1:]
}
}
}
}
url, cmdContext, targets := bakeArgs(targets)
if len(targets) == 0 {
targets = []string{"default"}
}
callFunc, err := buildflags.ParseCallFunc(in.callFunc)
if err != nil {
return err
}
overrides := in.overrides
if in.exportPush {
if in.exportLoad {
return errors.Errorf("push and load may not be set together at the moment")
}
overrides = append(overrides, "*.push=true")
} else if in.exportLoad {
overrides = append(overrides, "*.output=type=docker")
}
if in.exportLoad {
overrides = append(overrides, "*.load=true")
}
if callFunc != nil {
overrides = append(overrides, fmt.Sprintf("*.call=%s", callFunc.Name))
}
if cFlags.noCache != nil {
overrides = append(overrides, fmt.Sprintf("*.no-cache=%t", *cFlags.noCache))
@ -93,14 +117,31 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
}
contextPathHash, _ := os.Getwd()
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
ent, err := bake.ParseEntitlements(in.allow)
if err != nil {
return err
}
wd, err := os.Getwd()
if err != nil {
return errors.Wrapf(err, "failed to get current working directory")
}
// filesystem access under the current working directory is allowed by default
ent.FSRead = append(ent.FSRead, wd)
ent.FSWrite = append(ent.FSWrite, wd)
ctx2, cancel := context.WithCancelCause(context.TODO())
defer cancel(errors.WithStack(context.Canceled))
var nodes []builder.Node
var progressConsoleDesc, progressTextDesc string
if in.print && in.list != "" {
return errors.New("--print and --list are mutually exclusive")
}
// instance only needed for reading remote bake files or building
if url != "" || !in.printOnly {
var driverType string
if url != "" || (!in.print && in.list == "") {
b, err := builder.New(dockerCli,
builder.WithName(in.builder),
builder.WithContextPathHash(contextPathHash),
@ -117,34 +158,41 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
}
progressConsoleDesc = fmt.Sprintf("%s:%s", b.Driver, b.Name)
progressTextDesc = fmt.Sprintf("building with %q instance using %s driver", b.Name, b.Driver)
driverType = b.Driver
}
var term bool
if _, err := console.ConsoleFromFile(os.Stderr); err == nil {
term = true
}
attributes := bakeMetricAttributes(dockerCli, driverType, url, cmdContext, targets, &in)
progressMode := progressui.DisplayMode(cFlags.progress)
printer, err := progress.NewPrinter(ctx2, os.Stderr, progressMode,
progress.WithDesc(progressTextDesc, progressConsoleDesc),
)
if err != nil {
return err
}
var printer *progress.Printer
defer func() {
if printer != nil {
err1 := printer.Wait()
if err == nil {
err = err1
}
if err == nil && progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
}
printer.Wait()
}
}()
files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer)
makePrinter := func() error {
var err error
printer, err = progress.NewPrinter(ctx2, os.Stderr, progressMode,
progress.WithDesc(progressTextDesc, progressConsoleDesc),
progress.WithMetrics(mp, attributes),
progress.WithOnClose(func() {
printWarnings(os.Stderr, printer.Warnings(), progressMode)
}),
)
return err
}
if err := makePrinter(); err != nil {
return err
}
files, inp, err := readBakeFiles(ctx, nodes, url, in.files, dockerCli.In(), printer, filesFromEnv)
if err != nil {
return err
}
@ -153,12 +201,34 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
return errors.New("couldn't find a bake definition")
}
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, map[string]string{
defaults := map[string]string{
// don't forget to update documentation if you add a new
// built-in variable: docs/bake-reference.md#built-in-variables
"BAKE_CMD_CONTEXT": cmdContext,
"BAKE_LOCAL_PLATFORM": platforms.DefaultString(),
})
"BAKE_LOCAL_PLATFORM": platforms.Format(platforms.DefaultSpec()),
}
if in.list != "" {
cfg, pm, err := bake.ParseFiles(files, defaults)
if err != nil {
return err
}
if err = printer.Wait(); err != nil {
return err
}
list, err := parseList(in.list)
if err != nil {
return err
}
switch list.Type {
case "targets":
return printTargetList(dockerCli.Out(), list.Format, cfg)
case "variables":
return printVars(dockerCli.Out(), list.Format, pm.AllVariables)
}
}
tgts, grps, err := bake.ReadTargets(ctx, files, targets, overrides, defaults, &ent)
if err != nil {
return err
}
@ -190,58 +260,199 @@ func runBake(ctx context.Context, dockerCli command.Cli, targets []string, in ba
Target: tgts,
}
if in.printOnly {
dt, err := json.MarshalIndent(def, "", " ")
if in.print {
if err = printer.Wait(); err != nil {
return err
}
dtdef, err := json.MarshalIndent(def, "", " ")
if err != nil {
return err
}
err = printer.Wait()
printer = nil
if err != nil {
_, err = fmt.Fprintln(dockerCli.Out(), string(dtdef))
return err
}
for k, opt := range bo {
if opt.CallFunc != nil {
cf, err := buildflags.ParseCallFunc(opt.CallFunc.Name)
if err != nil {
return err
}
if cf == nil {
opt.CallFunc = nil
bo[k] = opt
} else {
opt.CallFunc.Name = cf.Name
}
}
}
exp, err := ent.Validate(bo)
if err != nil {
return err
}
if progressMode != progressui.RawJSONMode {
if err := exp.Prompt(ctx, url != "", &syncWriter{w: dockerCli.Err(), wait: printer.Wait}); err != nil {
return err
}
}
if printer.IsDone() {
// init new printer as old one was stopped to show the prompt
if err := makePrinter(); err != nil {
return err
}
fmt.Fprintln(dockerCli.Out(), string(dt))
return nil
}
// local state group
groupRef := identity.NewID()
var refs []string
for k, b := range bo {
b.Ref = identity.NewID()
b.GroupRef = groupRef
refs = append(refs, b.Ref)
bo[k] = b
}
dt, err := json.Marshal(def)
if err != nil {
return err
}
if err := saveLocalStateGroup(dockerCli, groupRef, localstate.StateGroup{
Definition: dt,
Targets: targets,
Inputs: overrides,
Refs: refs,
}); err != nil {
if err := saveLocalStateGroup(dockerCli, in, targets, bo); err != nil {
return err
}
resp, err := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), printer)
done := timeBuildCommand(mp, attributes)
resp, retErr := build.Build(ctx, nodes, bo, dockerutil.NewClient(dockerCli), confutil.NewConfig(dockerCli), printer)
if err := printer.Wait(); retErr == nil {
retErr = err
}
if retErr != nil {
err = wrapBuildError(retErr, true)
}
done(err)
if err != nil {
return wrapBuildError(err, true)
return err
}
if progressMode != progressui.QuietMode && progressMode != progressui.RawJSONMode {
desktop.PrintBuildDetails(os.Stderr, printer.BuildRefs(), term)
}
if len(in.metadataFile) > 0 {
dt := make(map[string]interface{})
dt := make(map[string]any)
for t, r := range resp {
dt[t] = decodeExporterResponse(r.ExporterResponse)
}
if callFunc == nil {
if warnings := printer.Warnings(); len(warnings) > 0 && confutil.MetadataWarningsEnabled() {
dt["buildx.build.warnings"] = warnings
}
}
if err := writeMetadataFile(in.metadataFile, dt); err != nil {
return err
}
}
return err
var callFormatJSON bool
jsonResults := map[string]map[string]any{}
if callFunc != nil {
callFormatJSON = callFunc.Format == "json"
}
var sep bool
var exitCode int
names := make([]string, 0, len(bo))
for name := range bo {
names = append(names, name)
}
slices.Sort(names)
for _, name := range names {
req := bo[name]
if req.CallFunc == nil {
continue
}
pf := &buildflags.CallFunc{
Name: req.CallFunc.Name,
Format: req.CallFunc.Format,
IgnoreStatus: req.CallFunc.IgnoreStatus,
}
if callFunc != nil {
pf.Format = callFunc.Format
pf.IgnoreStatus = callFunc.IgnoreStatus
}
var res map[string]string
if sp, ok := resp[name]; ok {
res = sp.ExporterResponse
}
if callFormatJSON {
jsonResults[name] = map[string]any{}
buf := &bytes.Buffer{}
if code, err := printResult(buf, pf, res, name, &req.Inputs); err != nil {
jsonResults[name]["error"] = err.Error()
exitCode = 1
} else if code != 0 && exitCode == 0 {
exitCode = code
}
m := map[string]*json.RawMessage{}
if err := json.Unmarshal(buf.Bytes(), &m); err == nil {
for k, v := range m {
jsonResults[name][k] = v
}
} else {
jsonResults[name][pf.Name] = json.RawMessage(buf.Bytes())
}
} else {
if sep {
fmt.Fprintln(dockerCli.Out())
} else {
sep = true
}
fmt.Fprintf(dockerCli.Out(), "%s\n", name)
if descr := tgts[name].Description; descr != "" {
fmt.Fprintf(dockerCli.Out(), "%s\n", descr)
}
fmt.Fprintln(dockerCli.Out())
if code, err := printResult(dockerCli.Out(), pf, res, name, &req.Inputs); err != nil {
fmt.Fprintf(dockerCli.Out(), "error: %v\n", err)
exitCode = 1
} else if code != 0 && exitCode == 0 {
exitCode = code
}
}
}
if callFormatJSON {
out := struct {
Group map[string]*bake.Group `json:"group,omitempty"`
Target map[string]map[string]any `json:"target"`
}{
Group: grps,
Target: map[string]map[string]any{},
}
for name, def := range tgts {
out.Target[name] = map[string]any{
"build": def,
}
if res, ok := jsonResults[name]; ok {
printName := bo[name].CallFunc.Name
if printName == "lint" {
printName = "check"
}
out.Target[name][printName] = res
}
}
dt, err := json.MarshalIndent(out, "", " ")
if err != nil {
return err
}
fmt.Fprintln(dockerCli.Out(), string(dt))
}
for _, name := range names {
if sp, ok := resp[name]; ok {
if v, ok := sp.ExporterResponse["frontend.result.inlinemessage"]; ok {
fmt.Fprintf(dockerCli.Out(), "\n# %s\n%s\n", name, v)
}
}
}
if exitCode != 0 {
return cobrautil.ExitCodeError(exitCode)
}
return nil
}
func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@ -253,6 +464,15 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Aliases: []string{"f"},
Short: "Build from a file",
RunE: func(cmd *cobra.Command, args []string) error {
filesFromEnv := false
if len(options.files) == 0 {
if envFiles, err := bakeEnvFiles(os.LookupEnv); err != nil {
return err
} else if len(envFiles) > 0 {
options.files = envFiles
filesFromEnv = true
}
}
// reset to nil to avoid override is unset
if !cmd.Flags().Lookup("no-cache").Changed {
cFlags.noCache = nil
@ -260,44 +480,138 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
if !cmd.Flags().Lookup("pull").Changed {
cFlags.pull = nil
}
if options.list == "" {
if options.listTargets {
options.list = "targets"
} else if options.listVars {
options.list = "variables"
}
}
options.builder = rootOpts.builder
options.metadataFile = cFlags.metadataFile
// Other common flags (noCache, pull and progress) are processed in runBake function.
return runBake(cmd.Context(), dockerCli, args, options, cFlags)
return runBake(cmd.Context(), dockerCli, args, options, cFlags, filesFromEnv)
},
ValidArgsFunction: completion.BakeTargets(options.files),
ValidArgsFunction: completion.BakeTargets(options.files),
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.StringArrayVarP(&options.files, "file", "f", []string{}, "Build definition file")
flags.BoolVar(&options.exportLoad, "load", false, `Shorthand for "--set=*.output=type=docker"`)
flags.BoolVar(&options.printOnly, "print", false, "Print the options without building")
flags.BoolVar(&options.exportPush, "push", false, `Shorthand for "--set=*.output=type=registry"`)
flags.StringVar(&options.sbom, "sbom", "", `Shorthand for "--set=*.attest=type=sbom"`)
flags.StringVar(&options.provenance, "provenance", "", `Shorthand for "--set=*.attest=type=provenance"`)
flags.StringArrayVar(&options.overrides, "set", nil, `Override target value (e.g., "targetpattern.key=value")`)
flags.StringVar(&options.callFunc, "call", "build", `Set method for evaluating build ("check", "outline", "targets")`)
flags.StringArrayVar(&options.allow, "allow", nil, "Allow build to access specified resources")
flags.VarPF(callAlias(&options.callFunc, "check"), "check", "", `Shorthand for "--call=check"`)
flags.Lookup("check").NoOptDefVal = "true"
flags.BoolVar(&options.print, "print", false, "Print the options without building")
flags.StringVar(&options.list, "list", "", "List targets or variables")
// TODO: remove deprecated flags
flags.BoolVar(&options.listTargets, "list-targets", false, "List available targets")
flags.MarkHidden("list-targets")
flags.MarkDeprecated("list-targets", "list-targets is deprecated, use list=targets instead")
flags.BoolVar(&options.listVars, "list-variables", false, "List defined variables")
flags.MarkHidden("list-variables")
flags.MarkDeprecated("list-variables", "list-variables is deprecated, use list=variables instead")
commonBuildFlags(&cFlags, flags)
return cmd
}
func saveLocalStateGroup(dockerCli command.Cli, ref string, lsg localstate.StateGroup) error {
l, err := localstate.New(confutil.ConfigDir(dockerCli))
func bakeEnvFiles(lookup func(string) (string, bool)) ([]string, error) {
sep, _ := lookup(bakeEnvFileSeparator)
if sep == "" {
sep = string(os.PathListSeparator)
}
f, ok := lookup(bakeEnvFilePath)
if ok {
return cleanPaths(strings.Split(f, sep))
}
return []string{}, nil
}
func cleanPaths(p []string) ([]string, error) {
var paths []string
for _, f := range p {
f = strings.TrimSpace(f)
if f == "" {
continue
}
if f == "-" {
paths = append(paths, f)
continue
}
if _, err := os.Stat(f); err != nil {
return nil, err
}
paths = append(paths, f)
}
return paths, nil
}
func saveLocalStateGroup(dockerCli command.Cli, in bakeOptions, targets []string, bo map[string]build.Options) error {
l, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
return l.SaveGroup(ref, lsg)
defer l.MigrateIfNeeded()
prm := confutil.MetadataProvenance()
if len(in.metadataFile) == 0 {
prm = confutil.MetadataProvenanceModeDisabled
}
groupRef := identity.NewID()
refs := make([]string, 0, len(bo))
for k, b := range bo {
if b.CallFunc != nil {
continue
}
b.Ref = identity.NewID()
b.GroupRef = groupRef
b.ProvenanceResponseMode = prm
refs = append(refs, b.Ref)
bo[k] = b
}
if len(refs) == 0 {
return nil
}
return l.SaveGroup(groupRef, localstate.StateGroup{
Refs: refs,
Targets: targets,
})
}
func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer) (files []bake.File, inp *bake.Input, err error) {
// bakeArgs will retrieve the remote url, command context, and targets
// from the command line arguments.
func bakeArgs(args []string) (url, cmdContext string, targets []string) {
cmdContext, targets = "cwd://", args
if len(targets) == 0 || !build.IsRemoteURL(targets[0]) {
return url, cmdContext, targets
}
url, targets = targets[0], targets[1:]
if len(targets) == 0 || !build.IsRemoteURL(targets[0]) {
return url, cmdContext, targets
}
cmdContext, targets = targets[0], targets[1:]
return url, cmdContext, targets
}
func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names []string, stdin io.Reader, pw progress.Writer, filesFromEnv bool) (files []bake.File, inp *bake.Input, err error) {
var lnames []string // local
var rnames []string // remote
var anames []string // both
for _, v := range names {
if strings.HasPrefix(v, "cwd://") {
tname := strings.TrimPrefix(v, "cwd://")
if tname, ok := strings.CutPrefix(v, "cwd://"); ok {
lnames = append(lnames, tname)
anames = append(anames, tname)
} else {
@ -317,7 +631,11 @@ func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names
if len(lnames) > 0 || url == "" {
var lfiles []bake.File
progress.Wrap("[internal] load local bake definitions", pw.Write, func(sub progress.SubLogger) error {
where := ""
if filesFromEnv {
where = " from " + bakeEnvFilePath + " env"
}
progress.Wrap("[internal] load local bake definitions"+where, pw.Write, func(sub progress.SubLogger) error {
if url != "" {
lfiles, err = bake.ReadLocalFiles(lnames, stdin, sub)
} else {
@ -333,3 +651,235 @@ func readBakeFiles(ctx context.Context, nodes []builder.Node, url string, names
return
}
type listEntry struct {
Type string
Format string
}
func parseList(input string) (listEntry, error) {
res := listEntry{}
fields, err := csvvalue.Fields(input, nil)
if err != nil {
return res, err
}
if len(fields) == 1 && fields[0] == input && !strings.HasPrefix(input, "type=") {
res.Type = input
}
if res.Type == "" {
for _, field := range fields {
key, value, ok := strings.Cut(field, "=")
if !ok {
return res, errors.Errorf("invalid value %s", field)
}
key = strings.TrimSpace(strings.ToLower(key))
switch key {
case "type":
res.Type = value
case "format":
res.Format = value
default:
return res, errors.Errorf("unexpected key '%s' in '%s'", key, field)
}
}
}
if res.Format == "" {
res.Format = "table"
}
switch res.Type {
case "targets", "variables":
default:
return res, errors.Errorf("invalid list type %q", res.Type)
}
switch res.Format {
case "table", "json":
default:
return res, errors.Errorf("invalid list format %q", res.Format)
}
return res, nil
}
func printVars(w io.Writer, format string, vars []*hclparser.Variable) error {
slices.SortFunc(vars, func(a, b *hclparser.Variable) int {
return cmp.Compare(a.Name, b.Name)
})
if format == "json" {
enc := json.NewEncoder(w)
enc.SetIndent("", " ")
return enc.Encode(vars)
}
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("VARIABLE\tTYPE\tVALUE\tDESCRIPTION\n"))
for _, v := range vars {
var value string
if v.Value != nil {
value = *v.Value
} else {
value = "<null>"
}
fmt.Fprintf(tw, "%s\t%s\t%s\t%s\n", v.Name, v.Type, value, v.Description)
}
return nil
}
func printTargetList(w io.Writer, format string, cfg *bake.Config) error {
type targetOrGroup struct {
name string
target *bake.Target
group *bake.Group
}
list := make([]targetOrGroup, 0, len(cfg.Targets)+len(cfg.Groups))
for _, tgt := range cfg.Targets {
list = append(list, targetOrGroup{name: tgt.Name, target: tgt})
}
for _, grp := range cfg.Groups {
list = append(list, targetOrGroup{name: grp.Name, group: grp})
}
slices.SortFunc(list, func(a, b targetOrGroup) int {
return cmp.Compare(a.name, b.name)
})
var tw *tabwriter.Writer
if format == "table" {
tw = tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
defer tw.Flush()
tw.Write([]byte("TARGET\tDESCRIPTION\n"))
}
type targetList struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
Group bool `json:"group,omitempty"`
}
var targetsList []targetList
for _, tgt := range list {
if strings.HasPrefix(tgt.name, "_") {
// convention for a private target
continue
}
var descr string
if tgt.target != nil {
descr = tgt.target.Description
targetsList = append(targetsList, targetList{Name: tgt.name, Description: descr})
} else if tgt.group != nil {
descr = tgt.group.Description
if len(tgt.group.Targets) > 0 {
slices.Sort(tgt.group.Targets)
names := strings.Join(tgt.group.Targets, ", ")
if descr != "" {
descr += " (" + names + ")"
} else {
descr = names
}
}
targetsList = append(targetsList, targetList{Name: tgt.name, Description: descr, Group: true})
}
if format == "table" {
fmt.Fprintf(tw, "%s\t%s\n", tgt.name, descr)
}
}
if format == "json" {
enc := json.NewEncoder(w)
enc.SetIndent("", " ")
return enc.Encode(targetsList)
}
return nil
}
func bakeMetricAttributes(dockerCli command.Cli, driverType, url, cmdContext string, targets []string, options *bakeOptions) attribute.Set {
return attribute.NewSet(
commandNameAttribute.String("bake"),
attribute.Stringer(string(commandOptionsHash), &bakeOptionsHash{
bakeOptions: options,
cfg: confutil.NewConfig(dockerCli),
url: url,
cmdContext: cmdContext,
targets: targets,
}),
driverNameAttribute.String(options.builder),
driverTypeAttribute.String(driverType),
)
}
type bakeOptionsHash struct {
*bakeOptions
cfg *confutil.Config
url string
cmdContext string
targets []string
result string
resultOnce sync.Once
}
func (o *bakeOptionsHash) String() string {
o.resultOnce.Do(func() {
url := o.url
cmdContext := o.cmdContext
if cmdContext == "cwd://" {
// Resolve the directory if the cmdContext is the current working directory.
cmdContext = osutil.GetWd()
}
// Sort the inputs for files and targets since the ordering
// doesn't matter, but avoid modifying the original slice.
files := immutableSort(o.files)
targets := immutableSort(o.targets)
joinedFiles := strings.Join(files, ",")
joinedTargets := strings.Join(targets, ",")
salt := o.cfg.TryNodeIdentifier()
h := sha256.New()
for _, s := range []string{url, cmdContext, joinedFiles, joinedTargets, salt} {
_, _ = io.WriteString(h, s)
h.Write([]byte{0})
}
o.result = hex.EncodeToString(h.Sum(nil))
})
return o.result
}
// immutableSort will sort the entries in s without modifying the original slice.
func immutableSort(s []string) []string {
if !sort.StringsAreSorted(s) {
cpy := make([]string, len(s))
copy(cpy, s)
sort.Strings(cpy)
return cpy
}
return s
}
type syncWriter struct {
w io.Writer
once sync.Once
wait func() error
}
func (w *syncWriter) Write(p []byte) (n int, err error) {
w.once.Do(func() {
if w.wait != nil {
err = w.wait()
}
})
if err != nil {
return 0, err
}
return w.w.Write(p)
}
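
For reference, the new --list flag accepts either a bare type or CSV key=value pairs. A simplified re-implementation of the accepted grammar (unlike the real csvvalue-based parser, this sketch does not handle CSV quoting):

```go
package main

import (
	"fmt"
	"strings"
)

// parseListArg is a simplified version of parseList above: it accepts a bare
// type ("targets") or key=value pairs ("type=targets,format=json").
func parseListArg(input string) (typ, format string, err error) {
	format = "table" // default, as above
	if !strings.Contains(input, "=") {
		typ = input
	} else {
		for _, field := range strings.Split(input, ",") {
			k, v, ok := strings.Cut(field, "=")
			if !ok {
				return "", "", fmt.Errorf("invalid value %s", field)
			}
			switch strings.TrimSpace(strings.ToLower(k)) {
			case "type":
				typ = v
			case "format":
				format = v
			default:
				return "", "", fmt.Errorf("unexpected key %q in %q", k, field)
			}
		}
	}
	switch typ {
	case "targets", "variables":
	default:
		return "", "", fmt.Errorf("invalid list type %q", typ)
	}
	switch format {
	case "table", "json":
	default:
		return "", "", fmt.Errorf("invalid list format %q", format)
	}
	return typ, format, nil
}

func main() {
	fmt.Println(parseListArg("type=targets,format=json")) // targets json <nil>
	fmt.Println(parseListArg("variables"))                // variables table <nil>
}
```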

File diff suppressed because it is too large.

View File

@ -98,7 +98,8 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runCreate(cmd.Context(), dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()

commands/dap.go (new file, 124 lines)
View File

@ -0,0 +1,124 @@
package commands
import (
"context"
"io"
"net"
"os"
"github.com/containerd/console"
"github.com/docker/buildx/dap"
"github.com/docker/buildx/dap/common"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
func dapCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options dapOptions
cmd := &cobra.Command{
Use: "dap",
Short: "Start debug adapter protocol compatible debugger",
DisableFlagsInUseLine: true,
}
cobrautil.MarkCommandExperimental(cmd)
dapBuildCmd := buildCmd(dockerCli, rootOpts, &options)
dapBuildCmd.Args = cobra.RangeArgs(0, 1)
// Remove aliases for documentation.
dapBuildCmd.Aliases = nil
delete(dapBuildCmd.Annotations, "aliases")
cmd.AddCommand(dapBuildCmd)
cmd.AddCommand(dapAttachCmd())
return cmd
}
type dapOptions struct{}
func (d *dapOptions) New(in ioset.In) (debuggerInstance, error) {
conn := dap.NewConn(in.Stdin, in.Stdout)
return &adapterProtocolDebugger{
Adapter: dap.New[LaunchConfig](),
conn: conn,
}, nil
}
type LaunchConfig struct {
Dockerfile string `json:"dockerfile,omitempty"`
ContextPath string `json:"contextPath,omitempty"`
Target string `json:"target,omitempty"`
common.Config
}
type adapterProtocolDebugger struct {
*dap.Adapter[LaunchConfig]
conn dap.Conn
}
func (d *adapterProtocolDebugger) Start(printer *progress.Printer, opts *BuildOptions) error {
cfg, err := d.Adapter.Start(context.Background(), d.conn)
if err != nil {
return errors.Wrap(err, "debug adapter did not start")
}
if cfg.Dockerfile != "" {
opts.DockerfileName = cfg.Dockerfile
}
if cfg.ContextPath != "" {
opts.ContextPath = cfg.ContextPath
}
if cfg.Target != "" {
opts.Target = cfg.Target
}
return nil
}
func (d *adapterProtocolDebugger) Stop() error {
defer d.conn.Close()
return d.Adapter.Stop()
}
func dapAttachCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "attach PATH",
Short: "Attach to a container created by the dap evaluate request",
Args: cli.ExactArgs(1),
Hidden: true,
RunE: func(cmd *cobra.Command, args []string) error {
c, err := console.ConsoleFromFile(os.Stdout)
if err != nil {
return err
}
if err := c.SetRaw(); err != nil {
return err
}
conn, err := net.Dial("unix", args[0])
if err != nil {
return err
}
fwd := ioset.NewSingleForwarder()
fwd.SetReader(os.Stdin)
fwd.SetWriter(conn, func() io.WriteCloser {
return conn
})
if _, err := io.Copy(os.Stdout, conn); err != nil && !errors.Is(err, io.EOF) {
return err
}
return nil
},
DisableFlagsInUseLine: true,
}
return cmd
}

commands/debug.go (new file, 180 lines)
View File

@ -0,0 +1,180 @@
package commands
import (
"encoding/json"
"io"
"os"
"strconv"
"strings"
"github.com/docker/buildx/build"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/ioset"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"github.com/tonistiigi/go-csvvalue"
)
type debugOptions struct {
// InvokeFlag is a flag to configure the launched debugger and the command executed on the debugger.
InvokeFlag string
// OnFlag is a flag to configure the timing of launching the debugger.
OnFlag string
}
// debuggerOptions will start a debuggerInstance.
type debuggerOptions interface {
New(in ioset.In) (debuggerInstance, error)
}
// debuggerInstance is an instance of a Debugger that has been started.
type debuggerInstance interface {
Start(printer *progress.Printer, opts *BuildOptions) error
Handler() build.Handler
Stop() error
Out() io.Writer
}
func debugCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options debugOptions
cmd := &cobra.Command{
Use: "debug",
Short: "Start debugger",
DisableFlagsInUseLine: true,
}
cobrautil.MarkCommandExperimental(cmd)
flags := cmd.Flags()
flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
cobrautil.MarkFlagsExperimental(flags, "invoke", "on")
cmd.AddCommand(buildCmd(dockerCli, rootOpts, &options))
return cmd
}
func (d *debugOptions) New(in ioset.In) (debuggerInstance, error) {
cfg, err := parseInvokeConfig(d.InvokeFlag, d.OnFlag)
if err != nil {
return nil, err
}
return &monitorDebuggerInstance{
cfg: cfg,
in: in.Stdin,
}, nil
}
type monitorDebuggerInstance struct {
cfg *build.InvokeConfig
in io.ReadCloser
m *monitor.Monitor
}
func (d *monitorDebuggerInstance) Start(printer *progress.Printer, opts *BuildOptions) error {
d.m = monitor.New(d.cfg, d.in, os.Stdout, os.Stderr, printer)
return nil
}
func (d *monitorDebuggerInstance) Handler() build.Handler {
return d.m.Handler()
}
func (d *monitorDebuggerInstance) Stop() error {
return d.m.Close()
}
func (d *monitorDebuggerInstance) Out() io.Writer {
return os.Stderr
}
func parseInvokeConfig(invoke, on string) (*build.InvokeConfig, error) {
cfg := &build.InvokeConfig{}
switch on {
case "always":
cfg.SuspendOn = build.SuspendAlways
case "error":
cfg.SuspendOn = build.SuspendError
default:
if invoke != "" {
cfg.SuspendOn = build.SuspendAlways
}
}
cfg.Tty = true
cfg.NoCmd = true
switch invoke {
case "default", "":
return cfg, nil
case "on-error":
// NOTE: we overwrite the command to run because the original one should fail on the failed step.
// TODO: make this configurable via flags or restorable from LLB.
// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113295900
cfg.Cmd = []string{"/bin/sh"}
cfg.NoCmd = false
return cfg, nil
}
csvParser := csvvalue.NewParser()
csvParser.LazyQuotes = true
fields, err := csvParser.Fields(invoke, nil)
if err != nil {
return nil, err
}
if len(fields) == 1 && !strings.Contains(fields[0], "=") {
cfg.Cmd = []string{fields[0]}
cfg.NoCmd = false
return cfg, nil
}
cfg.NoUser = true
cfg.NoCwd = true
for _, field := range fields {
parts := strings.SplitN(field, "=", 2)
if len(parts) != 2 {
return nil, errors.Errorf("invalid value %s", field)
}
key := strings.ToLower(parts[0])
value := parts[1]
switch key {
case "args":
cfg.Cmd = append(cfg.Cmd, maybeJSONArray(value)...)
cfg.NoCmd = false
case "entrypoint":
cfg.Entrypoint = append(cfg.Entrypoint, maybeJSONArray(value)...)
if cfg.Cmd == nil {
cfg.Cmd = []string{}
cfg.NoCmd = false
}
case "env":
cfg.Env = append(cfg.Env, maybeJSONArray(value)...)
case "user":
cfg.User = value
cfg.NoUser = false
case "cwd":
cfg.Cwd = value
cfg.NoCwd = false
case "tty":
cfg.Tty, err = strconv.ParseBool(value)
if err != nil {
return nil, errors.Errorf("failed to parse tty: %v", err)
}
default:
return nil, errors.Errorf("unknown key %q", key)
}
}
return cfg, nil
}
func maybeJSONArray(v string) []string {
var list []string
if err := json.Unmarshal([]byte(v), &list); err == nil {
return list
}
return []string{v}
}
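
maybeJSONArray is the piece that lets --invoke accept both plain strings and JSON arrays for args/entrypoint/env; here it is again with a small usage demo:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// maybeJSONArray, as defined above: a value that parses as a JSON array is
// split into its elements; anything else is kept as a single item.
func maybeJSONArray(v string) []string {
	var list []string
	if err := json.Unmarshal([]byte(v), &list); err == nil {
		return list
	}
	return []string{v}
}

func main() {
	fmt.Println(maybeJSONArray(`["sh","-c","id"]`)) // [sh -c id]
	fmt.Println(maybeJSONArray("bash"))             // [bash]
}
```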

View File

@ -1,92 +0,0 @@
package debug
import (
"context"
"os"
"runtime"
"github.com/containerd/console"
"github.com/docker/buildx/controller"
"github.com/docker/buildx/controller/control"
controllerapi "github.com/docker/buildx/controller/pb"
"github.com/docker/buildx/monitor"
"github.com/docker/buildx/util/cobrautil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
// DebugConfig is a user-specified configuration for the debugger.
type DebugConfig struct {
// InvokeFlag is a flag to configure the launched debugger and the command executed on the debugger.
InvokeFlag string
// OnFlag is a flag to configure the timing of launching the debugger.
OnFlag string
}
// DebuggableCmd is a command that supports debugger with recognizing the user-specified DebugConfig.
type DebuggableCmd interface {
// NewDebugger returns the new *cobra.Command with support for the debugger with recognizing DebugConfig.
NewDebugger(*DebugConfig) *cobra.Command
}
func RootCmd(dockerCli command.Cli, children ...DebuggableCmd) *cobra.Command {
var controlOptions control.ControlOptions
var progressMode string
var options DebugConfig
cmd := &cobra.Command{
Use: "debug",
Short: "Start debugger",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, progressui.DisplayMode(progressMode))
if err != nil {
return err
}
ctx := context.TODO()
c, err := controller.NewController(ctx, controlOptions, dockerCli, printer)
if err != nil {
return err
}
defer func() {
if err := c.Close(); err != nil {
logrus.Warnf("failed to close server connection %v", err)
}
}()
con := console.Current()
if err := con.SetRaw(); err != nil {
return errors.Errorf("failed to configure terminal: %v", err)
}
_, err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
Tty: true,
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
con.Reset()
return err
},
}
cobrautil.MarkCommandExperimental(cmd)
flags := cmd.Flags()
flags.StringVar(&options.InvokeFlag, "invoke", "", "Launch a monitor with executing specified command")
flags.StringVar(&options.OnFlag, "on", "error", "When to launch the monitor ([always, error])")
flags.StringVar(&controlOptions.Root, "root", "", "Specify root directory of server to connect for the monitor")
flags.BoolVar(&controlOptions.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server for the monitor (supported only on linux)")
flags.StringVar(&controlOptions.ServerConfig, "server-config", "", "Specify buildx server config file for the monitor (used only when launching new server)")
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty") for the monitor. Use plain to show container output`)
cobrautil.MarkFlagsExperimental(flags, "invoke", "on", "root", "detach", "server-config")
for _, c := range children {
cmd.AddCommand(c.NewDebugger(&options))
}
return cmd
}

View File

@ -5,14 +5,14 @@ import (
"net"
"os"
"github.com/containerd/containerd/platforms"
"github.com/containerd/platforms"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/util/progress/progressui"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
@ -49,7 +49,7 @@ func runDialStdio(dockerCli command.Cli, opts stdioOptions) error {
return err
}
var p *v1.Platform
var p *ocispecs.Platform
if opts.platform != "" {
pp, err := platforms.Parse(opts.platform)
if err != nil {
@@ -122,11 +122,11 @@ func dialStdioCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
opts.builder = rootOpts.builder
return runDialStdio(dockerCli, opts)
},
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
cmd.Flags()
flags.StringVar(&opts.platform, "platform", os.Getenv("DOCKER_DEFAULT_PLATFORM"), "Target platform: this is used for node selection")
flags.StringVar(&opts.progress, "progress", "quiet", "Set type of progress output (auto, plain, tty).")
flags.StringVar(&opts.progress, "progress", "quiet", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
return cmd
}

View File

@@ -4,8 +4,6 @@ import (
"context"
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
"time"
@@ -13,20 +11,77 @@ import (
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/docker/cli/opts"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
const (
duIDHeader = "ID"
duParentsHeader = "PARENTS"
duCreatedAtHeader = "CREATED AT"
duMutableHeader = "MUTABLE"
duReclaimHeader = "RECLAIMABLE"
duSharedHeader = "SHARED"
duSizeHeader = "SIZE"
duDescriptionHeader = "DESCRIPTION"
duUsageHeader = "USAGE COUNT"
duLastUsedAtHeader = "LAST ACCESSED"
duTypeHeader = "TYPE"
duDefaultTableFormat = "table {{.ID}}\t{{.Reclaimable}}\t{{.Size}}\t{{.LastUsedAt}}"
duDefaultPrettyTemplate = `ID: {{.ID}}
{{- if .Parents }}
Parents:
{{- range .Parents }}
- {{.}}
{{- end }}
{{- end }}
Created at: {{.CreatedAt}}
Mutable: {{.Mutable}}
Reclaimable: {{.Reclaimable}}
Shared: {{.Shared}}
Size: {{.Size}}
{{- if .Description}}
Description: {{ .Description }}
{{- end }}
Usage count: {{.UsageCount}}
{{- if .LastUsedAt}}
Last used: {{ .LastUsedAt }}
{{- end }}
{{- if .Type}}
Type: {{ .Type }}
{{- end }}
`
)
type duOptions struct {
builder string
filter opts.FilterOpt
verbose bool
format string
}
func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) error {
if opts.format != "" && opts.verbose {
return errors.New("--format and --verbose cannot be used together")
} else if opts.format == "" {
if opts.verbose {
opts.format = duDefaultPrettyTemplate
} else {
opts.format = duDefaultTableFormat
}
} else if opts.format == formatter.PrettyFormatKey {
opts.format = duDefaultPrettyTemplate
} else if opts.format == formatter.TableFormatKey {
opts.format = duDefaultTableFormat
}
pi, err := toBuildkitPruneInfo(opts.filter.Value())
if err != nil {
return err
@@ -74,33 +129,53 @@ func runDiskUsage(ctx context.Context, dockerCli command.Cli, opts duOptions) er
return err
}
tw := tabwriter.NewWriter(os.Stdout, 1, 8, 1, '\t', 0)
first := true
fctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(opts.format),
}
var dus []*client.UsageInfo
for _, du := range out {
if du == nil {
continue
}
if opts.verbose {
printVerbose(tw, du)
} else {
if first {
printTableHeader(tw)
first = false
}
for _, di := range du {
printTableRow(tw, di)
}
tw.Flush()
if du != nil {
dus = append(dus, du...)
}
}
if opts.filter.Value().Len() == 0 {
printSummary(tw, out)
render := func(format func(subContext formatter.SubContext) error) error {
for _, du := range dus {
if err := format(&diskusageContext{
format: fctx.Format,
du: du,
}); err != nil {
return err
}
}
return nil
}
tw.Flush()
return nil
duCtx := diskusageContext{}
duCtx.Header = formatter.SubHeaderContext{
"ID": duIDHeader,
"Parents": duParentsHeader,
"CreatedAt": duCreatedAtHeader,
"Mutable": duMutableHeader,
"Reclaimable": duReclaimHeader,
"Shared": duSharedHeader,
"Size": duSizeHeader,
"Description": duDescriptionHeader,
"UsageCount": duUsageHeader,
"LastUsedAt": duLastUsedAtHeader,
"Type": duTypeHeader,
}
defer func() {
if (fctx.Format != duDefaultTableFormat && fctx.Format != duDefaultPrettyTemplate) || fctx.Format.IsJSON() || opts.filter.Value().Len() > 0 {
return
}
printSummary(dockerCli.Out(), out)
}()
return fctx.Write(&duCtx, render)
}
func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
@@ -114,69 +189,84 @@ func duCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options.builder = rootOpts.builder
return runDiskUsage(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.Var(&options.filter, "filter", "Provide filter values")
flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
flags.BoolVar(&options.verbose, "verbose", false, `Shorthand for "--format=pretty"`)
flags.StringVar(&options.format, "format", "", "Format the output")
return cmd
}
func printKV(w io.Writer, k string, v interface{}) {
fmt.Fprintf(w, "%s:\t%v\n", k, v)
type diskusageContext struct {
formatter.HeaderContext
format formatter.Format
du *client.UsageInfo
}
func printVerbose(tw *tabwriter.Writer, du []*client.UsageInfo) {
for _, di := range du {
printKV(tw, "ID", di.ID)
if len(di.Parents) != 0 {
printKV(tw, "Parent", strings.Join(di.Parents, ","))
}
printKV(tw, "Created at", di.CreatedAt)
printKV(tw, "Mutable", di.Mutable)
printKV(tw, "Reclaimable", !di.InUse)
printKV(tw, "Shared", di.Shared)
printKV(tw, "Size", units.HumanSize(float64(di.Size)))
if di.Description != "" {
printKV(tw, "Description", di.Description)
}
printKV(tw, "Usage count", di.UsageCount)
if di.LastUsedAt != nil {
printKV(tw, "Last used", units.HumanDuration(time.Since(*di.LastUsedAt))+" ago")
}
if di.RecordType != "" {
printKV(tw, "Type", di.RecordType)
}
fmt.Fprintf(tw, "\n")
}
tw.Flush()
func (d *diskusageContext) MarshalJSON() ([]byte, error) {
return formatter.MarshalJSON(d)
}
func printTableHeader(tw *tabwriter.Writer) {
fmt.Fprintln(tw, "ID\tRECLAIMABLE\tSIZE\tLAST ACCESSED")
}
func printTableRow(tw *tabwriter.Writer, di *client.UsageInfo) {
id := di.ID
if di.Mutable {
func (d *diskusageContext) ID() string {
id := d.du.ID
if d.format.IsTable() && d.du.Mutable {
id += "*"
}
size := units.HumanSize(float64(di.Size))
if di.Shared {
size += "*"
}
lastAccessed := ""
if di.LastUsedAt != nil {
lastAccessed = units.HumanDuration(time.Since(*di.LastUsedAt)) + " ago"
}
fmt.Fprintf(tw, "%-40s\t%-5v\t%-10s\t%s\n", id, !di.InUse, size, lastAccessed)
return id
}
func printSummary(tw *tabwriter.Writer, dus [][]*client.UsageInfo) {
func (d *diskusageContext) Parents() []string {
return d.du.Parents
}
func (d *diskusageContext) CreatedAt() string {
return d.du.CreatedAt.String()
}
func (d *diskusageContext) Mutable() bool {
return d.du.Mutable
}
func (d *diskusageContext) Reclaimable() bool {
return !d.du.InUse
}
func (d *diskusageContext) Shared() bool {
return d.du.Shared
}
func (d *diskusageContext) Size() string {
size := units.HumanSize(float64(d.du.Size))
if d.format.IsTable() && d.du.Shared {
size += "*"
}
return size
}
func (d *diskusageContext) Description() string {
return d.du.Description
}
func (d *diskusageContext) UsageCount() int {
return d.du.UsageCount
}
func (d *diskusageContext) LastUsedAt() string {
if d.du.LastUsedAt != nil {
return units.HumanDuration(time.Since(*d.du.LastUsedAt)) + " ago"
}
return ""
}
func (d *diskusageContext) Type() string {
return string(d.du.RecordType)
}
func printSummary(w io.Writer, dus [][]*client.UsageInfo) {
total := int64(0)
reclaimable := int64(0)
shared := int64(0)
@@ -195,11 +285,11 @@ func printSummary(tw *tabwriter.Writer, dus [][]*client.UsageInfo) {
}
}
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
if shared > 0 {
fmt.Fprintf(tw, "Shared:\t%s\n", units.HumanSize(float64(shared)))
fmt.Fprintf(tw, "Private:\t%s\n", units.HumanSize(float64(total-shared)))
}
fmt.Fprintf(tw, "Reclaimable:\t%s\n", units.HumanSize(float64(reclaimable)))
fmt.Fprintf(tw, "Total:\t%s\n", units.HumanSize(float64(total)))
tw.Flush()

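The rewrite above replaces hand-rolled tabwriter output with docker/cli's formatter package: a formatter.Context carries the output stream and template, a header-bearing SubContext supplies column names and per-row fields, and a render callback streams one SubContext per row. A self-contained sketch of the same pattern; the item/itemContext types are illustrative, not part of this changeset.

package main

import (
	"os"

	"github.com/docker/cli/cli/command/formatter"
)

type item struct{ ID, Size string }

type itemContext struct {
	formatter.HeaderContext
	it *item
}

// MarshalJSON lets "--format json" work via the shared formatter helper.
func (c *itemContext) MarshalJSON() ([]byte, error) { return formatter.MarshalJSON(c) }
func (c *itemContext) ID() string                   { return c.it.ID }
func (c *itemContext) Size() string                 { return c.it.Size }

func main() {
	items := []*item{{"abc123", "12MB"}, {"def456", "3MB"}}
	fctx := formatter.Context{
		Output: os.Stdout,
		Format: formatter.Format("table {{.ID}}\t{{.Size}}"),
	}
	// render feeds one SubContext per row into the formatter.
	render := func(format func(sub formatter.SubContext) error) error {
		for _, it := range items {
			if err := format(&itemContext{it: it}); err != nil {
				return err
			}
		}
		return nil
	}
	header := itemContext{}
	header.Header = formatter.SubHeaderContext{"ID": "ID", "Size": "SIZE"}
	if err := fctx.Write(&header, render); err != nil {
		panic(err)
	}
}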
173
commands/history/export.go Normal file
View File

@@ -0,0 +1,173 @@
package history
import (
"context"
"io"
"os"
"slices"
"github.com/containerd/console"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop/bundle"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/client"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type exportOptions struct {
builder string
refs []string
output string
all bool
finalize bool
}
func runExport(ctx context.Context, dockerCli command.Cli, opts exportOptions) error {
nodes, err := loadNodes(ctx, dockerCli, opts.builder)
if err != nil {
return err
}
if len(opts.refs) == 0 {
opts.refs = []string{""}
}
var res []historyRecord
for _, ref := range opts.refs {
recs, err := queryRecords(ctx, ref, nodes, &queryOptions{
CompletedOnly: true,
})
if err != nil {
return err
}
if len(recs) == 0 {
if ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", ref)
}
if opts.finalize {
var finalized bool
for _, rec := range recs {
if rec.Trace == nil {
finalized = true
if err := finalizeRecord(ctx, rec.Ref, nodes); err != nil {
return err
}
}
}
if finalized {
recs, err = queryRecords(ctx, ref, nodes, &queryOptions{
CompletedOnly: true,
})
if err != nil {
return err
}
}
}
if ref == "" {
slices.SortFunc(recs, func(a, b historyRecord) int {
return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
})
}
if opts.all {
res = append(res, recs...)
break
} else {
res = append(res, recs[0])
}
}
ls, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
visited := map[*builder.Node]struct{}{}
var clients []*client.Client
for _, rec := range res {
if _, ok := visited[rec.node]; ok {
continue
}
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return err
}
clients = append(clients, c)
}
toExport := make([]*bundle.Record, 0, len(res))
for _, rec := range res {
var defaultPlatform string
if p := rec.node.Platforms; len(p) > 0 {
defaultPlatform = platforms.FormatAll(platforms.Normalize(p[0]))
}
var stg *localstate.StateGroup
st, _ := ls.ReadRef(rec.node.Builder, rec.node.Name, rec.Ref)
if st != nil && st.GroupRef != "" {
stg, err = ls.ReadGroup(st.GroupRef)
if err != nil {
return err
}
}
toExport = append(toExport, &bundle.Record{
BuildHistoryRecord: rec.BuildHistoryRecord,
DefaultPlatform: defaultPlatform,
LocalState: st,
StateGroup: stg,
})
}
var w io.Writer = os.Stdout
if opts.output != "" {
f, err := os.Create(opts.output)
if err != nil {
return errors.Wrapf(err, "failed to create output file %q", opts.output)
}
defer f.Close()
w = f
} else {
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
return errors.Errorf("refusing to write to console, use --output to specify a file")
}
}
return bundle.Export(ctx, clients, w, toExport)
}
func exportCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options exportOptions
cmd := &cobra.Command{
Use: "export [OPTIONS] [REF...]",
Short: "Export build records into Docker Desktop bundle",
RunE: func(cmd *cobra.Command, args []string) error {
if options.all && len(args) > 0 {
return errors.New("cannot specify refs when using --all")
}
options.refs = args
options.builder = *rootOpts.Builder
return runExport(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.StringVarP(&options.output, "output", "o", "", "Output file path")
flags.BoolVar(&options.all, "all", false, "Export all build records for the builder")
flags.BoolVar(&options.finalize, "finalize", false, "Ensure build records are finalized before exporting")
return cmd
}

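Given the flags registered above, usage would look roughly like this (file names are placeholders). Note the command refuses to write the bundle to a terminal, so --output or a pipe is expected:

# export the most recent completed build record
docker buildx history export -o build.dockerbuild

# export every record for the builder, finalizing traces first
docker buildx history export --all --finalize -o records.dockerbuild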
136
commands/history/import.go Normal file
View File

@@ -0,0 +1,136 @@
package history
import (
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"os"
"strings"
remoteutil "github.com/docker/buildx/driver/remote/util"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/desktop"
"github.com/docker/cli/cli/command"
"github.com/pkg/browser"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type importOptions struct {
file []string
}
func runImport(ctx context.Context, dockerCli command.Cli, opts importOptions) error {
sock, err := desktop.BuildServerAddr()
if err != nil {
return err
}
tr := http.DefaultTransport.(*http.Transport).Clone()
tr.DialContext = func(ctx context.Context, _, _ string) (net.Conn, error) {
network, addr, ok := strings.Cut(sock, "://")
if !ok {
return nil, errors.Errorf("invalid endpoint address: %s", sock)
}
return remoteutil.DialContext(ctx, network, addr)
}
client := &http.Client{
Transport: tr,
}
var urls []string
if len(opts.file) == 0 {
u, err := importFrom(ctx, client, os.Stdin)
if err != nil {
return err
}
urls = append(urls, u...)
} else {
for _, fn := range opts.file {
var f *os.File
var rdr io.Reader = os.Stdin
if fn != "-" {
f, err = os.Open(fn)
if err != nil {
return errors.Wrapf(err, "failed to open file %s", fn)
}
rdr = f
}
u, err := importFrom(ctx, client, rdr)
if err != nil {
return err
}
urls = append(urls, u...)
if f != nil {
f.Close()
}
}
}
if len(urls) == 0 {
return errors.New("no build records found in the bundle")
}
for i, url := range urls {
fmt.Fprintln(dockerCli.Err(), url)
if i == 0 {
err = browser.OpenURL(url)
}
}
return err
}
func importFrom(ctx context.Context, c *http.Client, rdr io.Reader) ([]string, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodPost, "http://docker-desktop/upload", rdr)
if err != nil {
return nil, errors.Wrap(err, "failed to create request")
}
resp, err := c.Do(req)
if err != nil {
return nil, errors.Wrap(err, "failed to send request, check if Docker Desktop is running")
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, errors.Errorf("failed to import build: %s", string(body))
}
var refs []string
dec := json.NewDecoder(resp.Body)
if err := dec.Decode(&refs); err != nil {
return nil, errors.Wrap(err, "failed to decode response")
}
var urls []string
for _, ref := range refs {
urls = append(urls, desktop.BuildURL(fmt.Sprintf(".imported/_/%s", ref)))
}
return urls, err
}
func importCmd(dockerCli command.Cli, _ RootOptions) *cobra.Command {
var options importOptions
cmd := &cobra.Command{
Use: "import [OPTIONS] -",
Short: "Import build records into Docker Desktop",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
return runImport(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.StringArrayVarP(&options.file, "file", "f", nil, "Import from a file path")
return cmd
}

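The import path above talks to Docker Desktop's build server by swapping the transport's DialContext while keeping a plain http:// URL for routing. A generic stdlib sketch of that technique; the socket path and ping endpoint are placeholders, not the real Desktop addresses:

package main

import (
	"context"
	"net"
	"net/http"
)

// desktopClient returns an http.Client whose requests, regardless of the
// URL's host, are dialed over a local unix socket, mirroring the transport
// swap in runImport above.
func desktopClient(socketPath string) *http.Client {
	tr := http.DefaultTransport.(*http.Transport).Clone()
	tr.DialContext = func(ctx context.Context, _, _ string) (net.Conn, error) {
		var d net.Dialer
		return d.DialContext(ctx, "unix", socketPath)
	}
	return &http.Client{Transport: tr}
}

func main() {
	c := desktopClient("/run/user/1000/docker-desktop.sock") // placeholder path
	resp, err := c.Get("http://docker-desktop/ping")         // hypothetical endpoint
	if err == nil {
		resp.Body.Close()
	}
}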
894
commands/history/inspect.go Normal file
View File

@@ -0,0 +1,894 @@
package history
import (
"bytes"
"cmp"
"context"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"text/tabwriter"
"text/template"
"time"
"github.com/containerd/containerd/v2/core/content"
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/containerd/containerd/v2/core/images"
"github.com/containerd/platforms"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/docker/cli/cli/debug"
slsa "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/common"
slsa02 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v0.2"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/solver/errdefs"
provenancetypes "github.com/moby/buildkit/solver/llbsolver/provenance/types"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/stack"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"github.com/tonistiigi/go-csvvalue"
spb "google.golang.org/genproto/googleapis/rpc/status"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
proto "google.golang.org/protobuf/proto"
)
type statusT string
const (
statusComplete statusT = "completed"
statusRunning statusT = "running"
statusError statusT = "failed"
statusCanceled statusT = "canceled"
)
type inspectOptions struct {
builder string
ref string
format string
}
type inspectOutput struct {
Name string `json:",omitempty"`
Ref string
Context string `json:",omitempty"`
Dockerfile string `json:",omitempty"`
VCSRepository string `json:",omitempty"`
VCSRevision string `json:",omitempty"`
Target string `json:",omitempty"`
Platform []string `json:",omitempty"`
KeepGitDir bool `json:",omitempty"`
NamedContexts []keyValueOutput `json:",omitempty"`
StartedAt *time.Time `json:",omitempty"`
CompletedAt *time.Time `json:",omitempty"`
Duration time.Duration `json:",omitempty"`
Status statusT `json:",omitempty"`
Error *errorOutput `json:",omitempty"`
NumCompletedSteps int32
NumTotalSteps int32
NumCachedSteps int32
BuildArgs []keyValueOutput `json:",omitempty"`
Labels []keyValueOutput `json:",omitempty"`
Config configOutput `json:",omitempty"`
Materials []materialOutput `json:",omitempty"`
Attachments []attachmentOutput `json:",omitempty"`
Errors []string `json:",omitempty"`
}
type configOutput struct {
Network string `json:",omitempty"`
ExtraHosts []string `json:",omitempty"`
Hostname string `json:",omitempty"`
CgroupParent string `json:",omitempty"`
ImageResolveMode string `json:",omitempty"`
MultiPlatform bool `json:",omitempty"`
NoCache bool `json:",omitempty"`
NoCacheFilter []string `json:",omitempty"`
ShmSize string `json:",omitempty"`
Ulimit string `json:",omitempty"`
CacheMountNS string `json:",omitempty"`
DockerfileCheckConfig string `json:",omitempty"`
SourceDateEpoch string `json:",omitempty"`
SandboxHostname string `json:",omitempty"`
RestRaw []keyValueOutput `json:",omitempty"`
}
type materialOutput struct {
URI string `json:",omitempty"`
Digests []string `json:",omitempty"`
}
type attachmentOutput struct {
Digest string `json:",omitempty"`
Platform string `json:",omitempty"`
Type string `json:",omitempty"`
}
type errorOutput struct {
Code int `json:",omitempty"`
Message string `json:",omitempty"`
Name string `json:",omitempty"`
Logs []string `json:",omitempty"`
Sources []byte `json:",omitempty"`
Stack []byte `json:",omitempty"`
}
type keyValueOutput struct {
Name string `json:",omitempty"`
Value string `json:",omitempty"`
}
func readAttr[T any](attrs map[string]string, k string, dest *T, f func(v string) (T, bool)) {
if sv, ok := attrs[k]; ok {
if f != nil {
v, ok := f(sv)
if ok {
*dest = v
}
}
if d, ok := any(dest).(*string); ok {
*d = sv
}
}
delete(attrs, k)
}
func runInspect(ctx context.Context, dockerCli command.Cli, opts inspectOptions) error {
nodes, err := loadNodes(ctx, dockerCli, opts.builder)
if err != nil {
return err
}
recs, err := queryRecords(ctx, opts.ref, nodes, nil)
if err != nil {
return err
}
if len(recs) == 0 {
if opts.ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", opts.ref)
}
rec := &recs[0]
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return err
}
store := proxy.NewContentStore(c.ContentClient())
var defaultPlatform string
workers, err := c.ListWorkers(ctx)
if err != nil {
return errors.Wrap(err, "failed to list workers")
}
workers0:
for _, w := range workers {
for _, p := range w.Platforms {
defaultPlatform = platforms.FormatAll(platforms.Normalize(p))
break workers0
}
}
ls, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
st, _ := ls.ReadRef(rec.node.Builder, rec.node.Name, rec.Ref)
attrs := rec.FrontendAttrs
delete(attrs, "frontend.caps")
var out inspectOutput
var context string
var dockerfile string
if st != nil {
context = st.LocalPath
dockerfile = st.DockerfilePath
wd, _ := os.Getwd()
if dockerfile != "" && dockerfile != "-" {
if rel, err := filepath.Rel(context, dockerfile); err == nil {
if !strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
dockerfile = rel
}
}
}
if context != "" {
if rel, err := filepath.Rel(wd, context); err == nil {
if !strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
context = rel
}
}
}
}
if v, ok := attrs["context"]; ok && context == "" {
delete(attrs, "context")
context = v
}
if dockerfile == "" {
if v, ok := attrs["filename"]; ok {
dockerfile = v
if dfdir, ok := attrs["vcs:localdir:dockerfile"]; ok {
dockerfile = filepath.Join(dfdir, dockerfile)
}
}
}
delete(attrs, "filename")
out.Name = buildName(rec.FrontendAttrs, st)
out.Ref = rec.Ref
out.Context = context
out.Dockerfile = dockerfile
if _, ok := attrs["context"]; !ok {
if src, ok := attrs["vcs:source"]; ok {
out.VCSRepository = src
}
if rev, ok := attrs["vcs:revision"]; ok {
out.VCSRevision = rev
}
}
readAttr(attrs, "target", &out.Target, nil)
readAttr(attrs, "platform", &out.Platform, func(v string) ([]string, bool) {
return tryParseValue(v, &out.Errors, func(v string) ([]string, error) {
var pp []string
for _, v := range strings.Split(v, ",") {
p, err := platforms.Parse(v)
if err != nil {
return nil, err
}
pp = append(pp, platforms.FormatAll(platforms.Normalize(p)))
}
if len(pp) == 0 {
pp = append(pp, defaultPlatform)
}
return pp, nil
})
})
readAttr(attrs, "build-arg:BUILDKIT_CONTEXT_KEEP_GIT_DIR", &out.KeepGitDir, func(v string) (bool, bool) {
return tryParseValue(v, &out.Errors, strconv.ParseBool)
})
out.NamedContexts = readKeyValues(attrs, "context:")
if rec.CreatedAt != nil {
tm := rec.CreatedAt.AsTime().Local()
out.StartedAt = &tm
}
out.Status = statusRunning
if rec.CompletedAt != nil {
tm := rec.CompletedAt.AsTime().Local()
out.CompletedAt = &tm
out.Status = statusComplete
}
if rec.Error != nil || rec.ExternalError != nil {
out.Error = &errorOutput{}
if rec.Error != nil {
if codes.Code(rec.Error.Code) == codes.Canceled {
out.Status = statusCanceled
} else {
out.Status = statusError
}
out.Error.Code = int(codes.Code(rec.Error.Code))
out.Error.Message = rec.Error.Message
}
if rec.ExternalError != nil {
dt, err := content.ReadBlob(ctx, store, ociDesc(rec.ExternalError))
if err != nil {
return errors.Wrapf(err, "failed to read external error %s", rec.ExternalError.Digest)
}
var st spb.Status
if err := proto.Unmarshal(dt, &st); err != nil {
return errors.Wrapf(err, "failed to unmarshal external error %s", rec.ExternalError.Digest)
}
retErr := grpcerrors.FromGRPC(status.ErrorProto(&st))
var errsources bytes.Buffer
for _, s := range errdefs.Sources(retErr) {
s.Print(&errsources)
errsources.WriteString("\n")
}
out.Error.Sources = errsources.Bytes()
var ve *errdefs.VertexError
if errors.As(retErr, &ve) {
dgst, err := digest.Parse(ve.Digest)
if err != nil {
return errors.Wrapf(err, "failed to parse vertex digest %s", ve.Digest)
}
name, logs, err := loadVertexLogs(ctx, c, rec.Ref, dgst, 16)
if err != nil {
return errors.Wrapf(err, "failed to load vertex logs %s", dgst)
}
out.Error.Name = name
out.Error.Logs = logs
}
out.Error.Stack = fmt.Appendf(nil, "%+v", stack.Formatter(retErr))
}
}
if out.StartedAt != nil {
if out.CompletedAt != nil {
out.Duration = out.CompletedAt.Sub(*out.StartedAt)
} else {
out.Duration = rec.currentTimestamp.Sub(*out.StartedAt)
}
}
out.NumCompletedSteps = rec.NumCompletedSteps
out.NumTotalSteps = rec.NumTotalSteps
out.NumCachedSteps = rec.NumCachedSteps
out.BuildArgs = readKeyValues(attrs, "build-arg:")
out.Labels = readKeyValues(attrs, "label:")
readAttr(attrs, "force-network-mode", &out.Config.Network, nil)
readAttr(attrs, "hostname", &out.Config.Hostname, nil)
readAttr(attrs, "cgroup-parent", &out.Config.CgroupParent, nil)
readAttr(attrs, "image-resolve-mode", &out.Config.ImageResolveMode, nil)
readAttr(attrs, "build-arg:BUILDKIT_MULTI_PLATFORM", &out.Config.MultiPlatform, func(v string) (bool, bool) {
return tryParseValue(v, &out.Errors, strconv.ParseBool)
})
readAttr(attrs, "multi-platform", &out.Config.MultiPlatform, func(v string) (bool, bool) {
return tryParseValue(v, &out.Errors, strconv.ParseBool)
})
readAttr(attrs, "no-cache", &out.Config.NoCache, func(v string) (bool, bool) {
if v == "" {
return true, true
}
return false, false
})
readAttr(attrs, "no-cache", &out.Config.NoCacheFilter, func(v string) ([]string, bool) {
if v == "" {
return nil, false
}
return strings.Split(v, ","), true
})
readAttr(attrs, "add-hosts", &out.Config.ExtraHosts, func(v string) ([]string, bool) {
return tryParseValue(v, &out.Errors, func(v string) ([]string, error) {
fields, err := csvvalue.Fields(v, nil)
if err != nil {
return nil, err
}
return fields, nil
})
})
readAttr(attrs, "shm-size", &out.Config.ShmSize, nil)
readAttr(attrs, "ulimit", &out.Config.Ulimit, nil)
readAttr(attrs, "build-arg:BUILDKIT_CACHE_MOUNT_NS", &out.Config.CacheMountNS, nil)
readAttr(attrs, "build-arg:BUILDKIT_DOCKERFILE_CHECK", &out.Config.DockerfileCheckConfig, nil)
readAttr(attrs, "build-arg:SOURCE_DATE_EPOCH", &out.Config.SourceDateEpoch, nil)
readAttr(attrs, "build-arg:SANDBOX_HOSTNAME", &out.Config.SandboxHostname, nil)
var unusedAttrs []keyValueOutput
for k := range attrs {
if strings.HasPrefix(k, "vcs:") || strings.HasPrefix(k, "build-arg:") || strings.HasPrefix(k, "label:") || strings.HasPrefix(k, "context:") || strings.HasPrefix(k, "attest:") {
continue
}
unusedAttrs = append(unusedAttrs, keyValueOutput{
Name: k,
Value: attrs[k],
})
}
slices.SortFunc(unusedAttrs, func(a, b keyValueOutput) int {
return cmp.Compare(a.Name, b.Name)
})
out.Config.RestRaw = unusedAttrs
attachments, err := allAttachments(ctx, store, *rec)
if err != nil {
return err
}
provIndex := slices.IndexFunc(attachments, func(a attachment) bool {
return strings.HasPrefix(descrType(a.descr), "https://slsa.dev/provenance/")
})
if provIndex != -1 {
prov := attachments[provIndex]
predType := descrType(prov.descr)
dt, err := content.ReadBlob(ctx, store, prov.descr)
if err != nil {
return errors.Errorf("failed to read provenance %s: %v", prov.descr.Digest, err)
}
var pred *provenancetypes.ProvenancePredicateSLSA1
if predType == slsa02.PredicateSLSAProvenance {
var pred02 *provenancetypes.ProvenancePredicateSLSA02
if err := json.Unmarshal(dt, &pred02); err != nil {
return errors.Errorf("failed to unmarshal provenance %s: %v", prov.descr.Digest, err)
}
pred = pred02.ConvertToSLSA1()
} else if err := json.Unmarshal(dt, &pred); err != nil {
return errors.Errorf("failed to unmarshal provenance %s: %v", prov.descr.Digest, err)
}
if pred != nil {
for _, m := range pred.BuildDefinition.ResolvedDependencies {
out.Materials = append(out.Materials, materialOutput{
URI: m.URI,
Digests: digestSetToDigests(m.Digest),
})
}
}
}
if len(attachments) > 0 {
for _, a := range attachments {
p := ""
if a.platform != nil {
p = platforms.FormatAll(*a.platform)
}
out.Attachments = append(out.Attachments, attachmentOutput{
Digest: a.descr.Digest.String(),
Platform: p,
Type: descrType(a.descr),
})
}
}
if opts.format == formatter.JSONFormatKey {
enc := json.NewEncoder(dockerCli.Out())
enc.SetIndent("", " ")
return enc.Encode(out)
} else if opts.format != formatter.PrettyFormatKey {
tmpl, err := template.New("inspect").Parse(opts.format)
if err != nil {
return errors.Wrapf(err, "failed to parse format template")
}
var buf bytes.Buffer
if err := tmpl.Execute(&buf, out); err != nil {
return errors.Wrapf(err, "failed to execute format template")
}
fmt.Fprintln(dockerCli.Out(), buf.String())
return nil
}
tw := tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
if out.Name != "" {
fmt.Fprintf(tw, "Name:\t%s\n", out.Name)
}
if opts.ref == "" && out.Ref != "" {
fmt.Fprintf(tw, "Ref:\t%s\n", out.Ref)
}
if out.Context != "" {
fmt.Fprintf(tw, "Context:\t%s\n", out.Context)
}
if out.Dockerfile != "" {
fmt.Fprintf(tw, "Dockerfile:\t%s\n", out.Dockerfile)
}
if out.VCSRepository != "" {
fmt.Fprintf(tw, "VCS Repository:\t%s\n", out.VCSRepository)
}
if out.VCSRevision != "" {
fmt.Fprintf(tw, "VCS Revision:\t%s\n", out.VCSRevision)
}
if out.Target != "" {
fmt.Fprintf(tw, "Target:\t%s\n", out.Target)
}
if len(out.Platform) > 0 {
fmt.Fprintf(tw, "Platforms:\t%s\n", strings.Join(out.Platform, ", "))
}
if out.KeepGitDir {
fmt.Fprintf(tw, "Keep Git Dir:\t%s\n", strconv.FormatBool(out.KeepGitDir))
}
tw.Flush()
fmt.Fprintln(dockerCli.Out())
printTable(dockerCli.Out(), out.NamedContexts, "Named Context")
tw = tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "Started:\t%s\n", out.StartedAt.Format("2006-01-02 15:04:05"))
var statusStr string
if out.Status == statusRunning {
statusStr = " (running)"
}
fmt.Fprintf(tw, "Duration:\t%s%s\n", formatDuration(out.Duration), statusStr)
switch out.Status {
case statusError:
fmt.Fprintf(tw, "Error:\t%s %s\n", codes.Code(rec.Error.Code).String(), rec.Error.Message)
case statusCanceled:
fmt.Fprintf(tw, "Status:\tCanceled\n")
}
fmt.Fprintf(tw, "Build Steps:\t%d/%d (%.0f%% cached)\n", out.NumCompletedSteps, out.NumTotalSteps, float64(out.NumCachedSteps)/float64(out.NumTotalSteps)*100)
tw.Flush()
fmt.Fprintln(dockerCli.Out())
tw = tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
if out.Config.Network != "" {
fmt.Fprintf(tw, "Network:\t%s\n", out.Config.Network)
}
if out.Config.Hostname != "" {
fmt.Fprintf(tw, "Hostname:\t%s\n", out.Config.Hostname)
}
if len(out.Config.ExtraHosts) > 0 {
fmt.Fprintf(tw, "Extra Hosts:\t%s\n", strings.Join(out.Config.ExtraHosts, ", "))
}
if out.Config.CgroupParent != "" {
fmt.Fprintf(tw, "Cgroup Parent:\t%s\n", out.Config.CgroupParent)
}
if out.Config.ImageResolveMode != "" {
fmt.Fprintf(tw, "Image Resolve Mode:\t%s\n", out.Config.ImageResolveMode)
}
if out.Config.MultiPlatform {
fmt.Fprintf(tw, "Multi-Platform:\t%s\n", strconv.FormatBool(out.Config.MultiPlatform))
}
if out.Config.NoCache {
fmt.Fprintf(tw, "No Cache:\t%s\n", strconv.FormatBool(out.Config.NoCache))
}
if len(out.Config.NoCacheFilter) > 0 {
fmt.Fprintf(tw, "No Cache Filter:\t%s\n", strings.Join(out.Config.NoCacheFilter, ", "))
}
if out.Config.ShmSize != "" {
fmt.Fprintf(tw, "Shm Size:\t%s\n", out.Config.ShmSize)
}
if out.Config.Ulimit != "" {
fmt.Fprintf(tw, "Resource Limits:\t%s\n", out.Config.Ulimit)
}
if out.Config.CacheMountNS != "" {
fmt.Fprintf(tw, "Cache Mount Namespace:\t%s\n", out.Config.CacheMountNS)
}
if out.Config.DockerfileCheckConfig != "" {
fmt.Fprintf(tw, "Dockerfile Check Config:\t%s\n", out.Config.DockerfileCheckConfig)
}
if out.Config.SourceDateEpoch != "" {
fmt.Fprintf(tw, "Source Date Epoch:\t%s\n", out.Config.SourceDateEpoch)
}
if out.Config.SandboxHostname != "" {
fmt.Fprintf(tw, "Sandbox Hostname:\t%s\n", out.Config.SandboxHostname)
}
for _, kv := range out.Config.RestRaw {
fmt.Fprintf(tw, "%s:\t%s\n", kv.Name, kv.Value)
}
tw.Flush()
fmt.Fprintln(dockerCli.Out())
printTable(dockerCli.Out(), out.BuildArgs, "Build Arg")
printTable(dockerCli.Out(), out.Labels, "Label")
if len(out.Materials) > 0 {
fmt.Fprintln(dockerCli.Out(), "Materials:")
tw = tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "URI\tDIGEST\n")
for _, m := range out.Materials {
fmt.Fprintf(tw, "%s\t%s\n", m.URI, strings.Join(m.Digests, ", "))
}
tw.Flush()
fmt.Fprintln(dockerCli.Out())
}
if len(out.Attachments) > 0 {
fmt.Fprintf(tw, "Attachments:\n")
tw = tabwriter.NewWriter(dockerCli.Out(), 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "DIGEST\tPLATFORM\tTYPE\n")
for _, a := range out.Attachments {
fmt.Fprintf(tw, "%s\t%s\t%s\n", a.Digest, a.Platform, a.Type)
}
tw.Flush()
fmt.Fprintln(dockerCli.Out())
}
if out.Error != nil {
if out.Error.Sources != nil {
fmt.Fprint(dockerCli.Out(), string(out.Error.Sources))
}
if len(out.Error.Logs) > 0 {
fmt.Fprintln(dockerCli.Out(), "Logs:")
fmt.Fprintf(dockerCli.Out(), "> => %s:\n", out.Error.Name)
for _, l := range out.Error.Logs {
fmt.Fprintln(dockerCli.Out(), "> "+l)
}
fmt.Fprintln(dockerCli.Out())
}
if len(out.Error.Stack) > 0 {
if debug.IsEnabled() {
fmt.Fprintf(dockerCli.Out(), "\n%s\n", out.Error.Stack)
} else {
fmt.Fprintf(dockerCli.Out(), "Enable --debug to see stack traces for error\n")
}
}
}
fmt.Fprintf(dockerCli.Out(), "Print build logs: docker buildx history logs %s\n", rec.Ref)
fmt.Fprintf(dockerCli.Out(), "View build in Docker Desktop: %s\n", desktop.BuildURL(fmt.Sprintf("%s/%s/%s", rec.node.Builder, rec.node.Name, rec.Ref)))
return nil
}
func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options inspectOptions
cmd := &cobra.Command{
Use: "inspect [OPTIONS] [REF]",
Short: "Inspect a build record",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
options.builder = *rootOpts.Builder
return runInspect(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
cmd.AddCommand(
attachmentCmd(dockerCli, rootOpts),
)
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.PrettyFormatKey, "Format the output")
return cmd
}
func loadVertexLogs(ctx context.Context, c *client.Client, ref string, dgst digest.Digest, limit int) (string, []string, error) {
st, err := c.ControlClient().Status(ctx, &controlapi.StatusRequest{
Ref: ref,
})
if err != nil {
return "", nil, err
}
var name string
var logs []string
lastState := map[int]int{}
loop0:
for {
select {
case <-ctx.Done():
st.CloseSend()
return "", nil, context.Cause(ctx)
default:
ev, err := st.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break loop0
}
return "", nil, err
}
ss := client.NewSolveStatus(ev)
for _, v := range ss.Vertexes {
if v.Digest == dgst {
name = v.Name
break
}
}
for _, l := range ss.Logs {
if l.Vertex == dgst {
parts := bytes.Split(l.Data, []byte("\n"))
for i, p := range parts {
var wrote bool
if i == 0 {
idx, ok := lastState[l.Stream]
if ok && idx != -1 {
logs[idx] = logs[idx] + string(p)
wrote = true
}
}
if !wrote {
if len(p) > 0 {
logs = append(logs, string(p))
}
lastState[l.Stream] = len(logs) - 1
}
if i == len(parts)-1 && len(p) == 0 {
lastState[l.Stream] = -1
}
}
}
}
}
}
if limit > 0 && len(logs) > limit {
logs = logs[len(logs)-limit:]
}
return name, logs, nil
}
type attachment struct {
platform *ocispecs.Platform
descr ocispecs.Descriptor
}
func allAttachments(ctx context.Context, store content.Store, rec historyRecord) ([]attachment, error) {
var attachments []attachment
if rec.Result != nil {
for _, a := range rec.Result.Attestations {
attachments = append(attachments, attachment{
descr: ociDesc(a),
})
}
for _, r := range rec.Result.Results {
attachments = append(attachments, walkAttachments(ctx, store, ociDesc(r), nil)...)
}
}
for key, ri := range rec.Results {
p, err := platforms.Parse(key)
if err != nil {
return nil, err
}
for _, a := range ri.Attestations {
attachments = append(attachments, attachment{
platform: &p,
descr: ociDesc(a),
})
}
for _, r := range ri.Results {
attachments = append(attachments, walkAttachments(ctx, store, ociDesc(r), &p)...)
}
}
slices.SortFunc(attachments, func(a, b attachment) int {
pCmp := 0
if a.platform == nil && b.platform != nil {
return -1
} else if a.platform != nil && b.platform == nil {
return 1
} else if a.platform != nil && b.platform != nil {
pCmp = cmp.Compare(platforms.FormatAll(*a.platform), platforms.FormatAll(*b.platform))
}
return cmp.Or(
pCmp,
cmp.Compare(descrType(a.descr), descrType(b.descr)),
)
})
return attachments, nil
}
func walkAttachments(ctx context.Context, store content.Store, desc ocispecs.Descriptor, platform *ocispecs.Platform) []attachment {
_, err := store.Info(ctx, desc.Digest)
if err != nil {
return nil
}
var out []attachment
if desc.Annotations["vnd.docker.reference.type"] != "attestation-manifest" {
out = append(out, attachment{platform: platform, descr: desc})
}
if desc.MediaType != ocispecs.MediaTypeImageIndex && desc.MediaType != images.MediaTypeDockerSchema2ManifestList {
return out
}
dt, err := content.ReadBlob(ctx, store, desc)
if err != nil {
return out
}
var idx ocispecs.Index
if err := json.Unmarshal(dt, &idx); err != nil {
return out
}
for _, d := range idx.Manifests {
p := platform
if d.Platform != nil {
p = d.Platform
}
out = append(out, walkAttachments(ctx, store, d, p)...)
}
return out
}
func ociDesc(in *controlapi.Descriptor) ocispecs.Descriptor {
return ocispecs.Descriptor{
MediaType: in.MediaType,
Digest: digest.Digest(in.Digest),
Size: in.Size,
Annotations: in.Annotations,
}
}
func descrType(desc ocispecs.Descriptor) string {
if typ, ok := desc.Annotations["in-toto.io/predicate-type"]; ok {
return typ
}
return desc.MediaType
}
func tryParseValue[T any](s string, errs *[]string, f func(string) (T, error)) (T, bool) {
v, err := f(s)
if err != nil {
errStr := fmt.Sprintf("failed to parse %s: (%v)", s, err)
*errs = append(*errs, errStr)
}
return v, true
}
func printTable(w io.Writer, kvs []keyValueOutput, title string) {
if len(kvs) == 0 {
return
}
tw := tabwriter.NewWriter(w, 1, 8, 1, '\t', 0)
fmt.Fprintf(tw, "%s\tVALUE\n", strings.ToUpper(title))
for _, k := range kvs {
fmt.Fprintf(tw, "%s\t%s\n", k.Name, k.Value)
}
tw.Flush()
fmt.Fprintln(w)
}
func readKeyValues(attrs map[string]string, prefix string) []keyValueOutput {
var out []keyValueOutput
for k, v := range attrs {
if name, ok := strings.CutPrefix(k, prefix); ok {
out = append(out, keyValueOutput{
Name: name,
Value: v,
})
}
}
if len(out) == 0 {
return nil
}
slices.SortFunc(out, func(a, b keyValueOutput) int {
return cmp.Compare(a.Name, b.Name)
})
return out
}
func digestSetToDigests(ds slsa.DigestSet) []string {
var out []string
for k, v := range ds {
out = append(out, fmt.Sprintf("%s:%s", k, v))
}
return out
}

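A note on the readAttr helper this file leans on: it is generic over the destination, always consumes the key from the map, assigns through the parser only when it reports ok, and has a raw-assignment fast path for *string destinations. A short illustration, assuming the readAttr defined above is in scope:

func exampleReadAttr() {
	attrs := map[string]string{"target": "release", "no-cache": ""}

	var target string
	readAttr(attrs, "target", &target, nil) // string fast path: target == "release"

	var noCache bool
	readAttr(attrs, "no-cache", &noCache, func(v string) (bool, bool) {
		return v == "", v == "" // empty value means bare --no-cache, no filter list
	})
	// noCache == true, and both keys have been deleted from attrs
}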
View File

@@ -0,0 +1,141 @@
package history
import (
"context"
"io"
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/containerd/platforms"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
intoto "github.com/in-toto/in-toto-golang/in_toto"
slsa02 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v0.2"
slsa1 "github.com/in-toto/in-toto-golang/in_toto/slsa_provenance/v1"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type attachmentOptions struct {
builder string
typ string
platform string
ref string
digest digest.Digest
}
func runAttachment(ctx context.Context, dockerCli command.Cli, opts attachmentOptions) error {
nodes, err := loadNodes(ctx, dockerCli, opts.builder)
if err != nil {
return err
}
recs, err := queryRecords(ctx, opts.ref, nodes, nil)
if err != nil {
return err
}
if len(recs) == 0 {
if opts.ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", opts.ref)
}
rec := &recs[0]
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return err
}
store := proxy.NewContentStore(c.ContentClient())
if opts.digest != "" {
ra, err := store.ReaderAt(ctx, ocispecs.Descriptor{Digest: opts.digest})
if err != nil {
return err
}
_, err = io.Copy(dockerCli.Out(), io.NewSectionReader(ra, 0, ra.Size()))
return err
}
attachments, err := allAttachments(ctx, store, *rec)
if err != nil {
return err
}
types := make(map[string]struct{})
switch opts.typ {
case "index":
types[ocispecs.MediaTypeImageIndex] = struct{}{}
case "manifest":
types[ocispecs.MediaTypeImageManifest] = struct{}{}
case "image":
types[ocispecs.MediaTypeImageConfig] = struct{}{}
case "provenance":
types[slsa1.PredicateSLSAProvenance] = struct{}{}
types[slsa02.PredicateSLSAProvenance] = struct{}{}
case "sbom":
types[intoto.PredicateSPDX] = struct{}{}
default:
if opts.typ != "" {
types[opts.typ] = struct{}{}
}
}
for _, a := range attachments {
if opts.platform != "" && (a.platform == nil || platforms.FormatAll(*a.platform) != opts.platform) {
continue
}
if _, ok := types[descrType(a.descr)]; opts.typ != "" && !ok {
continue
}
ra, err := store.ReaderAt(ctx, a.descr)
if err != nil {
return err
}
_, err = io.Copy(dockerCli.Out(), io.NewSectionReader(ra, 0, ra.Size()))
return err
}
return errors.Errorf("no matching attachment found for ref %q", opts.ref)
}
func attachmentCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options attachmentOptions
cmd := &cobra.Command{
Use: "attachment [OPTIONS] [REF [DIGEST]]",
Short: "Inspect a build record attachment",
Args: cobra.MaximumNArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
if len(args) > 1 {
dgst, err := digest.Parse(args[1])
if err != nil {
return errors.Wrapf(err, "invalid digest %q", args[1])
}
options.digest = dgst
}
if options.digest == "" && options.platform == "" && options.typ == "" {
return errors.New("at least one of --type, --platform or DIGEST must be specified")
}
options.builder = *rootOpts.Builder
return runAttachment(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.StringVar(&options.typ, "type", "", "Type of attachment")
flags.StringVar(&options.platform, "platform", "", "Platform of attachment")
return cmd
}

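Since attachmentCmd is registered under inspect (see inspectCmd above), invocation goes through that path; the ref and digest below are placeholders. The validation requires at least one of --type, --platform, or an explicit digest:

# print the SLSA provenance attached to the most recent build
docker buildx history inspect attachment --type provenance

# dump a specific blob of a given build record by digest
docker buildx history inspect attachment <ref> sha256:<digest>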
107
commands/history/logs.go Normal file
View File

@@ -0,0 +1,107 @@
package history
import (
"context"
"io"
"os"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type logsOptions struct {
builder string
ref string
progress string
}
func runLogs(ctx context.Context, dockerCli command.Cli, opts logsOptions) error {
nodes, err := loadNodes(ctx, dockerCli, opts.builder)
if err != nil {
return err
}
recs, err := queryRecords(ctx, opts.ref, nodes, nil)
if err != nil {
return err
}
if len(recs) == 0 {
if opts.ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", opts.ref)
}
rec := &recs[0]
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return err
}
cl, err := c.ControlClient().Status(ctx, &controlapi.StatusRequest{
Ref: rec.Ref,
})
if err != nil {
return err
}
mode := progressui.DisplayMode(opts.progress)
if mode == progressui.AutoMode {
mode = progressui.PlainMode
}
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, mode)
if err != nil {
return err
}
loop0:
for {
select {
case <-ctx.Done():
cl.CloseSend()
return context.Cause(ctx)
default:
ev, err := cl.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break loop0
}
return err
}
printer.Write(client.NewSolveStatus(ev))
}
}
return printer.Wait()
}
func logsCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options logsOptions
cmd := &cobra.Command{
Use: "logs [OPTIONS] [REF]",
Short: "Print the logs of a build record",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
options.builder = *rootOpts.Builder
return runLogs(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.StringVar(&options.progress, "progress", "plain", "Set type of progress output (plain, rawjson, tty)")
return cmd
}

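Usage sketch (ref is a placeholder); note that auto mode is coerced to plain above, so plain is the effective default:

# print logs of the most recent build
docker buildx history logs

# machine-readable solve status, one JSON object per line
docker buildx history logs <ref> --progress rawjson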
254
commands/history/ls.go Normal file
View File

@@ -0,0 +1,254 @@
package history
import (
"context"
"encoding/json"
"fmt"
"os"
"path"
"slices"
"time"
"github.com/containerd/console"
"github.com/docker/buildx/localstate"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/desktop"
"github.com/docker/buildx/util/gitutil"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/docker/go-units"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
const (
lsHeaderBuildID = "BUILD ID"
lsHeaderName = "NAME"
lsHeaderStatus = "STATUS"
lsHeaderCreated = "CREATED AT"
lsHeaderDuration = "DURATION"
lsHeaderLink = ""
lsDefaultTableFormat = "table {{.Ref}}\t{{.Name}}\t{{.Status}}\t{{.CreatedAt}}\t{{.Duration}}\t{{.Link}}"
headerKeyTimestamp = "buildkit-current-timestamp"
)
type lsOptions struct {
builder string
format string
noTrunc bool
filters []string
local bool
}
func runLs(ctx context.Context, dockerCli command.Cli, opts lsOptions) error {
nodes, err := loadNodes(ctx, dockerCli, opts.builder)
if err != nil {
return err
}
queryOptions := &queryOptions{}
if opts.local {
wd, err := os.Getwd()
if err != nil {
return err
}
gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
if err != nil {
if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
return errors.Wrap(err, "git was not found on the system")
}
return errors.Wrapf(err, "could not find git repository for local filter")
}
remote, err := gitc.RemoteURL()
if err != nil {
return errors.Wrapf(err, "could not get remote URL for local filter")
}
queryOptions.Filters = append(queryOptions.Filters, fmt.Sprintf("repository=%s", remote))
}
queryOptions.Filters = append(queryOptions.Filters, opts.filters...)
out, err := queryRecords(ctx, "", nodes, queryOptions)
if err != nil {
return err
}
ls, err := localstate.New(confutil.NewConfig(dockerCli))
if err != nil {
return err
}
for i, rec := range out {
st, _ := ls.ReadRef(rec.node.Builder, rec.node.Name, rec.Ref)
rec.name = buildName(rec.FrontendAttrs, st)
out[i] = rec
}
return lsPrint(dockerCli, out, opts)
}
func lsCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options lsOptions
cmd := &cobra.Command{
Use: "ls [OPTIONS]",
Short: "List build records",
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *rootOpts.Builder
return runLs(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
flags.BoolVar(&options.noTrunc, "no-trunc", false, "Don't truncate output")
flags.StringArrayVar(&options.filters, "filter", nil, `Provide filter values (e.g., "status=error")`)
flags.BoolVar(&options.local, "local", false, "List records for current repository only")
return cmd
}
func lsPrint(dockerCli command.Cli, records []historyRecord, in lsOptions) error {
if in.format == formatter.TableFormatKey {
in.format = lsDefaultTableFormat
}
ctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(in.format),
Trunc: !in.noTrunc,
}
slices.SortFunc(records, func(a, b historyRecord) int {
if a.CompletedAt == nil && b.CompletedAt != nil {
return -1
}
if a.CompletedAt != nil && b.CompletedAt == nil {
return 1
}
return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
})
var term bool
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
term = true
}
render := func(format func(subContext formatter.SubContext) error) error {
for _, r := range records {
if err := format(&lsContext{
format: formatter.Format(in.format),
isTerm: term,
trunc: !in.noTrunc,
record: &r,
}); err != nil {
return err
}
}
return nil
}
lsCtx := lsContext{
isTerm: term,
trunc: !in.noTrunc,
}
lsCtx.Header = formatter.SubHeaderContext{
"Ref": lsHeaderBuildID,
"Name": lsHeaderName,
"Status": lsHeaderStatus,
"CreatedAt": lsHeaderCreated,
"Duration": lsHeaderDuration,
"Link": lsHeaderLink,
}
return ctx.Write(&lsCtx, render)
}
type lsContext struct {
formatter.HeaderContext
isTerm bool
trunc bool
format formatter.Format
record *historyRecord
}
func (c *lsContext) MarshalJSON() ([]byte, error) {
m := map[string]any{
"ref": c.FullRef(),
"name": c.Name(),
"status": c.Status(),
"created_at": c.record.CreatedAt.AsTime().Format(time.RFC3339Nano),
"total_steps": c.record.NumTotalSteps,
"completed_steps": c.record.NumCompletedSteps,
"cached_steps": c.record.NumCachedSteps,
}
if c.record.CompletedAt != nil {
m["completed_at"] = c.record.CompletedAt.AsTime().Format(time.RFC3339Nano)
}
return json.Marshal(m)
}
func (c *lsContext) Ref() string {
return c.record.Ref
}
func (c *lsContext) FullRef() string {
return fmt.Sprintf("%s/%s/%s", c.record.node.Builder, c.record.node.Name, c.record.Ref)
}
func (c *lsContext) Name() string {
name := c.record.name
if c.trunc && c.format.IsTable() {
return trimBeginning(name, 36)
}
return name
}
func (c *lsContext) Status() string {
if c.record.CompletedAt != nil {
if c.record.Error != nil {
return "Error"
}
return "Completed"
}
return "Running"
}
func (c *lsContext) CreatedAt() string {
return units.HumanDuration(time.Since(c.record.CreatedAt.AsTime())) + " ago"
}
func (c *lsContext) Duration() string {
lastTime := c.record.currentTimestamp
if c.record.CompletedAt != nil {
tm := c.record.CompletedAt.AsTime()
lastTime = &tm
}
if lastTime == nil {
return ""
}
v := formatDuration(lastTime.Sub(c.record.CreatedAt.AsTime()))
if c.record.CompletedAt == nil {
v += "+"
}
return v
}
func (c *lsContext) Link() string {
url := desktop.BuildURL(c.FullRef())
if c.format.IsTable() {
if c.isTerm {
return desktop.ANSIHyperlink(url, "Open")
}
return ""
}
return url
}

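The formatter wiring above means --format accepts the usual Go template over the lsContext fields (Ref, Name, Status, CreatedAt, Duration, Link). Illustrative invocations:

# failed builds for the current git repository only
docker buildx history ls --local --filter status=error

# custom template output
docker buildx history ls --format '{{.Ref}}: {{.Status}} ({{.Duration}})'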
63
commands/history/open.go Normal file
View File

@@ -0,0 +1,63 @@
package history
import (
"context"
"fmt"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/desktop"
"github.com/docker/cli/cli/command"
"github.com/pkg/browser"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
type openOptions struct {
builder string
ref string
}
func runOpen(ctx context.Context, dockerCli command.Cli, opts openOptions) error {
nodes, err := loadNodes(ctx, dockerCli, opts.builder)
if err != nil {
return err
}
recs, err := queryRecords(ctx, opts.ref, nodes, nil)
if err != nil {
return err
}
if len(recs) == 0 {
if opts.ref == "" {
return errors.New("no records found")
}
return errors.Errorf("no record found for ref %q", opts.ref)
}
rec := &recs[0]
url := desktop.BuildURL(fmt.Sprintf("%s/%s/%s", rec.node.Builder, rec.node.Name, rec.Ref))
return browser.OpenURL(url)
}
func openCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options openOptions
cmd := &cobra.Command{
Use: "open [OPTIONS] [REF]",
Short: "Open a build record in Docker Desktop",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
options.builder = *rootOpts.Builder
return runOpen(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
return cmd
}

140
commands/history/rm.go Normal file
View File

@@ -0,0 +1,140 @@
package history
import (
"context"
"io"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
"github.com/hashicorp/go-multierror"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type rmOptions struct {
builder string
refs []string
all bool
}
func runRm(ctx context.Context, dockerCli command.Cli, opts rmOptions) error {
nodes, err := loadNodes(ctx, dockerCli, opts.builder)
if err != nil {
return err
}
errs := make([][]error, len(opts.refs))
for i := range errs {
errs[i] = make([]error, len(nodes))
}
eg, ctx := errgroup.WithContext(ctx)
for i, node := range nodes {
eg.Go(func() error {
if node.Driver == nil {
return nil
}
c, err := node.Driver.Client(ctx)
if err != nil {
return err
}
refs := opts.refs
if opts.all {
serv, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
EarlyExit: true,
})
if err != nil {
return err
}
defer serv.CloseSend()
for {
resp, err := serv.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break
}
return err
}
if resp.Type == controlapi.BuildHistoryEventType_COMPLETE {
refs = append(refs, resp.Record.Ref)
}
}
}
for j, ref := range refs {
_, err = c.ControlClient().UpdateBuildHistory(ctx, &controlapi.UpdateBuildHistoryRequest{
Ref: ref,
Delete: true,
})
if opts.all {
if err != nil {
return err
}
} else {
errs[j][i] = err
}
}
return nil
})
}
if err := eg.Wait(); err != nil {
return err
}
var out []error
loop0:
for _, nodeErrs := range errs {
var nodeErr error
for _, err1 := range nodeErrs {
if err1 == nil {
continue loop0
}
if nodeErr == nil {
nodeErr = err1
} else {
nodeErr = multierror.Append(nodeErr, err1)
}
}
out = append(out, nodeErr)
}
if len(out) == 0 {
return nil
}
if len(out) == 1 {
return out[0]
}
return multierror.Append(out[0], out[1:]...)
}
func rmCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options rmOptions
cmd := &cobra.Command{
Use: "rm [OPTIONS] [REF...]",
Short: "Remove build records",
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 0 && !options.all {
return errors.New("rm requires at least one argument")
}
if len(args) > 0 && options.all {
return errors.New("rm requires either --all or at least one argument")
}
options.refs = args
options.builder = *rootOpts.Builder
return runRm(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.BoolVar(&options.all, "all", false, "Remove all build records")
return cmd
}

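Usage sketch (refs are placeholders); as the RunE validation above enforces, --all and explicit refs are mutually exclusive:

# remove selected records
docker buildx history rm <ref1> <ref2>

# remove every record on the builder
docker buildx history rm --all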
35
commands/history/root.go Normal file
View File

@@ -0,0 +1,35 @@
package history
import (
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/cli/cli/command"
"github.com/spf13/cobra"
)
type RootOptions struct {
Builder *string
}
func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "history",
Short: "Commands to work on build records",
ValidArgsFunction: completion.Disable,
RunE: rootcmd.RunE,
DisableFlagsInUseLine: true,
}
cmd.AddCommand(
lsCmd(dockerCli, opts),
rmCmd(dockerCli, opts),
logsCmd(dockerCli, opts),
inspectCmd(dockerCli, opts),
openCmd(dockerCli, opts),
traceCmd(dockerCli, opts),
importCmd(dockerCli, opts),
exportCmd(dockerCli, opts),
)
return cmd
}

211
commands/history/trace.go Normal file
View File

@@ -0,0 +1,211 @@
package history
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net"
"os"
"time"
"github.com/containerd/console"
"github.com/containerd/containerd/v2/core/content/proxy"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/otelutil"
"github.com/docker/buildx/util/otelutil/jaeger"
"github.com/docker/cli/cli/command"
"github.com/opencontainers/go-digest"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/browser"
"github.com/pkg/errors"
"github.com/spf13/cobra"
jaegerui "github.com/tonistiigi/jaeger-ui-rest"
)
type traceOptions struct {
builder string
ref string
addr string
compare string
}
func loadTrace(ctx context.Context, ref string, nodes []builder.Node) (string, []byte, error) {
recs, err := queryRecords(ctx, ref, nodes, &queryOptions{
CompletedOnly: true,
})
if err != nil {
return "", nil, err
}
if len(recs) == 0 {
if ref == "" {
return "", nil, errors.New("no records found")
}
return "", nil, errors.Errorf("no record found for ref %q", ref)
}
rec := &recs[0]
if rec.CompletedAt == nil {
return "", nil, errors.Errorf("build %q is not completed, only completed builds can be traced", rec.Ref)
}
if rec.Trace == nil {
// build is complete but no trace yet. try to finalize the trace
time.Sleep(1 * time.Second) // give some extra time for last parts of trace to be written
err := finalizeRecord(ctx, rec.Ref, []builder.Node{*rec.node})
if err != nil {
return "", nil, err
}
recs, err := queryRecords(ctx, rec.Ref, []builder.Node{*rec.node}, &queryOptions{
CompletedOnly: true,
})
if err != nil {
return "", nil, err
}
if len(recs) == 0 {
return "", nil, errors.Errorf("build record %q was deleted", rec.Ref)
}
rec = &recs[0]
if rec.Trace == nil {
return "", nil, errors.Errorf("build record %q is missing a trace", rec.Ref)
}
}
c, err := rec.node.Driver.Client(ctx)
if err != nil {
return "", nil, err
}
store := proxy.NewContentStore(c.ContentClient())
ra, err := store.ReaderAt(ctx, ocispecs.Descriptor{
Digest: digest.Digest(rec.Trace.Digest),
MediaType: rec.Trace.MediaType,
Size: rec.Trace.Size,
})
if err != nil {
return "", nil, err
}
spans, err := otelutil.ParseSpanStubs(io.NewSectionReader(ra, 0, ra.Size()))
if err != nil {
return "", nil, err
}
wrapper := struct {
Data []jaeger.Trace `json:"data"`
}{
Data: spans.JaegerData().Data,
}
if len(wrapper.Data) == 0 {
return "", nil, errors.New("no trace data")
}
buf := &bytes.Buffer{}
enc := json.NewEncoder(buf)
enc.SetIndent("", " ")
if err := enc.Encode(wrapper); err != nil {
return "", nil, err
}
return string(wrapper.Data[0].TraceID), buf.Bytes(), nil
}
func runTrace(ctx context.Context, dockerCli command.Cli, opts traceOptions) error {
nodes, err := loadNodes(ctx, dockerCli, opts.builder)
if err != nil {
return err
}
traceID, data, err := loadTrace(ctx, opts.ref, nodes)
if err != nil {
return err
}
srv := jaegerui.NewServer(jaegerui.Config{})
if err := srv.AddTrace(traceID, bytes.NewReader(data)); err != nil {
return err
}
url := "/trace/" + traceID
if opts.compare != "" {
traceIDcomp, data, err := loadTrace(ctx, opts.compare, nodes)
if err != nil {
return errors.Wrapf(err, "failed to load trace for %s", opts.compare)
}
if err := srv.AddTrace(traceIDcomp, bytes.NewReader(data)); err != nil {
return err
}
url = "/trace/" + traceIDcomp + "..." + traceID
}
var term bool
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
term = true
}
if !term && opts.compare == "" {
fmt.Fprintln(dockerCli.Out(), string(data))
return nil
}
ln, err := net.Listen("tcp", opts.addr)
if err != nil {
return err
}
go func() {
time.Sleep(100 * time.Millisecond)
browser.OpenURL(url)
}()
url = "http://" + ln.Addr().String() + url
fmt.Fprintf(dockerCli.Err(), "Trace available at %s\n", url)
go func() {
<-ctx.Done()
ln.Close()
}()
err = srv.Serve(ln)
if err != nil {
select {
case <-ctx.Done():
return nil
default:
}
}
return err
}
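// traceCmd defines "buildx history trace". REF may be a build ref or an offset of the form "^N" (handled by queryRecords), e.g. "^1" for the previous build.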
func traceCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
var options traceOptions
cmd := &cobra.Command{
Use: "trace [OPTIONS] [REF]",
Short: "Show the OpenTelemetry trace of a build record",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
options.ref = args[0]
}
options.builder = *rootOpts.Builder
return runTrace(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.StringVar(&options.addr, "addr", "127.0.0.1:0", "Address to bind the UI server")
flags.StringVar(&options.compare, "compare", "", "Compare with another build record")
return cmd
}

commands/history/utils.go (new file)

@@ -0,0 +1,453 @@
package history
import (
"bytes"
"context"
"encoding/csv"
"fmt"
"io"
"path/filepath"
"slices"
"strconv"
"strings"
"sync"
"time"
"github.com/docker/buildx/build"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/localstate"
"github.com/docker/cli/cli/command"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/frontend/dockerfile/dfgitutil"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
const recordsLimit = 50
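// buildName derives a human-readable name for a build record from its frontend attributes (target, context path, dockerfile path, VCS source) and the locally stored build state.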
func buildName(fattrs map[string]string, ls *localstate.State) string {
if v, ok := fattrs["build-arg:BUILDKIT_BUILD_NAME"]; ok && v != "" {
return v
}
var res string
var target, contextPath, dockerfilePath, vcsSource string
if v, ok := fattrs["target"]; ok {
target = v
}
if v, ok := fattrs["context"]; ok {
contextPath = filepath.ToSlash(v)
} else if v, ok := fattrs["vcs:localdir:context"]; ok && v != "." {
contextPath = filepath.ToSlash(v)
}
if v, ok := fattrs["vcs:source"]; ok {
vcsSource = v
}
if v, ok := fattrs["filename"]; ok && v != "Dockerfile" {
dockerfilePath = filepath.ToSlash(v)
}
if v, ok := fattrs["vcs:localdir:dockerfile"]; ok && v != "." {
dockerfilePath = filepath.ToSlash(filepath.Join(v, dockerfilePath))
}
var localPath string
if ls != nil && !build.IsRemoteURL(ls.LocalPath) {
if ls.LocalPath != "" && ls.LocalPath != "-" {
localPath = filepath.ToSlash(ls.LocalPath)
}
if ls.DockerfilePath != "" && ls.DockerfilePath != "-" && ls.DockerfilePath != "Dockerfile" {
dockerfilePath = filepath.ToSlash(ls.DockerfilePath)
}
}
// remove default dockerfile name
const defaultFilename = "/Dockerfile"
hasDefaultFileName := strings.HasSuffix(dockerfilePath, defaultFilename) || dockerfilePath == ""
dockerfilePath = strings.TrimSuffix(dockerfilePath, defaultFilename)
// dockerfile is a subpath of context
if strings.HasPrefix(dockerfilePath, localPath) && len(dockerfilePath) > len(localPath) {
res = dockerfilePath[strings.LastIndex(localPath, "/")+1:]
} else {
// Otherwise, use basename
bpath := localPath
if len(dockerfilePath) > 0 {
bpath = dockerfilePath
}
if len(bpath) > 0 {
lidx := strings.LastIndex(bpath, "/")
res = bpath[lidx+1:]
if !hasDefaultFileName {
if lidx != -1 {
res = filepath.ToSlash(filepath.Join(filepath.Base(bpath[:lidx]), res))
} else {
res = filepath.ToSlash(filepath.Join(filepath.Base(bpath), res))
}
}
}
}
if len(contextPath) > 0 {
res = contextPath
}
if len(target) > 0 {
if len(res) > 0 {
res = res + " (" + target + ")"
} else {
res = target
}
}
if res == "" && vcsSource != "" {
return vcsSource
}
return res
}
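// trimBeginning shortens s to at most n characters by replacing its beginning with "..", e.g. trimBeginning("0123456789", 6) == "..6789".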
func trimBeginning(s string, n int) string {
if len(s) <= n {
return s
}
return ".." + s[len(s)-n+2:]
}
type historyRecord struct {
*controlapi.BuildHistoryRecord
currentTimestamp *time.Time
node *builder.Node
name string
}
type queryOptions struct {
CompletedOnly bool
Filters []string
}
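// queryRecords fetches build history records from all given nodes, sorted newest first. A ref of the form "^N" instead selects the single record at offset N, e.g. "^1" is the previous build.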
func queryRecords(ctx context.Context, ref string, nodes []builder.Node, opts *queryOptions) ([]historyRecord, error) {
var mu sync.Mutex
var out []historyRecord
var offset *int
if strings.HasPrefix(ref, "^") {
off, err := strconv.Atoi(ref[1:])
if err != nil {
return nil, errors.Wrapf(err, "invalid offset %q", ref)
}
offset = &off
ref = ""
}
var filters []string
if opts != nil {
filters = opts.Filters
}
eg, ctx := errgroup.WithContext(ctx)
for _, node := range nodes {
eg.Go(func() error {
if node.Driver == nil {
return nil
}
var records []historyRecord
c, err := node.Driver.Client(ctx)
if err != nil {
return err
}
var matchers []matchFunc
if len(filters) > 0 {
filters, matchers, err = dockerFiltersToBuildkit(filters)
if err != nil {
return err
}
sb := bytes.NewBuffer(nil)
w := csv.NewWriter(sb)
w.Write(filters)
w.Flush()
filters = []string{strings.TrimSuffix(sb.String(), "\n")}
}
serv, err := c.ControlClient().ListenBuildHistory(ctx, &controlapi.BuildHistoryRequest{
EarlyExit: true,
Ref: ref,
Limit: recordsLimit,
Filter: filters,
})
if err != nil {
return err
}
md, err := serv.Header()
if err != nil {
return err
}
var ts *time.Time
if v, ok := md[headerKeyTimestamp]; ok {
t, err := time.Parse(time.RFC3339Nano, v[0])
if err != nil {
return err
}
ts = &t
}
defer serv.CloseSend()
loop0:
for {
he, err := serv.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break
}
return err
}
if he.Type == controlapi.BuildHistoryEventType_DELETED || he.Record == nil {
continue
}
if opts != nil && opts.CompletedOnly && he.Type != controlapi.BuildHistoryEventType_COMPLETE {
continue
}
// for older buildkit daemons that don't support server-side filters, apply the matchers locally
for _, matcher := range matchers {
if !matcher(he.Record) {
continue loop0
}
}
records = append(records, historyRecord{
BuildHistoryRecord: he.Record,
currentTimestamp: ts,
node: &node,
})
}
mu.Lock()
out = append(out, records...)
mu.Unlock()
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, err
}
slices.SortFunc(out, func(a, b historyRecord) int {
return b.CreatedAt.AsTime().Compare(a.CreatedAt.AsTime())
})
if offset != nil {
var filtered []historyRecord
for _, r := range out {
if *offset > 0 {
*offset--
continue
}
filtered = append(filtered, r)
break
}
if *offset > 0 {
return nil, errors.Errorf("no completed build found with offset %d", *offset)
}
out = filtered
}
return out, nil
}
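// finalizeRecord asks every node to finalize the given history record so that its trace is flushed to the content store.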
func finalizeRecord(ctx context.Context, ref string, nodes []builder.Node) error {
eg, ctx := errgroup.WithContext(ctx)
for _, node := range nodes {
eg.Go(func() error {
if node.Driver == nil {
return nil
}
c, err := node.Driver.Client(ctx)
if err != nil {
return err
}
_, err = c.ControlClient().UpdateBuildHistory(ctx, &controlapi.UpdateBuildHistoryRequest{
Ref: ref,
Finalize: true,
})
return err
})
}
return eg.Wait()
}
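// formatDuration renders durations under a minute with one decimal, e.g. "4.2s", and longer ones as minutes and seconds, e.g. "1m 30s".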
func formatDuration(d time.Duration) string {
if d < time.Minute {
return fmt.Sprintf("%.1fs", d.Seconds())
}
return fmt.Sprintf("%dm %2ds", int(d.Minutes()), int(d.Seconds())%60)
}
type matchFunc func(*controlapi.BuildHistoryRecord) bool
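// dockerFiltersToBuildkit translates docker-style filters into buildkit history filters and, for older buildkit daemons that ignore server-side filtering, equivalent client-side matchers. For example "status=error" becomes "status==error" (exact match) and "ref=foo" becomes "ref~=foo" (substring match), while comparisons such as "duration>1m" pass through unchanged.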
func dockerFiltersToBuildkit(in []string) ([]string, []matchFunc, error) {
out := []string{}
matchers := []matchFunc{}
for _, f := range in {
key, value, sep, found := cutAny(f, "!=", "<=", ">=", "=", "<", ">")
if !found {
return nil, nil, errors.Errorf("invalid filter %q", f)
}
switch key {
case "ref", "repository", "status":
if sep != "=" && sep != "!=" {
return nil, nil, errors.Errorf("invalid separator for %q, expected = or !=", f)
}
matchers = append(matchers, valueFilter(key, value, sep))
if sep == "=" {
if key == "status" {
sep = "=="
} else {
sep = "~="
}
}
case "startedAt", "completedAt", "duration":
if sep == "=" || sep == "!=" {
return nil, nil, errors.Errorf("invalid separator for %q, expected <=, <, >= or >", f)
}
matcher, err := timeBasedFilter(key, value, sep)
if err != nil {
return nil, nil, err
}
matchers = append(matchers, matcher)
default:
return nil, nil, errors.Errorf("unsupported filter %q", f)
}
out = append(out, key+sep+value)
}
return out, matchers, nil
}
func valueFilter(key, value, sep string) matchFunc {
return func(rec *controlapi.BuildHistoryRecord) bool {
var recValue string
switch key {
case "ref":
recValue = rec.Ref
case "repository":
v, ok := rec.FrontendAttrs["vcs:source"]
if ok {
recValue = v
} else {
if context, ok := rec.FrontendAttrs["context"]; ok {
if ref, _, err := dfgitutil.ParseGitRef(context); err == nil {
recValue = ref.Remote
}
}
}
case "status":
if rec.CompletedAt != nil {
if rec.Error != nil {
if strings.Contains(rec.Error.Message, "context canceled") {
recValue = "canceled"
} else {
recValue = "error"
}
} else {
recValue = "completed"
}
} else {
recValue = "running"
}
}
switch sep {
case "=":
if key == "status" {
return recValue == value
}
return strings.Contains(recValue, value)
case "!=":
return recValue != value
default:
return false
}
}
}
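// timeBasedFilter builds a matcher for startedAt, completedAt and duration. Time values may be a relative duration or an RFC3339 timestamp; "startedAt<24h" matches records created more than 24 hours ago, because the duration is subtracted from the current time before comparing.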
func timeBasedFilter(key, value, sep string) (matchFunc, error) {
var cmp int64
switch key {
case "startedAt", "completedAt":
v, err := time.ParseDuration(value)
if err == nil {
tm := time.Now().Add(-v)
cmp = tm.Unix()
} else {
tm, err := time.Parse(time.RFC3339, value)
if err != nil {
return nil, errors.Errorf("invalid time %s", value)
}
cmp = tm.Unix()
}
case "duration":
v, err := time.ParseDuration(value)
if err != nil {
return nil, errors.Errorf("invalid duration %s", value)
}
cmp = int64(v)
default:
return nil, nil
}
return func(rec *controlapi.BuildHistoryRecord) bool {
var val int64
switch key {
case "startedAt":
val = rec.CreatedAt.AsTime().Unix()
case "completedAt":
if rec.CompletedAt != nil {
val = rec.CompletedAt.AsTime().Unix()
}
case "duration":
if rec.CompletedAt != nil {
val = int64(rec.CompletedAt.AsTime().Sub(rec.CreatedAt.AsTime()))
}
}
switch sep {
case ">=":
return val >= cmp
case "<=":
return val <= cmp
case ">":
return val > cmp
default:
return val < cmp
}
}, nil
}
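// cutAny is like strings.Cut but tries multiple separators, splitting on the first separator found in s. Multi-character operators such as "!=", "<=" and ">=" must be listed before their single-character suffixes, otherwise "=" would match first.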
func cutAny(s string, seps ...string) (before, after, sep string, found bool) {
for _, sep := range seps {
if idx := strings.Index(s, sep); idx != -1 {
return s[:idx], s[idx+len(sep):], sep, true
}
}
return s, "", "", false
}
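// loadNodes resolves the named builder, boots it if necessary, and returns its nodes, failing if any node reports an error.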
func loadNodes(ctx context.Context, dockerCli command.Cli, builderName string) ([]builder.Node, error) {
b, err := builder.New(dockerCli, builder.WithName(builderName))
if err != nil {
return nil, err
}
nodes, err := b.LoadNodes(ctx, builder.WithData())
if err != nil {
return nil, err
}
if ok, err := b.Boot(ctx); err != nil {
return nil, err
} else if ok {
nodes, err = b.LoadNodes(ctx, builder.WithData())
if err != nil {
return nil, err
}
}
for _, node := range nodes {
if node.Err != nil {
return nil, node.Err
}
}
return nodes, nil
}
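// A minimal usage sketch (illustrative only, not part of this diff), assuming
// an empty builder name resolves to the currently selected builder:
//
//	nodes, err := loadNodes(ctx, dockerCli, "")
//	if err != nil {
//		return err
//	}
//	recs, err := queryRecords(ctx, "^0", nodes, &queryOptions{CompletedOnly: true})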

commands/imagetools/create.go

@@ -9,13 +9,14 @@ import (
"github.com/distribution/reference"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/util/buildflags"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/imagetools"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli/command"
"github.com/moby/buildkit/util/progress/progressui"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
@@ -29,6 +30,7 @@ type createOptions struct {
dryrun bool
actionAppend bool
progress string
preferIndex bool
}
func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, args []string) error {
@@ -40,7 +42,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
return errors.Errorf("can't push with no tags specified, please set --tag or --dry-run")
}
fileArgs := make([]string, len(in.files))
fileArgs := make([]string, len(in.files), len(in.files)+len(args))
for i, f := range in.files {
dt, err := os.ReadFile(f)
if err != nil {
@@ -153,7 +155,12 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
}
dt, desc, err := r.Combine(ctx, srcs, in.annotations)
annotations, err := buildflags.ParseAnnotations(in.annotations)
if err != nil {
return errors.Wrapf(err, "failed to parse annotations")
}
dt, desc, err := r.Combine(ctx, srcs, annotations, in.preferIndex)
if err != nil {
return err
}
@@ -166,8 +173,8 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
// new resolver, because we need new auth
r = imagetools.New(imageopt)
ctx2, cancel := context.WithCancel(context.TODO())
defer cancel()
ctx2, cancel := context.WithCancelCause(context.TODO())
defer func() { cancel(errors.WithStack(context.Canceled)) }()
printer, err := progress.NewPrinter(ctx2, os.Stderr, progressui.DisplayMode(in.progress))
if err != nil {
return err
@@ -177,7 +184,6 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
pw := progress.WithPrefix(printer, "internal", true)
for _, t := range tags {
t := t
eg.Go(func() error {
return progress.Wrap(fmt.Sprintf("pushing %s", t.String()), pw.Write, func(sub progress.SubLogger) error {
eg2, _ := errgroup.WithContext(ctx)
@@ -187,7 +193,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
}
s := s
eg2.Go(func() error {
sub.Log(1, []byte(fmt.Sprintf("copying %s from %s to %s\n", s.Desc.Digest.String(), s.Ref.String(), t.String())))
sub.Log(1, fmt.Appendf(nil, "copying %s from %s to %s\n", s.Desc.Digest.String(), s.Ref.String(), t.String()))
return r.Copy(ctx, s, t)
})
}
@@ -195,7 +201,7 @@ func runCreate(ctx context.Context, dockerCli command.Cli, in createOptions, arg
if err := eg2.Wait(); err != nil {
return err
}
sub.Log(1, []byte(fmt.Sprintf("pushing %s to %s\n", desc.Digest.String(), t.String())))
sub.Log(1, fmt.Appendf(nil, "pushing %s to %s\n", desc.Digest.String(), t.String()))
return r.Push(ctx, t, desc, dt)
})
})
@@ -239,7 +245,7 @@ func parseSource(in string) (*imagetools.Source, error) {
dgst, err := digest.Parse(in)
if err == nil {
return &imagetools.Source{
Desc: ocispec.Descriptor{
Desc: ocispecs.Descriptor{
Digest: dgst,
},
}, nil
@@ -267,13 +273,14 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
var options createOptions
cmd := &cobra.Command{
Use: "create [OPTIONS] [SOURCE] [SOURCE...]",
Use: "create [OPTIONS] [SOURCE...]",
Short: "Create a new image based on source images",
RunE: func(cmd *cobra.Command, args []string) error {
options.builder = *opts.Builder
return runCreate(cmd.Context(), dockerCli, options, args)
},
ValidArgsFunction: completion.Disable,
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
@@ -281,15 +288,16 @@ func createCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
flags.StringArrayVarP(&options.tags, "tag", "t", []string{}, "Set reference for new image")
flags.BoolVar(&options.dryrun, "dry-run", false, "Show final image instead of pushing")
flags.BoolVar(&options.actionAppend, "append", false, "Append to existing manifest")
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
flags.StringVar(&options.progress, "progress", "auto", `Set type of progress output ("auto", "plain", "tty", "rawjson"). Use plain to show container output`)
flags.StringArrayVarP(&options.annotations, "annotation", "", []string{}, "Add annotation to the image")
flags.BoolVar(&options.preferIndex, "prefer-index", true, "When only a single source is specified, prefer outputting an image index or manifest list instead of performing a carbon copy")
return cmd
}
func mergeDesc(d1, d2 ocispec.Descriptor) (ocispec.Descriptor, error) {
func mergeDesc(d1, d2 ocispecs.Descriptor) (ocispecs.Descriptor, error) {
if d2.Size != 0 && d1.Size != d2.Size {
return ocispec.Descriptor{}, errors.Errorf("invalid size mismatch for %s, %d != %d", d1.Digest, d2.Size, d1.Size)
return ocispecs.Descriptor{}, errors.Errorf("invalid size mismatch for %s, %d != %d", d1.Digest, d2.Size, d1.Size)
}
if d2.MediaType != "" {
d1.MediaType = d2.MediaType

commands/imagetools/inspect.go

@@ -52,7 +52,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts RootOptions) *cobra.Command {
options.builder = *rootOpts.Builder
return runInspect(cmd.Context(), dockerCli, options, args[0])
},
ValidArgsFunction: completion.Disable,
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()

commands/imagetools/root.go

@@ -10,11 +10,13 @@ type RootOptions struct {
Builder *string
}
func RootCmd(dockerCli command.Cli, opts RootOptions) *cobra.Command {
func RootCmd(rootcmd *cobra.Command, dockerCli command.Cli, opts RootOptions) *cobra.Command {
cmd := &cobra.Command{
Use: "imagetools",
Short: "Commands to work on images in registry",
ValidArgsFunction: completion.Disable,
Use: "imagetools",
Short: "Commands to work on images in registry",
ValidArgsFunction: completion.Disable,
RunE: rootcmd.RunE,
DisableFlagsInUseLine: true,
}
cmd.AddCommand(

commands/inspect.go

@@ -17,6 +17,7 @@ import (
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
"github.com/docker/go-units"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -34,8 +35,9 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
nodes, err := b.LoadNodes(timeoutCtx, builder.WithData())
if in.bootstrap {
@@ -52,8 +54,8 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
fmt.Fprintf(w, "Name:\t%s\n", b.Name)
fmt.Fprintf(w, "Driver:\t%s\n", b.Driver)
if !b.NodeGroup.LastActivity.IsZero() {
fmt.Fprintf(w, "Last Activity:\t%v\n", b.NodeGroup.LastActivity)
if !b.LastActivity.IsZero() {
fmt.Fprintf(w, "Last Activity:\t%v\n", b.LastActivity)
}
if err != nil {
@@ -113,6 +115,25 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
fmt.Fprintf(w, "\t%s:\t%s\n", k, v)
}
}
if len(nodes[i].CDIDevices) > 0 {
fmt.Fprintf(w, "Devices:\n")
for _, dev := range nodes[i].CDIDevices {
fmt.Fprintf(w, "\tName:\t%s\n", dev.Name)
if dev.OnDemand {
fmt.Fprintf(w, "\tOn-Demand:\t%v\n", dev.OnDemand)
} else {
fmt.Fprintf(w, "\tAutomatically allowed:\t%v\n", dev.AutoAllow)
}
if len(dev.Annotations) > 0 {
fmt.Fprintf(w, "\tAnnotations:\n")
for k, v := range dev.Annotations {
fmt.Fprintf(w, "\t\t%s:\t%s\n", k, v)
}
}
}
}
for ri, rule := range nodes[i].GCPolicy {
fmt.Fprintf(w, "GC Policy rule#%d:\n", ri)
fmt.Fprintf(w, "\tAll:\t%v\n", rule.All)
@@ -122,8 +143,20 @@ func runInspect(ctx context.Context, dockerCli command.Cli, in inspectOptions) e
if rule.KeepDuration > 0 {
fmt.Fprintf(w, "\tKeep Duration:\t%v\n", rule.KeepDuration.String())
}
if rule.KeepBytes > 0 {
fmt.Fprintf(w, "\tKeep Bytes:\t%s\n", units.BytesSize(float64(rule.KeepBytes)))
if rule.ReservedSpace > 0 {
fmt.Fprintf(w, "\tReserved Space:\t%s\n", units.BytesSize(float64(rule.ReservedSpace)))
}
if rule.MaxUsedSpace > 0 {
fmt.Fprintf(w, "\tMax Used Space:\t%s\n", units.BytesSize(float64(rule.MaxUsedSpace)))
}
if rule.MinFreeSpace > 0 {
fmt.Fprintf(w, "\tMin Free Space:\t%s\n", units.BytesSize(float64(rule.MinFreeSpace)))
}
}
for f, dt := range nodes[i].Files {
fmt.Fprintf(w, "File#%s:\n", f)
for _, line := range strings.Split(string(dt), "\n") {
fmt.Fprintf(w, "\t> %s\n", line)
}
}
}
@@ -149,7 +182,8 @@ func inspectCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runInspect(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
ValidArgsFunction: completion.BuilderNames(dockerCli),
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()

commands/install.go

@@ -15,7 +15,7 @@ import (
type installOptions struct {
}
func runInstall(dockerCli command.Cli, in installOptions) error {
func runInstall(_ command.Cli, _ installOptions) error {
dir := config.Dir()
if err := os.MkdirAll(dir, 0755); err != nil {
return errors.Wrap(err, "could not create docker config")
@@ -47,8 +47,9 @@ func installCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runInstall(dockerCli, options)
},
Hidden: true,
ValidArgsFunction: completion.Disable,
Hidden: true,
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
// hide builder persistent flag for this command

commands/ls.go

@@ -4,10 +4,12 @@ import (
"context"
"encoding/json"
"fmt"
"maps"
"sort"
"strings"
"time"
"github.com/containerd/platforms"
"github.com/docker/buildx/builder"
"github.com/docker/buildx/store"
"github.com/docker/buildx/store/storeutil"
@@ -17,6 +19,7 @@ import (
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/command/formatter"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
@@ -35,7 +38,8 @@
)
type lsOptions struct {
format string
format string
noTrunc bool
}
func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
@@ -55,8 +59,9 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx)
for _, b := range builders {
@@ -72,7 +77,7 @@ func runLs(ctx context.Context, dockerCli command.Cli, in lsOptions) error {
return err
}
if hasErrors, err := lsPrint(dockerCli, current, builders, in.format); err != nil {
if hasErrors, err := lsPrint(dockerCli, current, builders, in); err != nil {
return err
} else if hasErrors {
_, _ = fmt.Fprintf(dockerCli.Err(), "\n")
@@ -102,11 +107,13 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
return runLs(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.StringVar(&options.format, "format", formatter.TableFormatKey, "Format the output")
flags.BoolVar(&options.noTrunc, "no-trunc", false, "Don't truncate output")
// hide builder persistent flag for this command
cobrautil.HideInheritedFlags(cmd, "builder")
@@ -114,14 +121,15 @@ func lsCmd(dockerCli command.Cli) *cobra.Command {
return cmd
}
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, format string) (hasErrors bool, _ error) {
if format == formatter.TableFormatKey {
format = lsDefaultTableFormat
func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builder.Builder, in lsOptions) (hasErrors bool, _ error) {
if in.format == formatter.TableFormatKey {
in.format = lsDefaultTableFormat
}
ctx := formatter.Context{
Output: dockerCli.Out(),
Format: formatter.Format(format),
Format: formatter.Format(in.format),
Trunc: !in.noTrunc,
}
sort.SliceStable(builders, func(i, j int) bool {
@@ -138,11 +146,12 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
render := func(format func(subContext formatter.SubContext) error) error {
for _, b := range builders {
if err := format(&lsContext{
format: ctx.Format,
trunc: ctx.Trunc,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
},
format: ctx.Format,
}); err != nil {
return err
}
@@ -152,6 +161,9 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
}
continue
}
if ctx.Format.IsJSON() {
continue
}
for _, n := range b.Nodes() {
if n.Err != nil {
if ctx.Format.IsTable() {
@@ -160,6 +172,7 @@ func lsPrint(dockerCli command.Cli, current *store.NodeGroup, builders []*builde
}
if err := format(&lsContext{
format: ctx.Format,
trunc: ctx.Trunc,
Builder: &lsBuilder{
Builder: b,
Current: b.Name == current.Name,
@@ -196,11 +209,22 @@ type lsContext struct {
Builder *lsBuilder
format formatter.Format
trunc bool
node builder.Node
}
func (c *lsContext) MarshalJSON() ([]byte, error) {
return json.Marshal(c.Builder)
// can't marshal c.Builder directly because Builder type has custom MarshalJSON
dt, err := json.Marshal(c.Builder.Builder)
if err != nil {
return nil, err
}
var m map[string]any
if err := json.Unmarshal(dt, &m); err != nil {
return nil, err
}
m["Current"] = c.Builder.Current
return json.Marshal(m)
}
func (c *lsContext) Name() string {
@@ -261,7 +285,11 @@ func (c *lsContext) Platforms() string {
if c.node.Name == "" {
return ""
}
return strings.Join(platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms), ", ")
pfs := platformutil.FormatInGroups(c.node.Node.Platforms, c.node.Platforms)
if c.trunc && c.format.IsTable() {
return truncPlatforms(pfs, 4).String()
}
return strings.Join(pfs, ", ")
}
func (c *lsContext) Error() string {
@@ -272,3 +300,131 @@ func (c *lsContext) Error() string {
}
return ""
}
var truncMajorPlatforms = []string{
"linux/amd64",
"linux/arm64",
"linux/arm",
"linux/ppc64le",
"linux/s390x",
"linux/riscv64",
"linux/mips64",
}
type truncatedPlatforms struct {
res map[string][]string
input []string
max int
}
func (tp truncatedPlatforms) List() map[string][]string {
return tp.res
}
func (tp truncatedPlatforms) String() string {
var out []string
var count int
var keys []string
for k := range tp.res {
keys = append(keys, k)
}
sort.Strings(keys)
seen := make(map[string]struct{})
for _, mpf := range truncMajorPlatforms {
if tpf, ok := tp.res[mpf]; ok {
seen[mpf] = struct{}{}
if len(tpf) == 1 {
out = append(out, tpf[0])
count++
} else {
hasPreferredPlatform := false
for _, pf := range tpf {
if strings.HasSuffix(pf, "*") {
hasPreferredPlatform = true
break
}
}
mainpf := mpf
if hasPreferredPlatform {
mainpf += "*"
}
out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tpf)))
count += len(tpf)
}
}
}
for _, mpf := range keys {
if len(out) >= tp.max {
break
}
if _, ok := seen[mpf]; ok {
continue
}
if len(tp.res[mpf]) == 1 {
out = append(out, tp.res[mpf][0])
count++
} else {
hasPreferredPlatform := false
for _, pf := range tp.res[mpf] {
if strings.HasSuffix(pf, "*") {
hasPreferredPlatform = true
break
}
}
mainpf := mpf
if hasPreferredPlatform {
mainpf += "*"
}
out = append(out, fmt.Sprintf("%s (+%d)", mainpf, len(tp.res[mpf])))
count += len(tp.res[mpf])
}
}
left := len(tp.input) - count
if left > 0 {
out = append(out, fmt.Sprintf("(%d more)", left))
}
return strings.Join(out, ", ")
}
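// truncPlatforms groups the given platform strings by os/arch, preferring well-known major platforms, and keeps at most max groups for display.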
func truncPlatforms(pfs []string, max int) truncatedPlatforms {
res := make(map[string][]string)
for _, mpf := range truncMajorPlatforms {
for _, pf := range pfs {
if len(res) >= max {
break
}
pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
if err != nil {
continue
}
if pp.OS+"/"+pp.Architecture == mpf {
res[mpf] = append(res[mpf], pf)
}
}
}
left := make(map[string][]string)
for _, pf := range pfs {
if len(res) >= max {
break
}
pp, err := platforms.Parse(strings.TrimSuffix(pf, "*"))
if err != nil {
continue
}
ppf := strings.TrimSuffix(pp.OS+"/"+pp.Architecture, "*")
if _, ok := res[ppf]; !ok {
left[ppf] = append(left[ppf], pf)
}
}
maps.Copy(res, left)
return truncatedPlatforms{
res: res,
input: pfs,
max: max,
}
}

commands/ls_test.go (new file)

@@ -0,0 +1,173 @@
package commands
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestTruncPlatforms(t *testing.T) {
tests := []struct {
name string
platforms []string
max int
expectedList map[string][]string
expectedOut string
}{
{
name: "arm64 preferred and emulated",
platforms: []string{"linux/arm64*", "linux/amd64", "linux/amd64/v2", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/386", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64",
"linux/amd64/v2",
},
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64*",
},
"linux/ppc64le": {
"linux/ppc64le",
},
},
expectedOut: "linux/amd64 (+2), linux/arm64*, linux/arm (+2), linux/ppc64le, (5 more)",
},
{
name: "riscv64 preferred only",
platforms: []string{"linux/riscv64*"},
max: 4,
expectedList: map[string][]string{
"linux/riscv64": {
"linux/riscv64*",
},
},
expectedOut: "linux/riscv64*",
},
{
name: "amd64 no preferred and emulated",
platforms: []string{"linux/amd64", "linux/amd64/v2", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64le", "linux/mips64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64",
"linux/amd64/v2",
"linux/amd64/v3",
},
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
"linux/ppc64le": {
"linux/ppc64le",
}},
expectedOut: "linux/amd64 (+3), linux/arm64, linux/arm (+2), linux/ppc64le, (5 more)",
},
{
name: "amd64 no preferred",
platforms: []string{"linux/amd64", "linux/386"},
max: 4,
expectedList: map[string][]string{
"linux/386": {
"linux/386",
},
"linux/amd64": {
"linux/amd64",
},
},
expectedOut: "linux/amd64, linux/386",
},
{
name: "arm64 no preferred",
platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6"},
max: 4,
expectedList: map[string][]string{
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
},
expectedOut: "linux/arm64, linux/arm (+2)",
},
{
name: "all preferred",
platforms: []string{"darwin/arm64*", "linux/arm64*", "linux/arm/v5*", "linux/arm/v6*", "linux/arm/v7*", "windows/arm64*"},
max: 4,
expectedList: map[string][]string{
"darwin/arm64": {
"darwin/arm64*",
},
"linux/arm": {
"linux/arm/v5*",
"linux/arm/v6*",
"linux/arm/v7*",
},
"linux/arm64": {
"linux/arm64*",
},
"windows/arm64": {
"windows/arm64*",
},
},
expectedOut: "linux/arm64*, linux/arm* (+3), darwin/arm64*, windows/arm64*",
},
{
name: "no major preferred",
platforms: []string{"linux/amd64/v2*", "linux/arm/v6*", "linux/mips64le*", "linux/amd64", "linux/amd64/v3", "linux/386", "linux/arm64", "linux/riscv64", "linux/ppc64le", "linux/s390x", "linux/mips64", "linux/arm/v7"},
max: 4,
expectedList: map[string][]string{
"linux/amd64": {
"linux/amd64/v2*",
"linux/amd64",
"linux/amd64/v3",
},
"linux/arm": {
"linux/arm/v6*",
"linux/arm/v7",
},
"linux/arm64": {
"linux/arm64",
},
"linux/ppc64le": {
"linux/ppc64le",
},
},
expectedOut: "linux/amd64* (+3), linux/arm64, linux/arm* (+2), linux/ppc64le, (5 more)",
},
{
name: "no major with multiple variants",
platforms: []string{"linux/arm64", "linux/arm/v7", "linux/arm/v6", "linux/mips64le/softfloat", "linux/mips64le/hardfloat"},
max: 4,
expectedList: map[string][]string{
"linux/arm": {
"linux/arm/v7",
"linux/arm/v6",
},
"linux/arm64": {
"linux/arm64",
},
"linux/mips64le": {
"linux/mips64le/softfloat",
"linux/mips64le/hardfloat",
},
},
expectedOut: "linux/arm64, linux/arm (+2), linux/mips64le (+2)",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tpfs := truncPlatforms(tt.platforms, tt.max)
assert.Equal(t, tt.expectedList, tpfs.List())
assert.Equal(t, tt.expectedOut, tpfs.String())
})
}
}

commands/prune.go

@@ -3,6 +3,7 @@ package commands
import (
"context"
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
@@ -16,18 +17,23 @@ import (
"github.com/docker/docker/api/types/filters"
"github.com/docker/go-units"
"github.com/moby/buildkit/client"
gateway "github.com/moby/buildkit/frontend/gateway/client"
pb "github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
type pruneOptions struct {
builder string
all bool
filter opts.FilterOpt
keepStorage opts.MemBytes
force bool
verbose bool
builder string
all bool
filter opts.FilterOpt
reservedSpace opts.MemBytes
maxUsedSpace opts.MemBytes
minFreeSpace opts.MemBytes
force bool
verbose bool
}
const (
@@ -105,8 +111,19 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
if err != nil {
return err
}
// check if the client supports newer prune options
if opts.maxUsedSpace.Value() != 0 || opts.minFreeSpace.Value() != 0 {
caps, err := loadLLBCaps(ctx, c)
if err != nil {
return errors.Wrap(err, "failed to load buildkit capabilities for prune")
}
if caps.Supports(pb.CapGCFreeSpaceFilter) != nil {
return errors.New("buildkit v0.17.0+ is required for max-used-space and min-free-space filters")
}
}
popts := []client.PruneOption{
client.WithKeepOpt(pi.KeepDuration, opts.keepStorage.Value()),
client.WithKeepOpt(pi.KeepDuration, opts.reservedSpace.Value(), opts.maxUsedSpace.Value(), opts.minFreeSpace.Value()),
client.WithFilter(pi.Filter),
}
if opts.all {
@@ -131,6 +148,17 @@ func runPrune(ctx context.Context, dockerCli command.Cli, opts pruneOptions) err
return nil
}
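// loadLLBCaps runs a no-op internal build to obtain the LLB capability set advertised by the buildkit daemon.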
func loadLLBCaps(ctx context.Context, c *client.Client) (apicaps.CapSet, error) {
var caps apicaps.CapSet
_, err := c.Build(ctx, client.SolveOpt{
Internal: true,
}, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
caps = c.BuildOpts().LLBCaps
return nil, nil
}, nil)
return caps, err
}
func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options := pruneOptions{filter: opts.NewFilterOpt()}
@@ -142,16 +170,22 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
options.builder = rootOpts.builder
return runPrune(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.Disable,
ValidArgsFunction: completion.Disable,
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
flags.BoolVarP(&options.all, "all", "a", false, "Include internal/frontend images")
flags.Var(&options.filter, "filter", `Provide filter values (e.g., "until=24h")`)
flags.Var(&options.keepStorage, "keep-storage", "Amount of disk space to keep for cache")
flags.Var(&options.filter, "filter", `Provide filter values`)
flags.Var(&options.reservedSpace, "reserved-space", "Amount of disk space always allowed to keep for cache")
flags.Var(&options.minFreeSpace, "min-free-space", "Target amount of free disk space after pruning")
flags.Var(&options.maxUsedSpace, "max-used-space", "Maximum amount of disk space allowed to keep for cache")
flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
flags.Var(&options.reservedSpace, "keep-storage", "Amount of disk space to keep for cache")
flags.MarkDeprecated("keep-storage", "keep-storage flag has been changed to reserved-space")
return cmd
}
@@ -195,6 +229,8 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
case 1:
if filterKey == "id" {
filters = append(filters, filterKey+"~="+values[0])
} else if strings.HasSuffix(filterKey, "!") || strings.HasSuffix(filterKey, "~") {
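// keys such as "type!" or "id~" already carry the first half of the operator, so joining with "=" yields "type!=..." or "id~=..." rather than doubling up with "=="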
filters = append(filters, filterKey+"="+values[0])
} else {
filters = append(filters, filterKey+"=="+values[0])
}
@@ -207,3 +243,55 @@ func toBuildkitPruneInfo(f filters.Args) (*client.PruneInfo, error) {
Filter: []string{strings.Join(filters, ",")},
}, nil
}
func printKV(w io.Writer, k string, v any) {
fmt.Fprintf(w, "%s:\t%v\n", k, v)
}
func printVerbose(tw *tabwriter.Writer, du []*client.UsageInfo) {
for _, di := range du {
printKV(tw, "ID", di.ID)
if len(di.Parents) != 0 {
printKV(tw, "Parent", strings.Join(di.Parents, ","))
}
printKV(tw, "Created at", di.CreatedAt)
printKV(tw, "Mutable", di.Mutable)
printKV(tw, "Reclaimable", !di.InUse)
printKV(tw, "Shared", di.Shared)
printKV(tw, "Size", units.HumanSize(float64(di.Size)))
if di.Description != "" {
printKV(tw, "Description", di.Description)
}
printKV(tw, "Usage count", di.UsageCount)
if di.LastUsedAt != nil {
printKV(tw, "Last used", units.HumanDuration(time.Since(*di.LastUsedAt))+" ago")
}
if di.RecordType != "" {
printKV(tw, "Type", di.RecordType)
}
fmt.Fprintf(tw, "\n")
}
tw.Flush()
}
func printTableHeader(tw *tabwriter.Writer) {
fmt.Fprintln(tw, "ID\tRECLAIMABLE\tSIZE\tLAST ACCESSED")
}
func printTableRow(tw *tabwriter.Writer, di *client.UsageInfo) {
id := di.ID
if di.Mutable {
id += "*"
}
size := units.HumanSize(float64(di.Size))
if di.Shared {
size += "*"
}
lastAccessed := ""
if di.LastUsedAt != nil {
lastAccessed = units.HumanDuration(time.Since(*di.LastUsedAt)) + " ago"
}
fmt.Fprintf(tw, "%-40s\t%-5v\t%-10s\t%s\n", id, !di.InUse, size, lastAccessed)
}

commands/rm.go

@@ -99,7 +99,7 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
var options rmOptions
cmd := &cobra.Command{
Use: "rm [OPTIONS] [NAME] [NAME...]",
Use: "rm [OPTIONS] [NAME...]",
Short: "Remove one or more builder instances",
RunE: func(cmd *cobra.Command, args []string) error {
options.builders = []string{rootOpts.builder}
@@ -111,7 +111,8 @@ func rmCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runRm(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
ValidArgsFunction: completion.BuilderNames(dockerCli),
DisableFlagsInUseLine: true,
}
flags := cmd.Flags()
@@ -150,8 +151,9 @@ func rmAllInactive(ctx context.Context, txn *store.Txn, dockerCli command.Cli, i
return err
}
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
timeoutCtx, cancel := context.WithCancelCause(ctx)
timeoutCtx, _ = context.WithTimeoutCause(timeoutCtx, 20*time.Second, errors.WithStack(context.DeadlineExceeded)) //nolint:govet // no need to manually cancel this context as we already rely on parent
defer func() { cancel(errors.WithStack(context.Canceled)) }()
eg, _ := errgroup.WithContext(timeoutCtx)
for _, b := range builders {

commands/root.go

@@ -1,42 +1,77 @@
package commands
import (
"fmt"
"os"
debugcmd "github.com/docker/buildx/commands/debug"
historycmd "github.com/docker/buildx/commands/history"
imagetoolscmd "github.com/docker/buildx/commands/imagetools"
"github.com/docker/buildx/controller/remote"
"github.com/docker/buildx/util/cobrautil/completion"
"github.com/docker/buildx/util/confutil"
"github.com/docker/buildx/util/logutil"
"github.com/docker/cli-docs-tool/annotation"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli-plugins/plugin"
"github.com/docker/cli/cli/command"
"github.com/docker/cli/cli/debug"
cliflags "github.com/docker/cli/cli/flags"
"github.com/moby/buildkit/util/appcontext"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Command {
const experimentalCommandHint = `Experimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.`
func NewRootCmd(name string, isPlugin bool, dockerCli *command.DockerCli) *cobra.Command {
var opt rootOptions
cmd := &cobra.Command{
Short: "Docker Buildx",
Long: `Extended build capabilities with BuildKit`,
Use: name,
Annotations: map[string]string{
annotation.CodeDelimiter: `"`,
"additionalHelp": func() string {
if !confutil.IsExperimental() {
return experimentalCommandHint
}
return ""
}(),
},
CompletionOptions: cobra.CompletionOptions{
HiddenDefaultCmd: true,
},
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if opt.debug {
debug.Enable()
}
cmd.SetContext(appcontext.Context())
if !isPlugin {
return nil
// InstallFlags and SetDefaultOptions are necessary to match
// the plugin-mode behavior for env vars such as
// DOCKER_TLS, DOCKER_TLS_VERIFY, ... We also need to use a
// new flagset to avoid a conflict with the global debug flag
// that we already handle in the root command; otherwise it
// would panic.
nflags := pflag.NewFlagSet(cmd.DisplayName(), pflag.ContinueOnError)
options := cliflags.NewClientOptions()
options.InstallFlags(nflags)
options.SetDefaultOptions(nflags)
return dockerCli.Initialize(options)
}
return plugin.PersistentPreRunE(cmd, args)
},
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return cmd.Help()
}
_ = cmd.Help()
return cli.StatusError{
StatusCode: 1,
Status: fmt.Sprintf("ERROR: unknown command: %q", args[0]),
}
},
DisableFlagsInUseLine: true,
}
if !isPlugin {
// match plugin behavior for standalone mode
@@ -44,12 +79,8 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
cmd.SilenceUsage = true
cmd.SilenceErrors = true
cmd.TraverseChildren = true
cmd.DisableFlagsInUseLine = true
cli.DisableFlagsInUseLine(cmd)
// DEBUG=1 should perform the same as --debug at the docker root level
if debug.IsEnabled() {
debug.Enable()
if !confutil.IsExperimental() {
cmd.SetHelpTemplate(cmd.HelpTemplate() + "\n" + experimentalCommandHint + "\n")
}
}
@@ -63,20 +94,16 @@ func NewRootCmd(name string, isPlugin bool, dockerCli command.Cli) *cobra.Comman
"using default config store",
))
if !isExperimental() {
cmd.SetHelpTemplate(cmd.HelpTemplate() + "\nExperimental commands and flags are hidden. Set BUILDX_EXPERIMENTAL=1 to show them.\n")
}
addCommands(cmd, dockerCli)
addCommands(cmd, &opt, dockerCli)
return cmd
}
type rootOptions struct {
builder string
debug bool
}
func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
opts := &rootOptions{}
func addCommands(cmd *cobra.Command, opts *rootOptions, dockerCli command.Cli) {
rootFlags(opts, cmd.PersistentFlags())
cmd.AddCommand(
@@ -94,13 +121,12 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
versionCmd(dockerCli),
pruneCmd(dockerCli, opts),
duCmd(dockerCli, opts),
imagetoolscmd.RootCmd(dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
imagetoolscmd.RootCmd(cmd, dockerCli, imagetoolscmd.RootOptions{Builder: &opts.builder}),
historycmd.RootCmd(cmd, dockerCli, historycmd.RootOptions{Builder: &opts.builder}),
)
if isExperimental() {
cmd.AddCommand(debugcmd.RootCmd(dockerCli,
newDebuggableBuild(dockerCli, opts),
))
remote.AddControllerCommands(cmd, dockerCli)
if confutil.IsExperimental() {
cmd.AddCommand(debugCmd(dockerCli, opts))
cmd.AddCommand(dapCmd(dockerCli, opts))
}
cmd.RegisterFlagCompletionFunc( //nolint:errcheck
@@ -111,4 +137,5 @@ func addCommands(cmd *cobra.Command, dockerCli command.Cli) {
func rootFlags(options *rootOptions, flags *pflag.FlagSet) {
flags.StringVar(&options.builder, "builder", os.Getenv("BUILDX_BUILDER"), "Override the configured builder instance")
flags.BoolVarP(&options.debug, "debug", "D", debug.IsEnabled(), "Enable debug logging")
}

commands/root_test.go (new file)

@@ -0,0 +1,33 @@
package commands
import (
stderrs "errors"
"testing"
"github.com/docker/cli/cli/command"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"github.com/stretchr/testify/require"
)
func TestDisableFlagsInUseLineIsSet(t *testing.T) {
cmd, err := command.NewDockerCli()
require.NoError(t, err)
rootCmd := NewRootCmd("buildx", true, cmd)
var errs []error
visitAll(rootCmd, func(c *cobra.Command) {
if !c.DisableFlagsInUseLine {
errs = append(errs, errors.New("DisableFlagsInUseLine is not set for "+c.CommandPath()))
}
})
err = stderrs.Join(errs...)
require.NoError(t, err)
}
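// visitAll walks the command tree depth-first, invoking fn for every subcommand and finally for root itself.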
func visitAll(root *cobra.Command, fn func(*cobra.Command)) {
for _, cmd := range root.Commands() {
visitAll(cmd, fn)
}
fn(root)
}

commands/stop.go

@@ -44,7 +44,8 @@ func stopCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
return runStop(cmd.Context(), dockerCli, options)
},
ValidArgsFunction: completion.BuilderNames(dockerCli),
ValidArgsFunction: completion.BuilderNames(dockerCli),
DisableFlagsInUseLine: true,
}
return cmd

Some files were not shown because too many files have changed in this diff.