Compare commits

...

204 Commits

Author SHA1 Message Date
Fan Shang f7d513844d Remove mirrors configuration
Signed-off-by: Fan Shang <2444576154@qq.com>
2025-08-05 10:38:09 +08:00
Baptiste Girard-Carrabin 29dc8ec5c8 [registry] Accept empty scope during token auth challenge
The distribution spec (https://distribution.github.io/distribution/spec/auth/scope/#authorization-server-use) mentions that the access token provided during the auth challenge "may include a scope", which means that it is not necessary to have one to comply with the spec.
Additionally, this is something that is already accepted by containerd which will simply log a warning when no scope is specified: https://github.com/containerd/containerd/blob/main/core/remotes/docker/auth/fetch.go#L64
To match with what containerd and the spec suggest, the commit modifies the `parse_auth` logic to accept an empty `scope` field. It also logs the same warning as containerd.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-07-31 20:28:47 +08:00
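A minimal sketch of the behavior described in the commit above, assuming the `log` crate; the function name and parameter shape are illustrative assumptions, not nydus's actual `parse_auth` API:
```rust
use std::collections::HashMap;

// Accept a missing/empty `scope` in the token auth challenge and just log
// a warning, mirroring containerd's behavior.
fn parse_scope(challenge_params: &HashMap<String, String>) -> String {
    match challenge_params.get("scope") {
        Some(s) if !s.is_empty() => s.clone(),
        _ => {
            log::warn!("no scope specified for token auth challenge");
            String::new()
        }
    }
}
```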
imeoer 7886e1868f storage: fix redirect in registry backend
To fix https://github.com/dragonflyoss/nydus/issues/1720

Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-31 11:49:44 +08:00
Peng Tao e1dffec213 api: increase error.rs UT coverage
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao cc62dd6890 github: add project common copilot instructions
Copilot generated with slight modification.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao d140d60bea rafs: increase UT coverage for cached_v5.rs
Copilot generated.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao f323c7f6e3 gitignore: ignore temp files generated by UTs
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao 5c8299c7f7 service: skip init fscache test if cachefiles is unavailable
Also skip the test for non-root users.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Jack Decker 14c0062cee Make filesystem sync operation fatal on failure
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
Jack Decker d3bbc3e509 Add filesystem sync in both container and host namespaces before pausing container for commit to ensure all changes are flushed to disk.
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
imeoer 80f80dda0e cargo: bump crates version
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-08 10:38:27 +08:00
Yang Kaiyong a26c7bf99c test: support miri for unit test in actions
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-07-04 10:17:32 +08:00
imeoer 72b1955387 misc: add issue / PR stale workflow
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-06-18 10:38:00 +08:00
ymy d589292ebc feat(nydusify): After converting the image, if the push operation fails, increase the number of retries.
Signed-off-by: ymy <ymy@zetyun.com>
2025-06-17 17:11:38 +08:00
Zephyrcf 344a208e86 Make ssl fallback check case-insensitive
Signed-off-by: Zephyrcf <zinsist77@gmail.com>
2025-06-12 19:03:49 +08:00
imeoer 9645820222 docs: add MAINTAINERS doc
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-05-30 18:40:33 +08:00
Baptiste Girard-Carrabin d36295a21e [registry] Modify TokenResponse instead
Apply GitHub review comment.
Use `serde(default)` in TokenResponse to get the same behavior as Option<String> without changing the struct signature.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
Baptiste Girard-Carrabin c048fcc45f [registry] Fix auth token parsing for access_token
Extend auth token parsing to support token in different json fields.
There is no real consensus on the OAuth2 token response format, which means that each registry can implement its own. In particular, Azure ACR uses `access_token` as described here https://github.com/Azure/acr/blob/main/docs/Token-BasicAuth.md#get-a-pull-access-token-for-the-user. As such, when attempting to parse the JSON response containing the authorization token, we should attempt to deserialize using either `token` or `access_token` (and potentially more fields in the future if needed).
To not break the integration with existing registries, the behavior is to fall back to `access_token` only if `token` does not exist in the response.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
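A sketch combining the two commits above: `serde(default)` keeps the struct signature, and a getter falls back to `access_token` only when the standard `token` field is absent or empty (the struct and method shown here are illustrative, not the exact nydus code):
```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct TokenResponse {
    #[serde(default)]
    token: String,
    #[serde(default)]
    access_token: String,
}

impl TokenResponse {
    // Prefer the standard `token` field; fall back to `access_token`
    // (used by e.g. Azure ACR) when `token` is missing or empty.
    fn token(&self) -> &str {
        if self.token.is_empty() {
            &self.access_token
        } else {
            &self.token
        }
    }
}
```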
Baptiste Girard-Carrabin 67bf8b8283 [storage] Modify redirect policy to follow 10 redirects
From 2378d074fe (diff-c9f1f654cf0ba5d46a4ed25d8bb0ea22c942840c6693d31927a9fd912bcb9456R125-R131)
it seems that the redirect policy of the http client has always been to not follow redirects. However, this means that pulling blobs from registries which issue redirects does not work. This is the case, for instance, on GCP's former container registries that were migrated to artifact registries.
Additionally, containerd's behavior is to follow up to 10 redirects https://github.com/containerd/containerd/blob/main/core/remotes/docker/resolver.go#L596 so it makes sense to use the same value.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-27 18:54:04 +08:00
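A minimal sketch, assuming the reqwest crate (blocking client) is the HTTP client in question: follow up to 10 redirects instead of none, matching containerd's limit.
```rust
fn build_client() -> reqwest::Result<reqwest::blocking::Client> {
    reqwest::blocking::Client::builder()
        // Follow at most 10 redirects, like containerd's resolver.
        .redirect(reqwest::redirect::Policy::limited(10))
        .build()
}
```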
Peng Tao d74629233b readme: add deepwiki reference
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-04-27 18:53:16 +08:00
Yang Kaiyong 21206e75b3 nydusify(refactor): handle layer with retry
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-23 11:04:54 +08:00
Yan Song c288169c1a action: add free-disk-space job
Try to fix the broken CI: https://github.com/dragonflyoss/nydus/actions/runs/14569290750/job/40863611290
It might be due to insufficient disk space.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-04-23 10:28:06 +08:00
Yang Kaiyong 23fdda1020 nydusify(feat): support for specifying log file and concurrently processing external model manifests
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-21 15:16:57 +08:00
Yang Kaiyong 9b915529a9 nydusify(feat): add crc32 in file attributes
Read CRC32 from external models' manifest and pass it to builder.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 18:30:18 +08:00
Yang Kaiyong 96c3e5569a nydus-image: only add crc32 flag in chunk level
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
Yang Kaiyong 44069d6091 feat: support crc32 validation when validating chunks
- Add CRC32 algorithm implementation with the crc-rs crate.
- Introduce a crc_enable option to the nydus builder.
- Support generating CRC32 checksums when building images.
- Support validating CRC32 for both normal and external chunks.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
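A sketch of chunk CRC32 generation/validation with the crc-rs crate named in the commit above; the exact polynomial nydus picked is an assumption here:
```rust
use crc::{Crc, CRC_32_ISO_HDLC};

// Standard CRC32 (zlib polynomial); assumed, not confirmed by the commit.
const CRC32: Crc<u32> = Crc::<u32>::new(&CRC_32_ISO_HDLC);

fn chunk_crc32(data: &[u8]) -> u32 {
    CRC32.checksum(data)
}

fn validate_chunk(data: &[u8], expected: u32) -> bool {
    chunk_crc32(data) == expected
}
```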
Yang Kaiyong 31c8e896f0 chore: fix cargo-deny check failed
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-16 19:39:21 +08:00
Yang Kaiyong 8593498dbd nydusify: remove nydusd code which is work in progress
- Remove the unready nydusd (runtime) implementation.
- Remove the debug code.
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 6161868e41 builder: support building external model images from modctl
builder: add support for building external model images from modctl in a local
context or remote registry.

feat(nydusify): add support for mount external large model images

chore: introduce GoReleaser for RPM package generation

nydusify(feat): add support for model image in check command

nydusify(test): add support for binary-based testing in external model's smoke tests

Signed-off-by: Yan Song <yansong.ys@antgroup.com>

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 871e1c6e4f chore(smoke): fix broken CI in smoke test
Run `rustup run stable cargo` instead of `cargo` to explicitly specify the toolchain.

`nextest` fails due to symlink resolution with the new rustup v1.28.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-03-25 18:23:18 +08:00
Yan Song 8c0925b091 action: fix bootstrap path for fsck.erofs check
The output bootstrap path has been changed in the nydusify
check subcommand.

Related PR: https://github.com/dragonflyoss/nydus/pull/1652

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song baadb3990d misc: remove centos image from image conversion CI
The centos image has been deprecated on Docker Hub, so we can't
pull it in the "Convert & Check Images" CI pipeline.

See https://hub.docker.com/_/centos

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song bd2123f2ed smoke: add v0.1.0 nydusd into native layer cases
To check the compatibility between the newer builder and old nydusd.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song c41ac4760d builder: remove redundant blobs for merge subcommand
After merging all trees, we need to re-calculate the blob index of
referenced blobs, as the upper tree might have deleted some files
or directories by opaques, and some blobs are dereferenced.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song 7daa0a3cd9 nydusify: refactor check subcommand
- allow either the source or target to be an OCI or nydus image;
- improve output directory structure and log format;

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 17:45:50 +08:00
ymy 7e5147990c feat(nydusify): support a short container id when committing a container
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
ymy 36382b54dd Optimize: Improve code style in push lower blob section
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
yumy 8b03fd7593 fix: nydusify golang ci arg
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
ymy 76651c319a nydusify: fix the issue of blob not found when modifying image name during commit
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
Yang Kaiyong 91931607f8 fix(nydusd): fix parsing of failover-policy argument
Use `inspect_err` instead of `inspect` to correctly handle and log
errors when parsing the `failover-policy` argument.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-24 11:25:26 +08:00
Yan Song dd9ba54e33 misc: remove goproxy.io for go build
The goproxy.io service is unstable for now and affects
the GitHub CI, so let's remove it.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yan Song 09b81c50b4 nydusify: fix layer push retry for copy subcommand
Add a push retry mechanism to enhance the success rate of image copy
when a single layer copy fails.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yang Kaiyong 3beb9a72d9 chore: bump deps to address rustsec warning
- Bump vm-memory to 1.14.1, vmm-sys-util to 0.12.1 and vhost to 0.11.0.
- Bump cargo-deny-action version from v1 to v2 in workflows.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-11 20:29:22 +08:00
Yang Kaiyong 3c10b59324 chore: comment the unused code to address clippy error
The backend-oss feature is never enabled, so comment out the test code.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong bf17d221d6 fix: Support building rafs without the dedup feature
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong ee5ef64cdd chore: pass rust version to build docker container in CI
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 05ea41d159 chore: specify the rust version to 1.84.0 and enable docker cache
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 4def4db396 chore: fix the broken CI on riscv64
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong d48d3dbdb3 chore: bump rust version to 1.84.0 and update deps to resolve cargo deny check failures
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Kostis Papazafeiropoulos f60e40aafa fix(blobfs): Use correct result types for `open` and `create`
Use the correct result types for `open` and `create` expected by the
`fuse_backend_rs` 0.12.0 `Filesystem` trait

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Kostis Papazafeiropoulos 83fa946897 build(rafs): Add missing `dedup` feature for `storage` crate dependency
Fix `rafs` build by adding missing `dedup` feature for `storage` crate
dependency

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Gaius 365f13edcf chore: rename repo Dragonfly2 to dragonfly
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-12-20 17:09:10 +08:00
Lin Wang e23d5bc570 fix: resolve Algorithm to_string and FromStr inconsistency (dragonflyoss#1644, #1651)
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-12-16 20:39:08 +08:00
Liu Bo acdf021ec9 rafs: fix typo
Fix an invalid info! usage.

Signed-off-by: Liu Bo <liub.liubo@gmail.com>
2024-12-13 14:40:50 +08:00
Xing Ma b175fc4baa nydusify: introduce optimize subcommand of nydusify
We can statically analyze the image entrypoint dependencies, or use runtime dynamic
analysis technologies such as eBPF, fanotify, metrics, etc. to obtain the container
file access pattern, and then build this part of the data into an independent image layer:

* preferentially fetch the blob during the image startup phase to reduce network and disk IO.
* avoid frequent image builds, allowing for better local cache utilization.

Implement the optimize subcommand of nydusify to generate a new image, which references a new
blob containing the prefetched file chunks.
```
nydusify optimize --policy separated-prefetch-blob \
	--source $existed-nydus-image \
	--target $new-nydus-image \
	--prefetch-files /path/to/prefetch-files
```

The more detailed process is as follows:
1. nydusify first downloads the source image and bootstrap, and utilizes nydus-image to output a
new bootstrap along with an independent prefetch blob;
2. nydusify generates & pushes a new meta layer including the new bootstrap and the prefetch-files,
and also generates & pushes the new manifest/config/prefetchblob, completing the incremental image build.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
Xing Ma 8edc031a31 builder: Enhance optimize subcommand for prefetch
Major changes:
1. Added compatibility for rafs v5/v6 formats;
2. Set the IS_SEPARATED_WITH_PREFETCH_FILES flag in BlobInfo for the prefetch blob;
3. Add an output-json option to store the build output.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
pyq bb4744c7fb docs: fix docker-env-setup.md
Signed-off-by: pyq <eilo.pengyq@gmail.com>
2024-12-04 10:10:26 +08:00
Dai Yongxuan 375f55f32e builder: introduce optimize subcommand for prefetch
We can statically analyze the image entrypoint dependencies, or use runtime dynamic
analysis technologies such as eBPF, fanotify, metrics, etc. to obtain the container
file access pattern, and then build this part of the data into an independent image layer:

* preferentially fetch the blob during the image startup phase to reduce network and disk IO.
* avoid frequent image builds, allowing for better local cache utilization.

Implement the optimize subcommand to optimize an image bootstrap
from a prefetch file list and generate a new blob.

```
nydus-image optimize --prefetch-files /path/to/prefetch-files.txt \
  --bootstrap /path/to/bootstrap \
  --blob-dir /path/to/blobs
```
This will generate a new bootstrap and new blob in `blob-dir`.

Signed-off-by: daiyongxuan <daiyongxuan20@mails.ucas.ac.cn>
2024-10-29 14:52:17 +08:00
abushwang a575439471 fix: correct some typos about nerdctl image rm
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 16:11:22 +08:00
abushwang 4ee6ddd931 fix: correct some typos in nydus-fscache.md
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 15:05:32 +08:00
Yadong Ding 57c112a998 smoke: add smoke test for cas and chunk dedup
Add smoke test cases for CAS and chunk dedup.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu b9ba409f13 docs: add documentation for cas
Add documentation for cas.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 2387fe8217 storage: enable chunk deduplication for file cache
Enable chunk deduplication for the file cache. It works in this way:
- When a chunk is not in the blob cache file yet, query the CAS database
  for whether other blob data files have the required chunk. If there's
  a duplicated data chunk in other data files, copy the chunk data
  into the current blob cache file by using copy_file_range().
- After downloading a data chunk from remote, save file/offset/chunk-id
  into the CAS database, so it can be reused later.

Co-authored-by: Jiang Liu <gerry@linux.alibaba.com>
Co-authored-by: Yadong Ding <ding_yadong@foxmail.com>
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
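A self-contained toy model of the dedup decision described in the commit above; the real code uses a sqlite-backed CAS database and copy_file_range(), a HashMap stands in here purely for illustration:
```rust
use std::collections::HashMap;

enum Source {
    CopiedFromLocalBlob,  // duplicate found: copy_file_range() in the real code
    DownloadedFromRemote, // miss: download, then record for later reuse
}

// The map is a stand-in for the CAS database: chunk-id -> (file, offset).
fn get_chunk(
    cas: &mut HashMap<String, (String, u64)>,
    chunk_id: &str,
    cache_file: &str,
    offset: u64,
) -> Source {
    if cas.contains_key(chunk_id) {
        Source::CopiedFromLocalBlob
    } else {
        // Record file/offset/chunk-id so the chunk can be reused later.
        cas.insert(chunk_id.to_string(), (cache_file.to_string(), offset));
        Source::DownloadedFromRemote
    }
}
```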
Yadong Ding 4b1fd55e6e storage: add garbage collection in CasMgr
- Changed `delete_blobs` method in `CasDb` to take an immutable reference (`&self`) instead of a mutable reference (`&mut self`).
- Updated `dedup_chunk` method in `CasMgr` to correctly handle the deletion of non-existent blob files from both the file descriptor cache and the database.
- Implemented the `gc` (garbage collection) method in `CasMgr` to identify and remove blobs that no longer exist on the filesystem, ensuring the database and cache remain consistent.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 45e07eab3d storage: implement CasManager to support chunk dedup at runtime
Implement CasManager to support chunk dedup at runtime.
The manager provides two major interfaces:
- add chunk data to the CAS database
- check whether a chunk exists in the CAS database and copy it to the blob
  file by copy_file_range() if the chunk exists.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 51a6045d74 storage: improve copy_file_range
- improve copy_file_range when the target OS is not Linux
- add more comprehensive tests

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 7d1c2e635a storage: add helper copy_file_range
Add helper copy_file_range() which:
- avoids copying data into userspace
- may support reflink on XFS, etc.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
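A minimal Linux-only sketch of such a helper over `libc::copy_file_range()` (the wrapper name and signature are assumptions, not the actual nydus helper); the in-kernel copy avoids bouncing data through userspace and can use reflink on filesystems such as XFS:
```rust
#[cfg(target_os = "linux")]
fn copy_range(
    fd_in: i32,
    mut off_in: i64,
    fd_out: i32,
    mut off_out: i64,
    len: usize,
) -> std::io::Result<usize> {
    // Copy len bytes between the two fds entirely inside the kernel.
    let ret = unsafe {
        libc::copy_file_range(fd_in, &mut off_in, fd_out, &mut off_out, len, 0)
    };
    if ret < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(ret as usize)
    }
}
```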
Mike Hotan 15ec192e3d Nydusify `localfs` support
Signed-off-by: Mike Hotan <mike@union.ai>
2024-10-17 09:42:59 +08:00
Yadong Ding da2510b6f5 action: bump macos-13
The macOS 12 Actions runner image will begin deprecation on 10/7/24.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 18:35:50 +08:00
Yadong Ding 47025395fa lint: bump golangci-lint v1.61.0 and fix lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yadong Ding 678b44ba32 rust: upgrade to 1.75.0
1. reduce the binary size.
2. use more rust-clippy lints.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yifan Zhao 7c498497fb nydusify: modify compact interface
This patch modifies the compact interface to meet the change in
nydus-image.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao 1ccc603525 nydus-image: modify compact interface
This commit uses compact parameter directly  insteadof compact config
file in the cli interface. It also fix a bug where chunk key for
ChunkWrapper::Ref is not generated correctly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao a4683baa1e rafs: fix bug in InodeWrapper::is_sock()
We incorrectly use is_dir() to check if a file is a socket. This patch
fixes it.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-09-27 12:35:14 +08:00
Yadong Ding 9f439ab404 bats: use nerdctl replace ctr-remote
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:23:19 +08:00
Yadong Ding 0c0ba2adec chore: remove contrib/ctr-remote
Nerdctl is more useful than `ctr-remote`, so deprecate the latter.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:23:19 +08:00
Yadong Ding c5ef5c97a4 chore: keep smoke test component latest version
- Use the latest `nerdctl`, `nydus-snapshotter`, and `cni` in the smoke test env.
- Delete `misc/takeover/snapshotter_config.toml`; use the modified `misc/performance/snapshotter_config.toml` when testing.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:11:08 +08:00
Yadong Ding 37a7b96412 nydusctl: fix build version info
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-20 17:32:55 +08:00
Yadong Ding 742954eb2c tests: change asserts of test_worker_mgr_rate_limiter
assert_eq!(mgr.prefetch_inflight.load(Ordering::Acquire), 3); and assert!(mgr.prefetch_inflight.load(Ordering::Acquire) <= 2); sometimes failed.
The reason is the worker threads may have already started processing the requests and decreased the counter before the main thread checks it.

- change assert_eq!(mgr.prefetch_inflight.load(Ordering::Acquire), 3); to assert!(mgr.prefetch_inflight.load(Ordering::Acquire) <= 3);
- increase thread::sleep(Duration::from_secs(1)); to thread::sleep(Duration::from_secs(2));

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 20:30:27 +08:00
Yadong Ding 849591afa9 feat: add retry mechanism in read blob metadata
When reading the blob size from blob metadata, we should retry reading from the remote if an error occurs.
Also set the max retry count to 3.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 15:12:04 +08:00
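An illustrative retry loop for the behavior above; the retry count of 3 matches the commit message, everything else (names, the closure shape) is an assumption:
```rust
fn read_blob_size_with_retry<F>(mut read: F) -> std::io::Result<u64>
where
    F: FnMut() -> std::io::Result<u64>,
{
    const MAX_RETRIES: usize = 3;
    let mut last_err = None;
    for _ in 0..MAX_RETRIES {
        match read() {
            Ok(size) => return Ok(size),
            // Remember the error and try again, up to MAX_RETRIES times.
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.unwrap())
}
```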
Yadong Ding e8a4305773 chore: bump go lint action v6 and version 1.61.0
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 15:04:16 +08:00
Yadong Ding 7fc9edeec5 chore: change nydus snapshotter work dir
- use /var/lib/containerd/io.containerd.snapshotter.v1.nydus
- bump nydus snapshotter v1.14.0

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-18 11:13:22 +08:00
Yadong Ding f4fb04a50f lint: remove unused fieldsPath
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-18 09:18:12 +08:00
dependabot[bot] 481a63b885 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.5+incompatible to 25.0.6+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.5...v25.0.6)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-16 20:23:59 +08:00
BruceAko 9b4c272d78 fix: add tests for nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko 30d53c3f25 fix: add a doc about nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko 309feab765 fix: add getLocalPath() and close decompressor
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko a1ceb176f4 feat: support local tarball for nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
Jiancong Zhu 6106fbc539 refactor: fix an unnecessary mutex lock operation
Signed-off-by: Jiancong Zhu <Chasing1020@gmail.com>
2024-09-12 18:26:26 +08:00
Yifan Zhao d89410f3fc nydus-image: refactor unpack/compact cli interface
Since the unpack and compact subcommands do not need the entire nydusd
configuration file, let's refactor their cli interface to directly
take a backend configuration file.

Specifically, we introduce `--backend-type`, `--backend-config` and
`--backend-config-file` options to specify the backend type and remove
`--config` option.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>

Fixes: #1602
2024-09-10 14:33:51 +08:00
Yifan Zhao 36fe98b3ac smoke: fix invalid cleanup issue in main_test.go
The cleanup of the new registry is invalid, as TestMain() calls os.Exit()
and will not run deferred functions. This patch fixes the issue by
doing the cleanup explicitly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-09-10 14:33:51 +08:00
fappy1234567 114ec880a2 smoke: add mount api test case
Signed-off-by: fappy1234567 <2019gexinlei@bupt.edu.cn>
2024-08-30 15:36:59 +08:00
Yan Song 3eb5c7b5ef nydusify: small improvements for mount & check subcommands
- Add `--prefetch` option for enabling full image data prefetch.
- Support `HTTP_PROXY` / `HTTPS_PROXY` env for enabling proxy for nydusd.
- Change nydusd log level to `warn` for mount & check subcommands.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-08-28 11:07:26 +08:00
Yadong Ding 52ed07b4cf deny: ignore RUSTSEC-2024-0357
openssl 0.10.55 can't build on riscv64 and ppc64le.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-08-08 14:42:44 +08:00
Yan Song a6bd8ccb8d smoke: add nydusd hot upgrade test case
The test case in hot_upgrade_test.go is different from takeover_test.go:
it does not depend on the snapshotter component.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-07-22 09:22:47 +08:00
Yan Song 642571236d smoke: refactor nydusd methods for testing
Rename and add some methods on the nydusd struct, for easier control
of the nydusd process.

And support SKIP_CASES env to allow skipping some cases.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-07-22 09:22:47 +08:00
Yadong Ding 32b6ead5ec action: fix upload-coverage-to-codecov with secret
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-07-15 09:32:19 +08:00
Yadong Ding c92fe6512f action: upgrade macos to 12
macos-11 has been deprecated since 2024-06-28.
https://docs.github.com/actions/using-jobs/choosing-the-runner-for-a-job

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-07-15 09:32:19 +08:00
BruceAko 3684474254 fix: rename mirrors' check_pause_elapsed to health_check_pause_elapsed
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-11 09:13:11 +08:00
BruceAko cd24506d43 feat: skip health check if connection is not active
1. Add a last_active field to Connection. When Connection.call() is called, last_active is updated to the current timestamp.
2. Add a check_pause_elapsed field to ProxyConfig and MirrorConfig. A connection is considered inactive if the time since last_active exceeds check_pause_elapsed.
3. In the proxy's and mirror's health-checking thread loops, if the connection is not active (exceeds check_pause_elapsed), that round of health check is skipped.
4. Update the document.

Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-11 09:13:11 +08:00
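A hypothetical shape of the last_active bookkeeping described above (field and method names are assumptions): call() stamps the connection, and the health-check loop skips connections idle longer than check_pause_elapsed.
```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

struct Connection {
    last_active: AtomicU64, // unix seconds
}

fn unix_now() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

impl Connection {
    // Called from Connection.call(): record the last activity time.
    fn touch(&self) {
        self.last_active.store(unix_now(), Ordering::Relaxed);
    }

    // The health-check loop skips this round when the connection is idle.
    fn is_active(&self, check_pause_elapsed: u64) -> bool {
        unix_now().saturating_sub(self.last_active.load(Ordering::Relaxed))
            <= check_pause_elapsed
    }
}
```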
YuQiang 19b09ed12f fix: add namespace flag for nydusify commit.
Signed-off-by: YuQiang <yu_qiang@mail.nwpu.edu.cn>
2024-07-09 18:15:25 +08:00
BruceAko da5d423b8c fix: correct some typos in Nydusify
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-09 18:14:16 +08:00
Lin Wang 455c856aa8 nydus-image: add documentation for chunk-level deduplication
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
Lin Wang 5dec7536fa nydusify: add chunkdict generate command and corresponding tests
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
Lin Wang 087c0b1baf nydus-image: Add support for chunkdict generation
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
泰友 332f3dd456 fix: compatibility with images without ext table for blob cache
There are scenarios where the cache file is smaller than the expected size. Such as:

    1. Nydusd 1.6 generates the cache file by prefetch, which is smaller than the size in the boot.
    2. Nydusd 2.2 generates the cache file by prefetch, when the image does not provide ext blob tables.
    3. Nydusd does not have enough time to fill the cache for the blob.

    The equality check on size is too strict for both 1.6
    compatibility and 2.2 concurrency. This PR ensures the blob size is smaller
    than or equal to the expected size. It also truncates the blob cache when it
    is smaller than the expected size.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
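A sketch of the relaxed size check described above (function and error wording are illustrative assumptions): accept a cache smaller than the expected size and extend it, but still reject a cache larger than expected.
```rust
fn check_cache_size(cache: &std::fs::File, expected: u64) -> std::io::Result<()> {
    let actual = cache.metadata()?.len();
    if actual > expected {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            "blob cache larger than expected size",
        ));
    }
    if actual < expected {
        // Pad the short cache up to the expected size.
        cache.set_len(expected)?;
    }
    Ok(())
}
```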
泰友 7cf2d4a2d7 fix: bad read by wrong data region
User IO may involve discontinuous segments in different chunks. A bad
    read is produced by merging them into a continuous one, which is what
    Region does. This PR separates discontinuous segments into different
    regions, avoiding forced merging.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
泰友 64dddd2d2b fix: residual fuse mountpoint after graceful shutdown
1. Case 1: The FUSE server exits in a thread other than main. There is a
       possibility that the process finishes before the shutdown of the server.
    2. Case 2: The FUSE server exits in the state machine thread. There is a
       possibility that the state machine does not respond to the signal-catching
       thread; then a deadlock happens and the process exits before the shutdown
       of the server.

    This PR separates shutdown actions from the signal-catch
    handler. It only notifies the controller, and the controller exits with the
    shutdown of the FUSE server. No race. No deadlock.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
Yan Song de7cfc4088 nydusify: upgrade acceleration-service v0.2.14
To bring the fixup: https://github.com/goharbor/acceleration-service/pull/290

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-06-06 10:18:45 +08:00
Yadong Ding 79a7015496 chore: upgrade components version in test env
1. Upgrade cni to v1.5.0 and try to fix the error in TestCommit.
2. Upgrade nerdctl to v1.7.6.
3. Upgrade nydus-snapshotter to v0.13.13 and fix a path error.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-06 09:56:26 +08:00
BruceAko 3b9b0d4588 fix: correct some typos and grammatical problem
Signed-off-by: chongzhi <chongzhi@hust.edu.cn>
2024-06-06 09:55:11 +08:00
Yadong Ding 7ea510b237 docs: fix incorrect file path
https://github.com/containerd/nydus-snapshotter/blob/main/misc/snapshotter/config.toml#L27
In the snapshotter config, the nydusd config file path is /etc/nydus/nydusd-config.fusedev.json.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-06 09:50:40 +08:00
dependabot[bot] 34ab06b6b3 build(deps): bump golang.org/x/net in /contrib/ctr-remote
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 16:32:26 +08:00
dependabot[bot] 9483286863 build(deps): bump golang.org/x/net in /contrib/nydusify
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 15:56:24 +08:00
Yadong Ding 13a9aa625b fix: downgraded to codecov/codecov-action@v4.0.0
codecov/codecov-action@v4 is unstable.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-04 15:59:46 +08:00
Yadong Ding 305a418b31 fix: upload-coverage failed in master
When the action doesn't run on a pull request, Codecov GitHub Action v4 needs a token.
Reference:
1. https://github.com/codecov/codecov-action?tab=readme-ov-file#breaking-changes
2. https://docs.codecov.com/docs/codecov-uploader#supporting-token-less-uploads-for-forks-of-open-source-repos-using-codecov

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-04 15:18:48 +08:00
Qinqi Qu 4a16402120 action: bump codecov-action to v4
To solve the problem of CI failure.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-17 16:39:48 +08:00
Qinqi Qu 1d1691692c deps: update indexmap from v1 to v2
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu d1dfe7bd65 backend-proxy: refactor to support latest versions of crates
Also fix some security alerts of Dependabot:
1. https://github.com/advisories/GHSA-q6cp-qfwq-4gcv
2. https://github.com/advisories/GHSA-8r5v-vm4m-4g25
3. https://github.com/advisories/GHSA-g98v-hv3f-hcfr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu 3b2a0c0bcc deps: remove dependency on atty
The atty crate is not maintained, so flexi_logger and clap are updated
to remove the dependency on atty.

Fix: https://github.com/advisories/GHSA-g98v-hv3f-hcfr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu 9826b2cc3f bats test: add a backup image to avoid network errors
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-09 17:32:28 +08:00
dependabot[bot] 260a044c6e build(deps): bump h2 from 0.3.24 to 0.3.26
Bumps [h2](https://github.com/hyperium/h2) from 0.3.24 to 0.3.26.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.26/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.24...v0.3.26)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 15:27:13 +08:00
dependabot[bot] e926d2ff9c build(deps): bump google.golang.org/protobuf in /contrib/nydusify
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-31 11:36:18 +08:00
dependabot[bot] fc52ebc7a1 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.3+incompatible to 25.0.5+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.3...v25.0.5)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-29 17:05:58 +08:00
YuQiang af914dd1a5 fix: modify benchmark prepare bash path
Correct the performance test prepare bash file path.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-26 10:02:52 +08:00
Adolfo Ochagavía 2308efa6f7 Add compression method support to zran docs
Signed-off-by: Adolfo Ochagavía <github@adolfo.ochagavia.nl>
2024-03-25 17:38:44 +08:00
Wei Zhang 9ae8e3a7b5 overlay: add overlay implementation
With the help of the newly introduced overlay filesystem in the `fuse-backend-rs`
library, we can now create a writable rootfs in Nydus. The implementation of the
writable rootfs is based on a passthrough FS (as the upper layer) over a
readonly rafs (as the lower layer).

To do so, configuration is extended with some Overlay options.

Signed-off-by: Wei Zhang <weizhang555.zw@gmail.com>
2024-03-15 14:15:54 +08:00
YuQiang 3dfa9e9776 docs: add doc for nydus-image check command
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 11:10:46 +08:00
YuQiang f10782c79d docs: add doc for nydusify commit command
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:33:02 +08:00
YuQiang ae842f9b8b action: merge and move prepare.sh
Remove misc/performance/prepare.sh and merge it into misc/prepare.sh.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang 26b1d7db5a feat: add smoke test for nydusify commit
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang c14790cb21 feat: add nydusify commit command
add the nydusify commit command to commit a nydus container into a nydus image

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang 19daa7df6f feat: port write overlay upperdir capability
Port the capability to get and write the diff between the overlayfs upper and lower layers.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
dependabot[bot] a0ec880182 build(deps): bump github.com/go-jose/go-jose/v3 in /contrib/nydusify
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/v3.0.3/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.1...v3.0.3)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-08 19:01:13 +08:00
dependabot[bot] c57e7c038c build(deps): bump mio in /contrib/nydus-backend-proxy
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.5 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.5...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-08 19:00:57 +08:00
dependabot[bot] eba6afe5b8 build(deps): bump mio from 0.8.10 to 0.8.11
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.10 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.10...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-07 14:46:07 +08:00
YuQiang aaab560aa9 feat: add fs_version and compressor output of nydus image check
1. Add a rafs_version value, output like 5 or 6.
2. Add a compressor algorithm value, like zstd.
Add rafs_version and compressor JSON output to nydus-image check, so that more info can be retrieved when necessary.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-02-29 14:15:39 +08:00
Yadong Ding 7b3cc503a2 action: add contrib-lint in smoke test
1. Use the official GitHub action for golangci-lint from its authors.
2. Fix golang lint errors with v1.56.
3. Separate test and golang lint: sometimes we need tests without golang lint, and sometimes we just want to do golang lint.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-02-21 11:44:33 +08:00
dependabot[bot] 5fb809605d build(deps): bump github.com/opencontainers/runc in /contrib/ctr-remote
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-20 13:11:38 +08:00
Yan Song abaf9caa16 docs: update outdated dingtalk QR code
And remove the outdated technical meeting schedule.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-02-20 10:17:19 +08:00
dependabot[bot] d7ea50e621 build(deps): bump github.com/opencontainers/runc in /contrib/nydusify
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-18 17:11:09 +08:00
Yadong Ding d12634f998 action: bump nodejs20 github action
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-02-06 09:36:54 +08:00
loheagn 9a1c47bd00 docs: add doc for nydusd failover and hot upgrade
Signed-off-by: loheagn <loheagn@icloud.com>
2024-01-23 20:01:48 +08:00
Yadong Ding 3f47f1ec6d fix: upload-artifact v4 breaking changes
upload-artifact v4 can't upload artifacts with the same name.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-19 11:01:50 +08:00
Yadong Ding 5f26f8ee1c fix: upgrade h2 to 0.3.24 to fix RUSTSEC-2024-0003
ID: RUSTSEC-2024-0003
Advisory: https://rustsec.org/advisories/RUSTSEC-2024-0003
An attacker with an HTTP/2 connection to an affected endpoint can send a steady stream of invalid frames to force the
generation of reset frames on the victim endpoint.
By closing their recv window, the attacker could then force these resets to be queued in an unbounded fashion,
resulting in Out Of Memory (OOM) and high CPU usage.

This fix is corrected in [hyperium/h2#737](https://github.com/hyperium/h2/pull/737), which limits the total number of
internal error resets emitted by default before the connection is closed.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-18 16:58:44 +08:00
Yadong Ding eae9ed7e45 fix: upload-artifact@v4 breaks in release
Error:
Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-18 16:58:44 +08:00
Yadong Ding a3922b8e0d action: bump upload-artifact/download-artifact v4
Since https://github.com/actions/download-artifact/issues/249 are fixed,
we can use the v4 version.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-17 10:04:49 +08:00
Wenhao Ren 9dae4eccee storage: fix the tiny prefetch request for batch chunks
By making batch chunks pass the chunk continuity check and sorting them correctly,
prefetch requests will not be interrupted by batch chunks anymore.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren d7190d9fee action: add convert test for batch chunk
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 8bb53a873a storage: add validation and unit test for batch chunks
1. Add the validation for batch chunks.
2. Add unit test for `BatchInflateContext`.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 7f799ec8bb storage: introduce `BlobCCI` for reading batch chunk info
`BlobCompressionContextInfo` is needed to read batch chunk info.
`BlobCCI` is introduced to simplify the code
and to reduce, via lazy loading, how often this context is fetched.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren c557f99d08 storage: fix the read amplification for batch chunks.
Read amplification for batch chunks was not correctly implemented and could crash.
The read amplification logic is rewritten to fix this bug.
A unit test for read amplification is also added to cover this code.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 676acd0a6f storage: fix the Error type to log the error correctly
Currently, many errors are output as `os error 22`, losing customized log info.
So we change the Error type to correctly output and log the error info
as expected.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren fa72c98ffc rafs: add `is_batch()` for `BlobChunkInfo`
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren b4fe28aad6 rafs: move `compressed_offset` from `BatchInflateContext` to chunk info for batch chunks.
1. `compressed_offset` is used for build-time and runtime sorting of chunk info,
so we move `compressed_offset` from `BatchInflateContext` to the chunk info for batch chunks.

2. The `compressed_size` for blobs in batch mode was not correctly set.
We fix it by setting the value of `dumped_size`.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
dependabot[bot] 596492b932 build(deps): bump github.com/go-jose/go-jose/v3 in /contrib/nydusify
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.0 to 3.0.1.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.0...v3.0.1)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-04 18:52:59 +08:00
Yadong Ding 2743f163b9 deps: update the latest version and sync
Bump containerd v1.7.11 and golang.org/x/crypto v0.17.0.
Resolve GHSA-45x7-px36-x8w8 and GHSA-7ww5-4wqc-m92c.
Update dependencies to the latest version and sync across multiple modules.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-04 14:11:36 +08:00
loheagn 04b4552e03 tests: add smoke test for hot upgrade
Signed-off-by: loheagn <loheagn@icloud.com>
2024-01-04 14:10:31 +08:00
Qinqi Qu 5ecda8c057 bats test: upgrade golang version to 1.21.5
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-01-03 11:54:30 +08:00
Qinqi Qu 8e1799e5df bats test: change rust docker image to Debian 11 bullseye version
The rust:1.72.1 image is based on the Debian 12 bookworm, and requires
an excessively high version of glibc, resulting in the inability to
find the glibc version to run the compiled nydus program on some old
operating systems.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-01-03 11:54:30 +08:00
Yadong Ding f08587928b rust: bump 1.72.1 and fix errors
https://rust-lang.github.io/rust-clippy/master/index.html#non_minimal_cfg
https://rust-lang.github.io/rust-clippy/master/index.html#unwrap_or_default
https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrows_for_generic_args
https://rust-lang.github.io/rust-clippy/master/index.html#reserve_after_initializatio
https://rust-lang.github.io/rust-clippy/master/index.html#/arc_with_non_send_sync
https://rust-lang.github.io/rust-clippy/master/index.html#useless_vec

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-29 08:58:02 +08:00
Xin Yin cf76edbc52 dep: upgrade tokio to 1.35.1
Fix panic after all prefetch workers exit in fscache mode.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-27 20:36:23 +08:00
loheagn 7f27b7ae78 tests: add smoke test for nydusd failover
Signed-off-by: loheagn <loheagn@icloud.com>
2023-12-25 16:35:14 +08:00
Yadong Ding 17c373fc29 nydusify: fix error in go vet
`sudo` in the action will change the go env, so remove sudo.
As the runner user, we can create files in unpacktargz-test instead of temp/unpacktargz-test,
so don't use os.CreateTemp in archive_test.go.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding d5242901f9 action: delete useless env
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding 39daa97bac nydusify: fix unit test fail in utils
utils_test.go:248:
                Error Trace:    /root/nydus/contrib/nydusify/pkg/utils/utils_test.go:248
                Error:          Should be true
                Test:           TestRetryWithHTTP

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding 2cd8ba25bd nydusify: add unit test for nydusify
We had removed the e2e test files in nydusify, so we need to add unit tests
to improve test coverage.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 3164f19ab7 makefile: remove build in test
Use `make test` to run unit tests; it doesn't need a build.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 6675da3186 action: use upload-artifact/download-artifact v3
The master branch is unstable, so change to v3.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 7772082411 action: use sudo in contrib-unit-test-coverage
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 65046b0533 refactor: use ErrSchemeMismatch and ECONNREFUSED
ref: https://github.com/golang/go/issues/44855

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:03:20 +08:00
Yadong Ding b5e88a4f4e chore: upgrade go version to 1.21
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:03:20 +08:00
Yadong Ding 18ba2eda63 action: fix failed to compile `cross v0.2.4`
error: failed to compile `cross v0.2.4`, intermediate artifacts can be found at `/tmp/cargo-installG1Scm4`

Caused by:
  package `home v0.5.9` cannot be built because it requires rustc 1.70.0 or newer, while the currently active rustc version is 1.68.2
  Try re-running cargo install with `--locked`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding ab06841c39 revert build(deps): bump openssl from 0.10.55 to 0.10.60
Revert https://github.com/dragonflyoss/nydus/pull/1513.
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding e9d63f5d3b chore: upgrade dbs-snapshot to 1.5.1
v1.5.1 brings support for ppc64le and riscv64.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding 1a1e8fdb98 action: test build with more architectures
Test build with more architectures, but only use `amd64` in subsequent jobs.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding a4ec9b8061 tests: add go module unit coverage to Codecov
resolve dragonflyoss#1518.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 09:48:48 +08:00
Yadong Ding 54a3395434 action: add contrib-test and build
Use the contrib-test job to test the golang modules in contrib.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 09:48:48 +08:00
Yadong Ding 0458817278 chore: modify repo to dragonflyoss/nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-18 17:41:00 +08:00
Yadong Ding 763786f316 chore: change go module name to nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-18 17:41:00 +08:00
dependabot[bot] d6da88a8f1 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 23.0.3+incompatible to 24.0.7+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v23.0.3...v24.0.7)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-18 13:38:23 +08:00
Yadong Ding 06755fe74b tests: remove useless test files
Since https://github.com/dragonflyoss/nydus/pull/983 we have the new smoke test, so we can remove the
old smoke test files, including nydusify and nydus.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:14:05 +08:00
Yadong Ding 2bca6f216a smoke: use golangci-lint to improve code quality
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding 0e81f2605d nydusify: fix errors found by golangci-lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding f98b6e8332 action: upgrade golangci-lint to v1.54.2
We have some golang lint error in nydusify.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding 1d289e25f9 rust: update to edition2021
Since we are using cargo 1.68.2, we don't need to require edition 2018 any more.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:10:50 +08:00
Yadong Ding 194641a624 chore: remove go test cover
In the golang smoke test, go test doesn't need coverage analysis or a coverage profile.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-13 15:54:42 +08:00
Yiqun Leng 45331d5e18 bats test: move the logic of generating dockerfile into common lib
Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-12-13 15:25:15 +08:00
dependabot[bot] 55a999b9e6 build(deps): bump openssl from 0.10.55 to 0.10.60
Bumps [openssl](https://github.com/sfackler/rust-openssl) from 0.10.55 to 0.10.60.
- [Release notes](https://github.com/sfackler/rust-openssl/releases)
- [Commits](https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.55...openssl-v0.10.60)

---
updated-dependencies:
- dependency-name: openssl
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-13 13:09:44 +08:00
Yan Song 87e3db7186 nydusify: upgrade containerd package
To import some fixups from https://github.com/containerd/containerd/pull/9405.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-13 09:57:20 +08:00
Qinqi Qu a84400d165 misc: update rust-toolchain file to TOML format
1. Move rust-toolchain to rust-toolchain.toml
2. Update the parsing process of rust-toolchain in the test script.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-12-12 20:27:12 +08:00
Yadong Ding d793aee881 action: delete clean-cache
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-11 09:47:54 +08:00
Yadong Ding a3e60c0801 action: benchmark add conversion_elapsed
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-08 09:33:03 +08:00
Yadong Ding 794f7f7293 smoke: add image conversion time in benchmark
ConversionElapsed can express the performance of accelerated image conversion.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-08 09:33:03 +08:00
Xin Yin e12416ef09 upgrade: change to use dbs_snapshot crate
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Xin Yin 7b25d8a059 service: add unit test for upgrade manager
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Xin Yin e0ad430486 feat: support takeover for fscache
Refine the UpgradeManager so it can also store status for the
fscache daemon, and make the takeover feature apply to both fuse and
fscache modes.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Nan Li 16f5ac3d14 feat: implement `takeover` for nydusd fusedev daemon
This patch implements the `save` and `restore` functions in the `fusedev_upgrade` module in the service crate.
To do this,
- This patch adds a new crate named `nydus-upgrade` to the workspace. The `nydus-upgrade` crate has some util functions that help with serialization and deserialization of Rust structs using the versionize and snapshot crates. The crate also has a trait named `StorageBackend` which can be used to store and restore fuse session fds and state data for the upgrade action, and there is also an implementation named `UdsStorageBackend` which uses a unix domain socket to do this.
- As we have to use the same fuse session connection, backend filesystem mount commands, and Vfs to re-mount the rafs for the new daemon (created for "hot upgrade" or failover), this patch adds a new struct named `FusedevState` to hold this information. The `FusedevState` is serialized and stored into the `UdsStorageBackend` (which happens in the `save` function in the `fusedev_upgrade` module) before the new daemon is created, and the `FusedevState` is deserialized and restored from the `UdsStorageBackend` (which happens in the `restore` function in the `fusedev_upgrade` module) when the new daemon is triggered by `takeover`.

Signed-off-by: Nan Li <loheagn@icloud.com>
Signed-off-by: linan.loheagn3 <linan.loheagn3@bytedance.com>
2023-12-07 20:10:13 +08:00
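A schematic of the `StorageBackend` idea described above; the trait and method names follow the commit message, but the exact signatures in the nydus-upgrade crate may differ:
```rust
// Schematic only; the real save/restore also carries fuse session fds over
// the unix domain socket in UdsStorageBackend.
trait StorageBackend {
    /// Persist the serialized daemon state before the new daemon starts.
    fn save(&mut self, state: &[u8]) -> std::io::Result<usize>;
    /// Fetch the state back when the new daemon is triggered by `takeover`.
    fn restore(&mut self) -> std::io::Result<Vec<u8>>;
}
```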
Yadong Ding e4cf98b125 action: add oci in benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-07 10:19:53 +08:00
Yadong Ding b87814b557 smoke: support different snapshooter in bench
We can use overlayfs to test OCI V1 image.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-07 10:19:53 +08:00
Jiang Liu 50b8988751 storage: use connection pool for sqlite
SQLite connections are not thread safe, so use a connection pool to
support multi-threading.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
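A sketch of the pooling pattern, assuming the r2d2 and r2d2_sqlite crates; nydus may wire its pool differently, this only illustrates the approach:
```rust
use r2d2_sqlite::SqliteConnectionManager;

fn open_pool(path: &str) -> Result<r2d2::Pool<SqliteConnectionManager>, r2d2::Error> {
    let manager = SqliteConnectionManager::file(path);
    // Each worker thread checks out its own connection from the pool.
    r2d2::Pool::builder().max_size(8).build(manager)
}
```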
Jiang Liu 1c293cfefd storage: move cas db from util into storage
Move cas db from util into storage.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
Jiang Liu bfc171a933 util: refine database structure for CAS
Refine the sqlite database structure for storing CAS information.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
xwb1136021767 6ca3ca7dc0 utils: introduce sqlite to store CAS related information
Introduce sqlite to store CAS related information.

Signed-off-by: Huang Jianan <jnhuang95@gmail.com>
Signed-off-by: xwb1136021767 <1136021767@qq.com>
2023-12-06 15:54:09 +08:00
321 changed files with 25621 additions and 6461 deletions

.github/copilot-instructions.md (new vendored file, 250 lines)

@@ -0,0 +1,250 @@
# GitHub Copilot Instructions for Nydus
## Project Overview
Nydus is a high-performance container image service that implements a content-addressable file system based on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
### Key Components
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
- **nydusctl**: CLI client for managing and querying nydusd daemon
- **nydus-service**: Library crate for integrating Nydus services into other projects
## Architecture Guidelines
### Crate Structure
```
- api/ # Nydus Image Service APIs and data structures
- builder/ # Image building and conversion logic
- rafs/ # RAFS filesystem implementation
- service/ # Daemon and service management framework
- storage/ # Core storage subsystem with backends and caching
- utils/ # Common utilities and helper functions
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
```
### Key Technologies
- **Language**: Rust with memory safety focus
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
- **Compression**: LZ4, Gzip, Zstd
- **Async Runtime**: Tokio (current-thread runtime for io-uring compatibility)
## Code Style and Patterns
### Rust Conventions
- Use `#![deny(warnings)]` in all binary crates
- Follow standard Rust naming conventions (snake_case, PascalCase)
- Prefer `anyhow::Result` for error handling in applications
- Use custom error types with `thiserror` for libraries
- Apply `#[macro_use]` for frequently used external crates like `log`
- Always format the code with `cargo fmt`
- Use `clippy` for linting and follow its suggestions
### Error Handling
```rust
// Prefer anyhow for applications
use anyhow::{bail, Context, Result};
// Use custom error types for libraries
use thiserror::Error;
#[derive(Error, Debug)]
pub enum NydusError {
#[error("Invalid arguments: {0}")]
InvalidArguments(String),
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
}
```
### Logging Patterns
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
- Include context in error messages: `.with_context(|| "description")`
- Use `info!`, `warn!`, `error!` macros consistently, as in the sketch below
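A minimal sketch of these logging patterns, assuming `log` and `anyhow` as dependencies; the blob-loading helper is illustrative, not an actual Nydus API:
```rust
use anyhow::{Context, Result};
use log::{info, warn};

// Illustrative helper showing leveled logging plus error context.
fn load_blob(path: &str) -> Result<Vec<u8>> {
    info!("loading blob from {}", path);
    let data = std::fs::read(path)
        .with_context(|| format!("failed to read blob file {}", path))?;
    if data.is_empty() {
        warn!("blob {} is empty", path);
    }
    Ok(data)
}
```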
### Configuration Management
- Use `serde` for JSON configuration serialization/deserialization
- Support both file-based and environment variable configuration
- Validate configurations at startup with clear error messages
- Follow the `ConfigV2` pattern for versioned configurations (sketched below)
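A simplified sketch of the versioned-configuration pattern; the field set here is hypothetical and much smaller than the real `ConfigV2`:
```rust
use serde::{Deserialize, Serialize};

// Hypothetical, trimmed-down versioned config for illustration.
#[derive(Debug, Default, Serialize, Deserialize)]
pub struct ConfigV2 {
    /// Configuration format version.
    pub version: u32,
    /// Optional registry backend settings.
    pub registry: Option<RegistryConfig>,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct RegistryConfig {
    pub host: String,
    pub auth: Option<String>,
}

impl ConfigV2 {
    /// Load a JSON config file and validate it at startup.
    pub fn from_file(path: &str) -> anyhow::Result<Self> {
        let text = std::fs::read_to_string(path)?;
        let cfg: ConfigV2 = serde_json::from_str(&text)?;
        anyhow::ensure!(cfg.version >= 2, "unsupported config version {}", cfg.version);
        Ok(cfg)
    }
}
```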
## Development Guidelines
### Storage Backend Development
When implementing new storage backends:
- Implement the `BlobBackend` trait
- Support timeout, retry, and connection management
- Add configuration in the backend config structure
- Consider proxy support for high availability
- Implement proper error handling and logging (see the retry sketch below)
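The snippet below is a heavily simplified sketch; `SimpleBlobBackend` is hypothetical, not the real `BlobBackend` trait from the storage crate. It only illustrates how bounded retries with logging can wrap any backend:
```rust
use std::io::Result;

// Hypothetical, simplified backend trait for illustration only.
pub trait SimpleBlobBackend: Send + Sync {
    /// Read into `buf` from blob `blob_id` starting at `offset`.
    fn try_read(&self, blob_id: &str, buf: &mut [u8], offset: u64) -> Result<usize>;
}

// Generic wrapper: any backend gains bounded retries with logging.
pub struct Retry<B> {
    pub inner: B,
    pub retries: usize,
}

impl<B: SimpleBlobBackend> SimpleBlobBackend for Retry<B> {
    fn try_read(&self, blob_id: &str, buf: &mut [u8], offset: u64) -> Result<usize> {
        let mut last_err = None;
        for attempt in 0..=self.retries {
            match self.inner.try_read(blob_id, buf, offset) {
                Ok(n) => return Ok(n),
                Err(e) => {
                    log::warn!("read {} failed (attempt {}): {}", blob_id, attempt, e);
                    last_err = Some(e);
                }
            }
        }
        Err(last_err.unwrap())
    }
}
```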
### Daemon Service Development
- Use the `NydusDaemon` trait for service implementations
- Support save/restore for hot upgrade functionality
- Implement proper state machine transitions
- Use `DaemonController` for lifecycle management; a transition sketch follows
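A toy state machine illustrating the transition discipline; the states and events here are hypothetical, not the actual `NydusDaemon` definitions:
```rust
// Hypothetical daemon states and events, for illustration.
#[derive(Debug, Clone, Copy, PartialEq)]
enum DaemonState {
    Init,
    Running,
    Upgrading,
    Stopped,
}

// Only explicitly allowed transitions succeed; everything else is rejected.
fn transition(from: DaemonState, event: &str) -> Option<DaemonState> {
    use DaemonState::*;
    match (from, event) {
        (Init, "start") => Some(Running),
        (Running, "takeover") => Some(Upgrading),
        (Upgrading, "restored") => Some(Running),
        (Running, "stop") | (Upgrading, "stop") => Some(Stopped),
        _ => None,
    }
}
```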
### RAFS Filesystem Features
- Support both RAFS v5 and v6 formats
- Implement chunk-level deduplication
- Handle prefetch optimization for container startup
- Support overlay filesystem operations
- Maintain POSIX compatibility
### API Development
- Use versioned APIs (v1, v2) with backward compatibility
- Implement HTTP endpoints with proper error handling
- Support both Unix socket and TCP communication
- Follow OpenAPI specification patterns
## Testing Patterns
### Unit Tests
- Test individual functions and modules in isolation
- Use `#[cfg(test)]` modules within source files
- Mock external dependencies when necessary
- Focus on error conditions and edge cases (example below)
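For instance, with a made-up chunk-counting helper:
```rust
// Illustrative helper plus its in-file test module.
fn chunk_count(total: u64, chunk_size: u64) -> u64 {
    (total + chunk_size - 1) / chunk_size
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn counts_partial_chunks() {
        // Edge cases: empty input, single byte, exact and partial chunks.
        assert_eq!(chunk_count(0, 1024), 0);
        assert_eq!(chunk_count(1, 1024), 1);
        assert_eq!(chunk_count(2048, 1024), 2);
        assert_eq!(chunk_count(2049, 1024), 3);
    }
}
```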
### Integration Tests
- Place integration tests in `tests/` directory
- Test complete workflows and component interactions
- Use temporary directories for filesystem operations
- Clean up resources properly in test teardown (see the example below)
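A minimal example of the temporary-directory discipline, assuming `tempfile` as a dev-dependency:
```rust
// tests/roundtrip.rs: illustrative only.
use std::fs;

#[test]
fn writes_and_reads_back() {
    let dir = tempfile::tempdir().expect("create temp dir");
    let path = dir.path().join("blob");
    fs::write(&path, b"hello").unwrap();
    assert_eq!(fs::read(&path).unwrap(), b"hello");
    // The directory and its contents are removed when `dir` is dropped.
}
```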
### Smoke Tests
- Located in `smoke/` directory using Go
- Test real-world scenarios with actual images
- Verify performance and functionality
- Use Bats framework for shell-based testing
## Performance Considerations
### I/O Optimization
- Use async I/O patterns with Tokio
- Implement prefetching for predictable access patterns
- Optimize chunk size (default 1MB) for workload characteristics
- Consider io-uring for high-performance scenarios
### Memory Management
- Use `Arc<T>` for shared ownership of large objects
- Implement lazy loading for metadata structures
- Consider memory mapping for large files
- Profile memory usage in performance-critical paths; a lazy-loading sketch follows
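A sketch combining lazy loading with `Arc` sharing; the metadata type is illustrative:
```rust
use std::sync::{Arc, OnceLock};

// Hypothetical large metadata table, for illustration.
struct Metadata {
    entries: Vec<u64>,
}

fn metadata() -> Arc<Metadata> {
    static CACHE: OnceLock<Arc<Metadata>> = OnceLock::new();
    CACHE
        .get_or_init(|| {
            // Loaded once on first access; later calls just clone the Arc,
            // so the table is shared rather than copied.
            Arc::new(Metadata { entries: load_entries() })
        })
        .clone()
}

fn load_entries() -> Vec<u64> {
    // Stand-in for an expensive parse of on-disk metadata.
    vec![0; 1024]
}
```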
### Caching Strategy
- Implement blob caching with configurable backends
- Support compression in cache to save space
- Use chunk-level caching with efficient eviction policies
- Consider cache warming strategies for frequently accessed data
## Security Guidelines
### Data Integrity
- Implement end-to-end digest validation
- Support multiple hash algorithms (SHA256, Blake3)
- Verify chunk integrity on read operations (sketched below)
- Detect and prevent supply chain attacks
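A minimal digest check, assuming `sha2` and `hex` as dependencies; the function shape is illustrative, not the actual Nydus validation path:
```rust
use sha2::{Digest, Sha256};

// Compare a chunk's SHA256 against an expected hex-encoded digest.
fn verify_chunk(data: &[u8], expected_hex: &str) -> bool {
    let actual = Sha256::digest(data);
    match hex::decode(expected_hex) {
        Ok(expected) => actual.as_slice() == expected.as_slice(),
        Err(_) => false,
    }
}
```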
### Authentication
- Support registry authentication (basic auth, bearer tokens)
- Handle credential rotation and refresh
- Implement secure credential storage
- Support mutual TLS for backend connections
## Specific Code Patterns
### Configuration Loading
```rust
// Standard pattern for configuration loading
let mut config = match config_path {
Some(path) => ConfigV2::from_file(path)?,
None => ConfigV2::default(),
};
// Environment variable override
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
config.update_registry_auth_info(&auth);
}
```
### Daemon Lifecycle
```rust
// Standard daemon initialization pattern
let daemon = create_daemon(config, build_info)?;
DAEMON_CONTROLLER.set_daemon(daemon);
// Event loop management
if DAEMON_CONTROLLER.is_active() {
DAEMON_CONTROLLER.run_loop();
}
// Graceful shutdown
DAEMON_CONTROLLER.shutdown();
```
### Blob Access Pattern
```rust
// Standard blob read pattern
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
let blob_device = factory.get_device(&blob_info)?;
blob_device.read(&mut bio)?;
```
## Documentation Standards
### Code Documentation
- Document all public APIs with `///` comments
- Include examples in documentation, as in the sketch below
- Document safety requirements for unsafe code
- Explain complex algorithms and data structures
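For example, a public helper documented in this style (the function itself is made up):
```rust
/// Aligns `value` up to the next multiple of `align`.
///
/// For example, `align_up(5, 4)` and `align_up(8, 4)` both return 8.
///
/// # Panics
///
/// Panics if `align` is not a power of two.
pub fn align_up(value: u64, align: u64) -> u64 {
    assert!(align.is_power_of_two());
    (value + align - 1) & !(align - 1)
}
```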
### Architecture Documentation
- Maintain design documents in `docs/` directory
- Update documentation when adding new features
- Include diagrams for complex interactions
- Document configuration options comprehensively
### Release Notes
- Document breaking changes clearly
- Include migration guides for major versions
- Highlight performance improvements
- List new features and bug fixes
## Container and Cloud Native Patterns
### OCI Compatibility
- Maintain compatibility with OCI image spec
- Support standard container runtimes (runc, Kata)
- Implement proper layer handling and manifest generation
- Support multi-architecture images
### Kubernetes Integration
- Design for Kubernetes CRI integration
- Support containerd snapshotter pattern
- Handle pod lifecycle events appropriately
- Implement proper resource cleanup
### Cloud Storage Integration
- Support major cloud providers (AWS S3, Alibaba OSS)
- Implement proper credential management
- Handle network interruptions gracefully
- Support cross-region replication patterns
## Build and Release
### Build Configuration
- Use `Cargo.toml` workspace configuration
- Support cross-compilation for multiple architectures
- Implement proper feature flags for optional components
- Use consistent dependency versioning
### Release Process
- Tag releases with semantic versioning
- Generate release binaries for supported platforms
- Update documentation with release notes
- Validate release artifacts before publishing
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.

.github/workflows/Dockerfile.cross (new file, 40 lines)
View File

@ -0,0 +1,40 @@
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ARG RUST_VERSION=1.84.0
RUN apt-get update && apt-get install -y \
software-properties-common \
build-essential \
curl \
git \
libssl-dev \
pkg-config \
cmake \
gcc-riscv64-linux-gnu \
g++-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update && apt-get install -y \
gcc-14 \
g++-14 \
gcc-14-riscv64-linux-gnu \
g++-14-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /root
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
riscv64gc-unknown-linux-gnu
RUN mkdir -p ~/.cargo && echo '\
[target.riscv64gc-unknown-linux-gnu]\n\
linker = "riscv64-linux-gnu-gcc-14"' > ~/.cargo/config.toml
CMD ["/bin/bash"]

View File

@ -17,18 +17,17 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Setup Golang - name: Setup Golang
uses: actions/setup-go@v4 uses: actions/setup-go@v5
with: with:
go-version-file: 'go.work' go-version-file: 'go.work'
cache-dependency-path: "**/*.sum" cache-dependency-path: "**/*.sum"
- name: Build Contrib - name: Build Contrib
run: | run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.51.2
make -e DOCKER=false nydusify-release make -e DOCKER=false nydusify-release
- name: Upload Nydusify - name: Upload Nydusify
uses: actions/upload-artifact@master uses: actions/upload-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify path: contrib/nydusify/cmd/nydusify
@ -37,18 +36,18 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Rust Cache - name: Rust Cache
uses: Swatinem/rust-cache@v2.7.0 uses: Swatinem/rust-cache@v2
with: with:
cache-on-failure: true cache-on-failure: true
shared-key: Linux-cargo-amd64 shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus - name: Build Nydus
run: | run: |
rustup component add rustfmt clippy make release
make
- name: Upload Nydus Binaries - name: Upload Nydus Binaries
uses: actions/upload-artifact@master uses: actions/upload-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: | path: |
@ -65,6 +64,53 @@ jobs:
echo "|:----------------:|:---:|:------ " >> $GITHUB_STEP_SUMMARY echo "|:----------------:|:---:|:------ " >> $GITHUB_STEP_SUMMARY
echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |" >> $GITHUB_STEP_SUMMARY echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |" >> $GITHUB_STEP_SUMMARY
benchmark-oci:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=oci
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-oci.json
export SNAPSHOTTER=overlayfs
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: smoke/${{ matrix.image }}-oci.json
benchmark-fsversion-v5: benchmark-fsversion-v5:
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: [contrib-build, nydus-build] needs: [contrib-build, nydus-build]
@ -85,20 +131,20 @@ jobs:
tag: 8-al2022-jdk tag: 8-al2022-jdk
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: target/release path: target/release
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd path: contrib/nydusify/cmd
- name: Prepare Environment - name: Prepare Environment
run: | run: |
sudo bash misc/performance/prepare.sh sudo bash misc/prepare.sh
- name: BenchMark Test - name: BenchMark Test
run: | run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }} export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
@ -106,7 +152,7 @@ jobs:
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v5.json export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v5.json
sudo -E make smoke-benchmark sudo -E make smoke-benchmark
- name: Save BenchMark Result - name: Save BenchMark Result
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: benchmark-fsversion-v5-${{ matrix.image }} name: benchmark-fsversion-v5-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v5.json path: smoke/${{ matrix.image }}-fsversion-v5.json
@ -131,20 +177,20 @@ jobs:
tag: 8-al2022-jdk tag: 8-al2022-jdk
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: target/release path: target/release
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd path: contrib/nydusify/cmd
- name: Prepare Environment - name: Prepare Environment
run: | run: |
sudo bash misc/performance/prepare.sh sudo bash misc/prepare.sh
- name: BenchMark Test - name: BenchMark Test
run: | run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }} export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
@ -152,7 +198,7 @@ jobs:
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v6.json export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v6.json
sudo -E make smoke-benchmark sudo -E make smoke-benchmark
- name: Save BenchMark Result - name: Save BenchMark Result
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: benchmark-fsversion-v6-${{ matrix.image }} name: benchmark-fsversion-v6-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v6.json path: smoke/${{ matrix.image }}-fsversion-v6.json
@ -177,20 +223,20 @@ jobs:
tag: 8-al2022-jdk tag: 8-al2022-jdk
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: target/release path: target/release
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd path: contrib/nydusify/cmd
- name: Prepare Environment - name: Prepare Environment
run: | run: |
sudo bash misc/performance/prepare.sh sudo bash misc/prepare.sh
- name: BenchMark Test - name: BenchMark Test
run: | run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }} export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
@ -198,14 +244,14 @@ jobs:
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-zran.json export BENCHMARK_METRIC_FILE=${{ matrix.image }}-zran.json
sudo -E make smoke-benchmark sudo -E make smoke-benchmark
- name: Save BenchMark Result - name: Save BenchMark Result
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: benchmark-zran-${{ matrix.image }} name: benchmark-zran-${{ matrix.image }}
path: smoke/${{ matrix.image }}-zran.json path: smoke/${{ matrix.image }}-zran.json
benchmark-result: benchmark-result:
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: [benchmark-fsversion-v5, benchmark-fsversion-v6, benchmark-zran] needs: [benchmark-oci, benchmark-fsversion-v5, benchmark-fsversion-v6, benchmark-zran]
strategy: strategy:
matrix: matrix:
include: include:
@ -223,25 +269,27 @@ jobs:
tag: 8-al2022-jdk tag: 8-al2022-jdk
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Download benchmark-oci
uses: actions/download-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v5 - name: Download benchmark-fsversion-v5
uses: actions/download-artifact@v3 uses: actions/download-artifact@v4
with: with:
name: benchmark-fsversion-v5-${{ matrix.image }} name: benchmark-fsversion-v5-${{ matrix.image }}
path: benchmark-result path: benchmark-result
- name: Download benchmark-fsversion-v6 - name: Download benchmark-fsversion-v6
uses: actions/download-artifact@v3 uses: actions/download-artifact@v4
with: with:
name: benchmark-fsversion-v6-${{ matrix.image }} name: benchmark-fsversion-v6-${{ matrix.image }}
path: benchmark-result path: benchmark-result
- name: Download benchmark-zran - name: Download benchmark-zran
uses: actions/download-artifact@v3 uses: actions/download-artifact@v4
with: with:
name: benchmark-zran-${{ matrix.image }} name: benchmark-zran-${{ matrix.image }}
path: benchmark-result path: benchmark-result
- uses: geekyeggo/delete-artifact@v2
with:
name: "*-${{matrix.image}}"
- name: Benchmark Summary - name: Benchmark Summary
run: | run: |
case ${{matrix.image}} in case ${{matrix.image}} in
@ -266,15 +314,16 @@ jobs:
esac esac
cd benchmark-result cd benchmark-result
metric_files=( metric_files=(
"${{ matrix.image }}-oci.json"
"${{ matrix.image }}-fsversion-v5.json" "${{ matrix.image }}-fsversion-v5.json"
"${{ matrix.image }}-fsversion-v6.json" "${{ matrix.image }}-fsversion-v6.json"
"${{ matrix.image }}-zran.json" "${{ matrix.image }}-zran.json"
) )
echo "| bench-result | e2e-time(s) | read-count | read-amount(MB) | image-size(MB) |" >> $GITHUB_STEP_SUMMARY echo "| bench-result | e2e-time(s) | read-count | read-amount(MB) | image-size(MB) |convert-time(s)|" >> $GITHUB_STEP_SUMMARY
echo "|:-------------|:----------:|:----------:|:---------------:|:--------:|" >> $GITHUB_STEP_SUMMARY echo "|:-------------|:-----------:|:----------:|:---------------:|:--------------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for file in "${metric_files[@]}"; do for file in "${metric_files[@]}"; do
name=$(basename "$file" .json | sed 's/^[^-]*-\(.*\)$/\1/') name=$(basename "$file" .json | sed 's/^[^-]*-\(.*\)$/\1/')
data=$(jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024))"' "$file" | \ data=$(jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024)) \(.conversion_elapsed / 1e9)"' "$file" | \
awk '{ printf "%.2f | %.0f | %.2f | %.2f", $1, $2, $3, $4 }') awk '{ printf "%.2f | %.0f | %.2f | %.2f | %.2f", $1, $2, $3, $4, $5 }')
echo "| $name | $data |" >> $GITHUB_STEP_SUMMARY echo "| $name | $data |" >> $GITHUB_STEP_SUMMARY
done done

View File

@ -1,33 +0,0 @@
name: Cleanup caches by a branch
on:
pull_request:
types:
- closed
jobs:
cleanup:
runs-on: ubuntu-22.04
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
REPO=${{ github.repository }}
BRANCH="refs/pull/${{ github.event.pull_request.number }}/merge"
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@ -18,18 +18,18 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Setup Golang - name: Setup Golang
uses: actions/setup-go@v4 uses: actions/setup-go@v5
with: with:
go-version-file: 'go.work' go-version-file: 'go.work'
cache-dependency-path: "**/*.sum" cache-dependency-path: "**/*.sum"
- name: Build Contrib - name: Build Contrib
run: | run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.51.2 curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.61.0
make -e DOCKER=false nydusify-release make -e DOCKER=false nydusify-release
- name: Upload Nydusify - name: Upload Nydusify
uses: actions/upload-artifact@master uses: actions/upload-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify path: contrib/nydusify/cmd/nydusify
@ -38,18 +38,18 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Rust Cache - name: Rust Cache
uses: Swatinem/rust-cache@v2.7.0 uses: Swatinem/rust-cache@v2
with: with:
cache-on-failure: true cache-on-failure: true
shared-key: Linux-cargo-amd64 shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus - name: Build Nydus
run: | run: |
rustup component add rustfmt clippy make release
make
- name: Upload Nydus Binaries - name: Upload Nydus Binaries
uses: actions/upload-artifact@master uses: actions/upload-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: | path: |
@ -60,7 +60,7 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Build fsck.erofs - name: Build fsck.erofs
run: | run: |
sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev
@ -68,7 +68,7 @@ jobs:
cd erofs-utils && git checkout v1.6 && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd .. cd erofs-utils && git checkout v1.6 && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/ sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/
- name: Upload fsck.erofs - name: Upload fsck.erofs
uses: actions/upload-artifact@master uses: actions/upload-artifact@v4
with: with:
name: fsck-erofs-artifact name: fsck-erofs-artifact
path: | path: |
@ -79,25 +79,25 @@ jobs:
needs: [nydusify-build, nydus-build, fsck-erofs-build] needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Login ghcr registry - name: Login ghcr registry
uses: docker/login-action@v2 uses: docker/login-action@v3
with: with:
registry: ${{ env.REGISTRY }} registry: ${{ env.REGISTRY }}
username: ${{ github.actor }} username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }} password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download fsck.erofs - name: Download fsck.erofs
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: fsck-erofs-artifact name: fsck-erofs-artifact
path: /usr/local/bin path: /usr/local/bin
@ -139,11 +139,11 @@ jobs:
--source localhost:5000/$I \ --source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref --target localhost:5000/$I:nydus-nightly-oci-ref
sudo fsck.erofs -d1 output/nydus_bootstrap sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output sudo rm -rf ./output
done done
- name: Save Nydusify Metric - name: Save Nydusify Metric
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: convert-zran-metric name: convert-zran-metric
path: convert-zran path: convert-zran
@ -153,20 +153,20 @@ jobs:
needs: [nydusify-build, nydus-build] needs: [nydusify-build, nydus-build]
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Login ghcr registry - name: Login ghcr registry
uses: docker/login-action@v2 uses: docker/login-action@v3
with: with:
registry: ${{ env.REGISTRY }} registry: ${{ env.REGISTRY }}
username: ${{ github.actor }} username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }} password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: /usr/local/bin path: /usr/local/bin
@ -197,7 +197,7 @@ jobs:
--target localhost:5000/$I:nydus-nightly-v5 --target localhost:5000/$I:nydus-nightly-v5
done done
- name: Save Nydusify Metric - name: Save Nydusify Metric
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: convert-native-v5-metric name: convert-native-v5-metric
path: convert-native-v5 path: convert-native-v5
@ -207,25 +207,25 @@ jobs:
needs: [nydusify-build, nydus-build, fsck-erofs-build] needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Login ghcr registry - name: Login ghcr registry
uses: docker/login-action@v2 uses: docker/login-action@v3
with: with:
registry: ${{ env.REGISTRY }} registry: ${{ env.REGISTRY }}
username: ${{ github.actor }} username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }} password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: /usr/local/bin path: /usr/local/bin
- name: Download fsck.erofs - name: Download fsck.erofs
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: fsck-erofs-artifact name: fsck-erofs-artifact
path: /usr/local/bin path: /usr/local/bin
@ -256,42 +256,112 @@ jobs:
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \ sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6 --target localhost:5000/$I:nydus-nightly-v6
sudo fsck.erofs -d1 output/nydus_bootstrap sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output sudo rm -rf ./output
done done
- name: Save Nydusify Metric - name: Save Nydusify Metric
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: convert-native-v6-metric name: convert-native-v6-metric
path: convert-native-v6 path: convert-native-v6
convert-metric: convert-native-v6-batch:
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: [convert-zran, convert-native-v5, convert-native-v6] needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check RAFS v6 batch images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6-batch
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6-batch"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v6-batch/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
convert-metric:
runs-on: ubuntu-latest
needs: [convert-zran, convert-native-v5, convert-native-v6, convert-native-v6-batch]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download Zran Metric - name: Download Zran Metric
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: convert-zran-metric name: convert-zran-metric
path: convert-zran path: convert-zran
- name: Download V5 Metric - name: Download V5 Metric
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: convert-native-v5-metric name: convert-native-v5-metric
path: convert-native-v5 path: convert-native-v5
- name: Download V6 Metric - name: Download V6 Metric
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: convert-native-v6-metric name: convert-native-v6-metric
path: convert-native-v6 path: convert-native-v6
- name: Download V6 Batch Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
- name: Summary - name: Summary
run: | run: |
echo "## Image Size(MB)" > $GITHUB_STEP_SUMMARY echo "## Image Size(MB)" > $GITHUB_STEP_SUMMARY
echo "> Compare the size of OCI image and Nydus image." echo "> Compare the size of OCI image and Nydus image."
echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|" >> $GITHUB_STEP_SUMMARY echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|oci/nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:--------:|:------------:|:----------:|:----------:|" >> $GITHUB_STEP_SUMMARY echo "|:--------:|:------------:|:----------:|:----------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-zran/${I}.json) / 1048576")") zranSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-zran/${I}.json) / 1048576")")
zranTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-zran/${I}.json) / 1048576")") zranTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-zran/${I}.json) / 1048576")")
@ -299,17 +369,20 @@ jobs:
v5TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v5/${I}.json) / 1048576")") v5TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v5/${I}.json) / 1048576")")
v6SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6/${I}.json) / 1048576")") v6SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6/${I}.json) / 1048576")")
v6TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6/${I}.json) / 1048576")") v6TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6/${I}.json) / 1048576")")
echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|" >> $GITHUB_STEP_SUMMARY batchSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
batchTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|${batchSourceImageSize}/${batchTargetImageSize}|" >> $GITHUB_STEP_SUMMARY
done done
echo "## Conversion Time(ms)" >> $GITHUB_STEP_SUMMARY echo "## Conversion Time(ms)" >> $GITHUB_STEP_SUMMARY
echo "> Time elapsed to convert OCI image to Nydus image." echo "> Time elapsed to convert OCI image to Nydus image."
echo "|image name|nydus-zran|nydus-v5|nydus-v6|" >> $GITHUB_STEP_SUMMARY echo "|image name|nydus-zran|nydus-v5|nydus-v6|nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:---:|:--:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY echo "|:---:|:--:|:-------:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-zran/${I}.json) / 1000000")") zranConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-zran/${I}.json) / 1000000")")
v5ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v5/${I}.json) / 1000000")") v5ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v5/${I}.json) / 1000000")")
v6ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6/${I}.json) / 1000000")") v6ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6/${I}.json) / 1000000")")
echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|" >> $GITHUB_STEP_SUMMARY batchConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6-batch/${I}.json) / 1000000")")
echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|${batchConversionElapsed}|" >> $GITHUB_STEP_SUMMARY
done done
- uses: geekyeggo/delete-artifact@v2 - uses: geekyeggo/delete-artifact@v2
with: with:

.github/workflows/miri.yml (new file, 45 lines)
View File

@ -0,0 +1,45 @@
name: Miri Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
nydus-unit-test-with-miri:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Install Miri
run: |
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
- name: Unit Test with Miri
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
grep -C 2 'Undefined Behavior' miri-ut.log

View File

@ -19,26 +19,60 @@ jobs:
matrix: matrix:
arch: [amd64, arm64, ppc64le, riscv64] arch: [amd64, arm64, ppc64le, riscv64]
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v4
- name: Cache cargo - name: Cache cargo
uses: Swatinem/rust-cache@v2.7.0 uses: Swatinem/rust-cache@v2
with: with:
cache-on-failure: true cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }} shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs - uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build nydus-rs Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus-rs RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: | run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu") declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]} RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.4 cross
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo mv target/$RUST_TARGET/release/nydus-image . sudo mv target/$RUST_TARGET/release/nydus-image .
sudo mv target/$RUST_TARGET/release/nydusctl . sudo mv target/$RUST_TARGET/release/nydusctl .
sudo cp -r misc/configs . sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/ sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: nydus-artifacts-linux-${{ matrix.arch }} name: nydus-artifacts-linux-${{ matrix.arch }}
path: | path: |
@ -48,17 +82,18 @@ jobs:
configs configs
nydus-macos: nydus-macos:
runs-on: macos-11 runs-on: macos-13
strategy: strategy:
matrix: matrix:
arch: [amd64, arm64] arch: [amd64, arm64]
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v4
- name: Cache cargo - name: Cache cargo
uses: Swatinem/rust-cache@v2.7.0 uses: Swatinem/rust-cache@v2
with: with:
cache-on-failure: true cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }} shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: build - name: build
run: | run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then if [[ "${{matrix.arch}}" == "amd64" ]]; then
@ -66,15 +101,14 @@ jobs:
else else
RUST_TARGET="aarch64-apple-darwin" RUST_TARGET="aarch64-apple-darwin"
fi fi
cargo install --version 0.2.4 cross cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET} rustup target add ${RUST_TARGET}
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo cp -r misc/configs . sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/ sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: nydus-artifacts-darwin-${{ matrix.arch }} name: nydus-artifacts-darwin-${{ matrix.arch }}
path: | path: |
@ -91,24 +125,22 @@ jobs:
env: env:
DOCKER: false DOCKER: false
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v4
- name: Setup Golang - name: Setup Golang
uses: actions/setup-go@v4 uses: actions/setup-go@v5
with: with:
go-version-file: 'go.work' go-version-file: 'go.work'
cache-dependency-path: "**/*.sum" cache-dependency-path: "**/*.sum"
- name: build contrib go components - name: build contrib go components
run: | run: |
make -e GOARCH=${{ matrix.arch }} contrib-release make -e GOARCH=${{ matrix.arch }} contrib-release
sudo mv contrib/ctr-remote/bin/ctr-remote .
sudo mv contrib/nydusify/cmd/nydusify . sudo mv contrib/nydusify/cmd/nydusify .
sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs . sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs .
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: nydus-artifacts-linux-${{ matrix.arch }} name: nydus-artifacts-linux-${{ matrix.arch }}-contrib
path: | path: |
ctr-remote
nydusify nydusify
nydus-overlayfs nydus-overlayfs
containerd-nydus-grpc containerd-nydus-grpc
@ -122,9 +154,10 @@ jobs:
needs: [nydus-linux, contrib-linux] needs: [nydus-linux, contrib-linux]
steps: steps:
- name: download artifacts - name: download artifacts
uses: actions/download-artifact@v3 uses: actions/download-artifact@v4
with: with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }} pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static path: nydus-static
- name: prepare release tarball - name: prepare release tarball
run: | run: |
@ -138,9 +171,9 @@ jobs:
sha256sum $tarball > $shasum sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: nydus-release-tarball name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: | path: |
${{ env.tarball }} ${{ env.tarball }}
${{ env.tarball_shasum }} ${{ env.tarball_shasum }}
@ -155,7 +188,7 @@ jobs:
needs: [nydus-macos] needs: [nydus-macos]
steps: steps:
- name: download artifacts - name: download artifacts
uses: actions/download-artifact@v3 uses: actions/download-artifact@v4
with: with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }} name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static path: nydus-static
@ -171,9 +204,9 @@ jobs:
sha256sum $tarball > $shasum sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
name: nydus-release-tarball name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: | path: |
${{ env.tarball }} ${{ env.tarball }}
${{ env.tarball_shasum }} ${{ env.tarball_shasum }}
@ -183,9 +216,10 @@ jobs:
needs: [prepare-tarball-linux, prepare-tarball-darwin] needs: [prepare-tarball-linux, prepare-tarball-darwin]
steps: steps:
- name: download artifacts - name: download artifacts
uses: actions/download-artifact@v3 uses: actions/download-artifact@v4
with: with:
name: nydus-release-tarball pattern: nydus-release-tarball-*
merge-multiple: true
path: nydus-tarball path: nydus-tarball
- name: prepare release env - name: prepare release env
run: | run: |
@ -205,3 +239,87 @@ jobs:
generate_release_notes: true generate_release_notes: true
files: | files: |
${{ env.tarballs }} ${{ env.tarballs }}
goreleaser:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
arch: [amd64, arm64]
os: [linux]
needs: [nydus-linux, contrib-linux]
permissions:
contents: write
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
submodules: recursive
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare context
run: |
chmod +x nydus-static/*
export GOARCH=${{ matrix.arch }}
echo "GOARCH: $GOARCH"
sh ./goreleaser.sh
- name: Check GoReleaser config
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
with:
version: latest
args: check
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
id: run-goreleaser
with:
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate subject
id: hash
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
if test "$hashes" = ""; then # goreleaser < v1.13.0
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
hashes=$(cat $checksum_file | base64 -w0)
fi
echo "hashes=$hashes" >> $GITHUB_OUTPUT
- name: Set tag output
id: tag
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
provenance:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
needs: [goreleaser]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
draft-release: true

View File

@ -14,67 +14,133 @@ on:
env: env:
CARGO_TERM_COLOR: always CARGO_TERM_COLOR: always
IMAGE: wordpress
TAG: 6.1.1
jobs: jobs:
contrib-build: contrib-build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Setup Golang - name: Setup Golang
uses: actions/setup-go@v4 uses: actions/setup-go@v5
with: with:
go-version-file: 'go.work' go-version-file: 'go.work'
cache-dependency-path: "**/*.sum" cache-dependency-path: "**/*.sum"
- name: Build Contrib - name: Build Contrib
run: | run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2 make -e DOCKER=false GOARCH=${{ matrix.arch }} contrib-release
make -e DOCKER=false nydusify-release
make -e DOCKER=false contrib-test
- name: Upload Nydusify - name: Upload Nydusify
uses: actions/upload-artifact@master if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd path: contrib/nydusify/cmd
nydus-build: contrib-lint:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Rust Cache
uses: Swatinem/rust-cache@v2.7.0
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Build Nydus
run: |
rustup component add rustfmt clippy
make
- name: Upload Nydus Binaries
uses: actions/upload-artifact@master
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
nydusd-build-macos:
runs-on: macos-11
strategy: strategy:
matrix: matrix:
arch: [amd64, arm64] include:
- path: contrib/nydusify
- path: contrib/nydus-overlayfs
steps: steps:
- uses: actions/checkout@v3 - name: Checkout
- name: Cache cargo uses: actions/checkout@v4
uses: Swatinem/rust-cache@v2.7.0 - name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache: false
- name: Lint
uses: golangci/golangci-lint-action@v6
with:
version: v1.64
working-directory: ${{ matrix.path }}
args: --timeout=10m --verbose
nydus-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with: with:
cache-on-failure: true cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }} shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }} save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build Nydus Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
sudo mv target/$RUST_TARGET/release/nydusd .
sudo mv target/$RUST_TARGET/release/nydus-image .
- name: Upload Nydus Binaries
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
nydus-image
nydusd
nydusd-build-macos:
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: build - name: build
run: | run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then if [[ "${{matrix.arch}}" == "amd64" ]]; then
@ -82,9 +148,8 @@ jobs:
else else
RUST_TARGET="aarch64-apple-darwin" RUST_TARGET="aarch64-apple-darwin"
fi fi
cargo install --version 0.2.4 cross cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET} rustup target add ${RUST_TARGET}
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
nydus-integration-test: nydus-integration-test:
@ -92,18 +157,18 @@ jobs:
needs: [contrib-build, nydus-build] needs: [contrib-build, nydus-build]
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v3 uses: actions/checkout@v4
- name: Docker Cache - name: Docker Cache
uses: jpribyl/action-docker-layer-caching@v0.1.0 uses: jpribyl/action-docker-layer-caching@v0.1.0
continue-on-error: true continue-on-error: true
- name: Download Nydus - name: Download Nydus
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydus-artifact name: nydus-artifact
path: | path: |
target/release target/release
- name: Download Nydusify - name: Download Nydusify
uses: actions/download-artifact@master uses: actions/download-artifact@v4
with: with:
name: nydusify-artifact name: nydusify-artifact
path: contrib/nydusify/cmd path: contrib/nydusify/cmd
@ -124,15 +189,31 @@ jobs:
sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/ sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/
done done
- name: Setup Golang - name: Setup Golang
uses: actions/setup-go@v4 uses: actions/setup-go@v5
with: with:
go-version-file: 'go.work' go-version-file: 'go.work'
cache-dependency-path: "**/*.sum" cache-dependency-path: "**/*.sum"
- name: Free Disk Space
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true
- name: Integration Test - name: Integration Test
run: | run: |
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest
sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest
sudo bash misc/prepare.sh
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name') export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
export NYDUS_STABLE_VERSION_EXPORT="${NYDUS_STABLE_VERSION//./_}" export NYDUS_STABLE_VERSION_EXPORT="${NYDUS_STABLE_VERSION//./_}"
@ -147,16 +228,16 @@ jobs:
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done
-curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2
+curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8
sudo -E make smoke-only
nydus-unit-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
-uses: actions/checkout@v3
+uses: actions/checkout@v4
- name: Rust Cache
-uses: Swatinem/rust-cache@v2.7.0
+uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
@ -169,16 +250,37 @@ jobs:
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
-sudo -E CARGO=${CARGO_BIN} make ut-nextest
+RUSTUP_BIN=$(which rustup)
+sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
contrib-unit-test-coverage:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Unit Test
run: |
make -e DOCKER=false contrib-test
- name: Upload contrib coverage file
uses: actions/upload-artifact@v4
with:
name: contrib-test-coverage-artifact
path: |
contrib/nydusify/coverage.txt
nydus-unit-test-coverage:
runs-on: ubuntu-latest
env:
CARGO_TERM_COLOR: always
steps:
-- uses: actions/checkout@v3
+- uses: actions/checkout@v4
- name: Rust Cache
-uses: Swatinem/rust-cache@v2.7.0
+uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
@ -191,20 +293,43 @@ jobs:
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
-sudo -E CARGO=${CARGO_BIN} make coverage-codecov
+RUSTUP_BIN=$(which rustup)
+sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
-- name: Upload coverage to Codecov
-uses: codecov/codecov-action@v3
+- name: Upload nydus coverage file
+uses: actions/upload-artifact@v4
with:
-files: codecov.json
-fail_ci_if_error: true
+name: nydus-test-coverage-artifact
+path: |
+codecov.json
upload-coverage-to-codecov:
runs-on: ubuntu-latest
needs: [contrib-unit-test-coverage, nydus-unit-test-coverage]
steps:
- uses: actions/checkout@v4
- name: Download nydus coverage file
uses: actions/download-artifact@v4
with:
name: nydus-test-coverage-artifact
- name: Download contrib coverage file
uses: actions/download-artifact@v4
with:
name: contrib-test-coverage-artifact
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
files: ./codecov.json,./coverage.txt
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
fail_ci_if_error: true
nydus-cargo-deny:
name: cargo-deny
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
-- uses: actions/checkout@v3
+- uses: actions/checkout@v4
-- uses: EmbarkStudios/cargo-deny-action@v1
+- uses: EmbarkStudios/cargo-deny-action@v2
performance-test:
runs-on: ubuntu-latest
@ -217,21 +342,45 @@ jobs:
- mode: zran
steps:
- name: Checkout
-uses: actions/checkout@v3
+uses: actions/checkout@v4
- name: Download Nydus
-uses: actions/download-artifact@master
+uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
-uses: actions/download-artifact@master
+uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
-sudo bash misc/performance/prepare.sh
+sudo bash misc/prepare.sh
- name: Performance Test
run: |
export PERFORMANCE_TEST_MODE=${{ matrix.mode }}
sudo -E make smoke-performance
takeover-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh takeover_test
- name: Takeover Test
run: |
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover

.github/workflows/stale.yaml (new file, +31 lines)

@ -0,0 +1,31 @@
name: Close stale issues and PRs
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
id: stale
with:
delete-branch: true
days-before-close: 7
days-before-stale: 60
days-before-pr-close: 7
days-before-pr-stale: 60
stale-issue-label: "stale"
exempt-issue-labels: bug,wip
exempt-pr-labels: bug,wip
exempt-all-milestones: true
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'

.gitignore (+5 lines)

@ -7,3 +7,8 @@
__pycache__
.DS_Store
go.work.sum
dist/
nydus-static/
.goreleaser.yml
metadata.db
tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a

Cargo.lock (generated, 2167 changed lines; diff suppressed because it is too large)

Cargo.toml

@ -6,9 +6,9 @@ description = "Nydus Image Service"
authors = ["The Nydus Developers"] authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause" license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/" homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service" repository = "https://github.com/dragonflyoss/nydus"
exclude = ["contrib/", "smoke/", "tests/"] exclude = ["contrib/", "smoke/", "tests/"]
edition = "2018" edition = "2021"
resolver = "2" resolver = "2"
build = "build.rs" build = "build.rs"
@ -35,7 +35,7 @@ path = "src/lib.rs"
anyhow = "1" anyhow = "1"
clap = { version = "4.0.18", features = ["derive", "cargo"] } clap = { version = "4.0.18", features = ["derive", "cargo"] }
flexi_logger = { version = "0.25", features = ["compress"] } flexi_logger = { version = "0.25", features = ["compress"] }
fuse-backend-rs = "^0.10.4" fuse-backend-rs = "^0.12.0"
hex = "0.4.3" hex = "0.4.3"
hyper = "0.14.11" hyper = "0.14.11"
hyperlocal = "0.8.0" hyperlocal = "0.8.0"
@ -46,37 +46,44 @@ log-panics = { version = "2.1.0", features = ["with-backtrace"] }
mio = { version = "0.8", features = ["os-poll", "os-ext"] }
nix = "0.24.0"
rlimit = "0.9.0"
-rusqlite = { version = "0.29.0", features = ["bundled"] }
+rusqlite = { version = "0.30.0", features = ["bundled"] }
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.51"
tar = "0.4.40"
-tokio = { version = "1.24", features = ["macros"] }
+tokio = { version = "1.35.1", features = ["macros"] }
# Build static linked openssl library
-openssl = { version = "0.10.55", features = ["vendored"] }
+openssl = { version = '0.10.72', features = ["vendored"] }
-# pin openssl-src to bring in fix for https://rustsec.org/advisories/RUSTSEC-2022-0032
-#openssl-src = { version = "111.22" }
-nydus-api = { version = "0.3.0", path = "api", features = ["error-backtrace", "handler"] }
-nydus-builder = { version = "0.1.0", path = "builder" }
-nydus-rafs = { version = "0.3.1", path = "rafs" }
-nydus-service = { version = "0.3.0", path = "service", features = ["block-device"] }
-nydus-storage = { version = "0.6.3", path = "storage", features = ["prefetch-rate-limit"] }
-nydus-utils = { version = "0.4.2", path = "utils" }
+nydus-api = { version = "0.4.0", path = "api", features = [
+"error-backtrace",
+"handler",
+] }
+nydus-builder = { version = "0.2.0", path = "builder" }
+nydus-rafs = { version = "0.4.0", path = "rafs" }
+nydus-service = { version = "0.4.0", path = "service", features = [
+"block-device",
+] }
+nydus-storage = { version = "0.7.0", path = "storage", features = [
+"prefetch-rate-limit",
+] }
+nydus-utils = { version = "0.5.0", path = "utils" }
-vhost = { version = "0.6.0", features = ["vhost-user-slave"], optional = true }
+vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
-vhost-user-backend = { version = "0.8.0", optional = true }
+vhost-user-backend = { version = "0.15.0", optional = true }
-virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
+virtio-bindings = { version = "0.1", features = [
+"virtio-v5_0_0",
+], optional = true }
-virtio-queue = { version = "0.7.0", optional = true }
+virtio-queue = { version = "0.12.0", optional = true }
-vm-memory = { version = "0.10.0", features = ["backend-mmap"], optional = true }
+vm-memory = { version = "0.14.1", features = ["backend-mmap","backend-atomic"], optional = true }
-vmm-sys-util = { version = "0.11.0", optional = true }
+vmm-sys-util = { version = "0.12.1", optional = true }
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dev-dependencies]
xattr = "1.0.1"
-vmm-sys-util = "0.11.0"
+vmm-sys-util = "0.12.1"
[features]
default = [
@ -86,6 +93,7 @@ default = [
"backend-s3",
"backend-http-proxy",
"backend-localdisk",
+"dedup",
]
virtiofs = [
"nydus-service/virtiofs",
@ -96,15 +104,27 @@ virtiofs = [
"vm-memory",
"vmm-sys-util",
]
-block-nbd = [
-"nydus-service/block-nbd"
-]
+block-nbd = ["nydus-service/block-nbd"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
-backend-localdisk = ["nydus-storage/backend-localdisk", "nydus-storage/backend-localdisk-gpt"]
+backend-localdisk = [
+"nydus-storage/backend-localdisk",
+"nydus-storage/backend-localdisk-gpt",
+]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]
+dedup = ["nydus-storage/dedup"]
[workspace]
-members = ["api", "builder", "clib", "rafs", "storage", "service", "utils"]
+members = [
+"api",
+"builder",
+"clib",
+"rafs",
+"storage",
+"service",
+"upgrade",
+"utils",
+]

MAINTAINERS.md (new file, +15 lines)

@ -0,0 +1,15 @@
# Maintainers
<!-- markdownlint-disable -->
| GitHub ID | Name | Email | Company |
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
<!-- markdownlint-restore -->

Makefile

@ -18,7 +18,7 @@ CARGO ?= $(shell which cargo)
RUSTUP ?= $(shell which rustup)
CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry
SUDO = $(shell which sudo)
CARGO_COMMON ?=
EXCLUDE_PACKAGES =
UNAME_M := $(shell uname -m)
@ -44,7 +44,6 @@ endif
endif
RUST_TARGET_STATIC ?= $(STATIC_TARGET)
-CTR-REMOTE_PATH = contrib/ctr-remote
NYDUSIFY_PATH = contrib/nydusify
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
@ -52,12 +51,6 @@ current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
env_go_path := $(shell go env GOPATH 2> /dev/null)
go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
-# Set the env DIND_CACHE_DIR to specify a cache directory for
-# docker-in-docker container, used to cache data for docker pull,
-# then mitigate the impact of docker hub rate limit, for example:
-# env DIND_CACHE_DIR=/path/to/host/var-lib-docker make docker-nydusify-smoke
-dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
# Functions
# Func: build golang target in docker
@ -67,7 +60,7 @@ dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
define build_golang
echo "Building target $@ by invoking: $(2)"
if [ $(DOCKER) = "true" ]; then \
-docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.20 $(2) ;\
+docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.21 $(2) ;\
else \
$(2) -C $(1); \
fi
@ -115,7 +108,11 @@ ut: .release_version
# you need install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
ut-nextest: .release_version
-TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --test-threads 8
+TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
+# install miri first from https://github.com/rust-lang/miri/
+miri-ut-nextest: .release_version
+MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# install test dependencies
pre-coverage:
@ -128,8 +125,8 @@ coverage: pre-coverage
# write unit teset coverage to codecov.json, used for Github CI
coverage-codecov:
-TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
+TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
smoke-only:
make -C smoke test
@ -139,53 +136,23 @@ smoke-performance:
smoke-benchmark:
make -C smoke test-benchmark
+smoke-takeover:
+make -C smoke test-takeover
smoke: release smoke-only
-docker-nydus-smoke:
-docker build -t nydus-smoke --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/nydus-smoke
-docker run --rm --privileged ${CARGO_BUILD_GEARS} \
-  -e TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) \
-  -v ~/.cargo:/root/.cargo \
-  -v $(TEST_WORKDIR_PREFIX) \
-  -v ${current_dir}:/nydus-rs \
-  nydus-smoke
-# TODO: Nydusify smoke has to be time consuming for a while since it relies on musl nydusd and nydus-image.
-# So musl compilation must be involved.
-# And docker-in-docker deployment involves image building?
-docker-nydusify-smoke: docker-static
-$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
-docker build -t nydusify-smoke misc/nydusify-smoke
-docker run --rm --privileged \
-  -e BACKEND_TYPE=$(BACKEND_TYPE) \
-  -e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-  -v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestSmoke
-docker-nydusify-image-test: docker-static
-$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
-docker build -t nydusify-smoke misc/nydusify-smoke
-docker run --rm --privileged \
-  -e BACKEND_TYPE=$(BACKEND_TYPE) \
-  -e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-  -v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestDockerHubImage
-# Run integration smoke test in docker-in-docker container. It requires some special settings,
-docker-smoke: docker-nydus-smoke docker-nydusify-smoke
-contrib-build: nydusify ctr-remote nydus-overlayfs
-contrib-release: nydusify-release ctr-remote-release \
-  nydus-overlayfs-release
-contrib-test: nydusify-test ctr-remote-test \
-  nydus-overlayfs-test
-contrib-clean: nydusify-clean ctr-remote-clean \
-  nydus-overlayfs-clean
+contrib-build: nydusify nydus-overlayfs
+contrib-release: nydusify-release nydus-overlayfs-release
+contrib-test: nydusify-test nydus-overlayfs-test
+contrib-lint: nydusify-lint nydus-overlayfs-lint
+contrib-clean: nydusify-clean nydus-overlayfs-clean
contrib-install:
@sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX)
-@sudo install -m 755 contrib/ctr-remote/bin/ctr-remote $(INSTALL_DIR_PREFIX)/ctr-remote
@sudo install -m 755 contrib/nydus-overlayfs/bin/nydus-overlayfs $(INSTALL_DIR_PREFIX)/nydus-overlayfs
@sudo install -m 755 contrib/nydusify/cmd/nydusify $(INSTALL_DIR_PREFIX)/nydusify
@ -201,17 +168,8 @@ nydusify-test:
nydusify-clean:
$(call build_golang,${NYDUSIFY_PATH},make clean)
-ctr-remote:
-$(call build_golang,${CTR-REMOTE_PATH},make)
-ctr-remote-release:
-$(call build_golang,${CTR-REMOTE_PATH},make release)
-ctr-remote-test:
-$(call build_golang,${CTR-REMOTE_PATH},make test)
-ctr-remote-clean:
-$(call build_golang,${CTR-REMOTE_PATH},make clean)
+nydusify-lint:
+$(call build_golang,${NYDUSIFY_PATH},make lint)
nydus-overlayfs:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make)
@ -225,6 +183,9 @@ nydus-overlayfs-test:
nydus-overlayfs-clean:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make clean)
+nydus-overlayfs-lint:
+$(call build_golang,${NYDUS-OVERLAYFS_PATH},make lint)
docker-static:
docker build -t nydus-rs-static --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/musl-static
docker run --rm ${CARGO_BUILD_GEARS} -e RUST_TARGET=${RUST_TARGET_STATIC} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static

README.md

@ -11,7 +11,8 @@
[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/nydus?style=flat)](https://github.com/dragonflyoss/nydus/releases)
[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)
[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
-[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/image-service?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/image-service)
+[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus)
+[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
[![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule)
[![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule)
@ -53,7 +54,6 @@ The following Benchmarking results demonstrate that Nydus images significantly o
| [nydus-image](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Convert a single layer of OCI format container image into a nydus format container image generating meta part file and data part file respectively |
| [nydusify](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusify.md) | It pulls OCI image down and unpack it, invokes `nydus-image create` to convert image and then pushes the converted image back to registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`), query daemon's working status/metrics and configure it |
-| [ctr-remote](https://github.com/dragonflyoss/nydus/tree/master/contrib/ctr-remote) | An enhanced `containerd` CLI tool enable nydus support with `containerd` ctr |
| [nydus-docker-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Works as a `docker` remote graph driver to control how images and containers are stored and managed |
| [nydus-overlayfs](https://github.com/dragonflyoss/nydus/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount with tweaking mount options a bit. So nydus prerequisites can be passed to vm-based runtime |
| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve local directory as a blob backend for nydusd |
@ -64,7 +64,7 @@ The following Benchmarking results demonstrate that Nydus images significantly o
| ------------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ |
| Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, Github GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage service | ✅ |
| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support acceleration image conversion based on kinds of accelerator like Nydus and eStargz etc | ✅ |
-| Distribution | [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
+| Distribution | [Dragonfly](https://github.com/dragonflyoss/dragonfly) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from Dockerfile | ✅ |
| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run (requires nydus snapshotter) Nydus image | ✅ |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
@ -155,6 +155,8 @@ Using the key features of nydus as native in your project without preparing and
Please visit [**Wiki**](https://github.com/dragonflyoss/nydus/wiki), or [**docs**](./docs)
+There is also a very nice [Devin](https://devin.ai/) generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus).
## Community
Nydus aims to form a **vendor-neutral opensource** image distribution solution to all communities.
@ -170,5 +172,3 @@ Feel free to reach us via Slack or Dingtalk.
- **Dingtalk:** [34971767](https://qr.dingtalk.com/action/joingroup?code=v1,k1,ioWGzuDZEIO10Bf+/ohz4RcQqAkW0MtOwoG1nbbMxQg=&_dt_no_comment=1&origin=11)
<img src="./misc/dingtalk.jpg" width="250" height="300"/>
-- **Technical Meeting:** Every Wednesday at 06:00 UTC (Beijing, Shanghai 14:00), please see our [HackMD](https://hackmd.io/@Nydus/Bk8u2X0p9) page for more information.

api/Cargo.toml

@ -1,12 +1,12 @@
[package]
name = "nydus-api"
-version = "0.3.1"
+version = "0.4.0"
description = "APIs for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
-repository = "https://github.com/dragonflyoss/image-service"
+repository = "https://github.com/dragonflyoss/nydus"
-edition = "2018"
+edition = "2021"
[dependencies]
libc = "0.2"
@ -24,7 +24,7 @@ serde = { version = "1.0.110", features = ["rc", "serde_derive"] }
url = { version = "2.1.1", optional = true }
[dev-dependencies]
-vmm-sys-util = { version = "0.11" }
+vmm-sys-util = { version = "0.12.1" }
[features]
error-backtrace = ["backtrace"]

api/src/config.rs

@ -25,10 +25,15 @@ pub struct ConfigV2 {
pub id: String,
/// Configuration information for storage backend.
pub backend: Option<BackendConfigV2>,
+/// Configuration for external storage backends, order insensitivity.
+#[serde(default)]
+pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration information for local cache system.
pub cache: Option<CacheConfigV2>,
/// Configuration information for RAFS filesystem.
pub rafs: Option<RafsConfigV2>,
+/// Overlay configuration information for the instance.
+pub overlay: Option<OverlayConfig>,
/// Internal runtime configuration.
#[serde(skip)]
pub internal: ConfigV2Internal,
@ -40,8 +45,10 @@ impl Default for ConfigV2 {
version: 2,
id: String::new(),
backend: None,
+external_backends: Vec::new(),
cache: None,
rafs: None,
+overlay: None,
internal: ConfigV2Internal::default(),
}
}
@ -54,8 +61,10 @@ impl ConfigV2 {
version: 2,
id: id.to_string(),
backend: None,
+external_backends: Vec::new(),
cache: None,
rafs: None,
+overlay: None,
internal: ConfigV2Internal::default(),
}
}
@ -510,9 +519,6 @@ pub struct OssConfig {
/// Enable HTTP proxy for the read request.
#[serde(default)]
pub proxy: ProxyConfig,
-/// Enable mirrors for the read request.
-#[serde(default)]
-pub mirrors: Vec<MirrorConfig>,
}
/// S3 configuration information to access blobs.
@ -554,9 +560,6 @@ pub struct S3Config {
/// Enable HTTP proxy for the read request.
#[serde(default)]
pub proxy: ProxyConfig,
-/// Enable mirrors for the read request.
-#[serde(default)]
-pub mirrors: Vec<MirrorConfig>,
}
/// Http proxy configuration information to access blobs.
@ -583,9 +586,6 @@ pub struct HttpProxyConfig {
/// Enable HTTP proxy for the read request.
#[serde(default)]
pub proxy: ProxyConfig,
-/// Enable mirrors for the read request.
-#[serde(default)]
-pub mirrors: Vec<MirrorConfig>,
}
/// Container registry configuration information to access blobs.
@ -626,9 +626,6 @@ pub struct RegistryConfig {
/// Enable HTTP proxy for the read request.
#[serde(default)]
pub proxy: ProxyConfig,
-/// Enable mirrors for the read request.
-#[serde(default)]
-pub mirrors: Vec<MirrorConfig>,
}
/// Configuration information for blob cache manager.
@ -903,6 +900,9 @@ pub struct ProxyConfig {
/// Replace URL to http to request source registry with proxy, and allow fallback to https if the proxy is unhealthy.
#[serde(default)]
pub use_http: bool,
+/// Elapsed time to pause proxy health check when the request is inactive, in seconds.
+#[serde(default = "default_check_pause_elapsed")]
+pub check_pause_elapsed: u64,
}
impl Default for ProxyConfig {
@ -913,37 +913,7 @@ impl Default for ProxyConfig {
fallback: true,
check_interval: 5,
use_http: false,
+check_pause_elapsed: 300,
}
}
}
-/// Configuration for registry mirror.
-#[derive(Clone, Debug, Deserialize, Eq, PartialEq, Serialize)]
-pub struct MirrorConfig {
-/// Mirror server URL, for example http://127.0.0.1:65001.
-pub host: String,
-/// Ping URL to check mirror server health.
-#[serde(default)]
-pub ping_url: String,
-/// HTTP request headers to be passed to mirror server.
-#[serde(default)]
-pub headers: HashMap<String, String>,
-/// Interval for mirror health checking, in seconds.
-#[serde(default = "default_check_interval")]
-pub health_check_interval: u64,
-/// Maximum number of failures before marking a mirror as unusable.
-#[serde(default = "default_failure_limit")]
-pub failure_limit: u8,
-}
-impl Default for MirrorConfig {
-fn default() -> Self {
-Self {
-host: String::new(),
-headers: HashMap::new(),
-health_check_interval: 5,
-failure_limit: 5,
-ping_url: String::new(),
-}
-}
-}
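A minimal, self-contained sketch (not the crate's own code) of the serde behavior introduced above: with `#[serde(default = "default_check_pause_elapsed")]`, a configuration that omits the field deserializes to 300 seconds. The struct below is a trimmed local mirror of `ProxyConfig`.

use serde::Deserialize;

// Mirrors the default function added in this change.
fn default_check_pause_elapsed() -> u64 {
    300
}

// Trimmed stand-in for ProxyConfig; only two fields kept for illustration.
#[derive(Debug, Deserialize)]
struct ProxyConfig {
    #[serde(default)]
    use_http: bool,
    #[serde(default = "default_check_pause_elapsed")]
    check_pause_elapsed: u64,
}

fn main() {
    // An empty JSON object: every field falls back to its default.
    let cfg: ProxyConfig = serde_json::from_str("{}").unwrap();
    assert_eq!(cfg.check_pause_elapsed, 300);
    assert!(!cfg.use_http);
}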
@ -959,6 +929,9 @@ pub struct BlobCacheEntryConfigV2 {
/// Configuration information for storage backend.
#[serde(default)]
pub backend: BackendConfigV2,
+/// Configuration for external storage backends, order insensitivity.
+#[serde(default)]
+pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration information for local cache system.
#[serde(default)]
pub cache: CacheConfigV2,
@ -1022,8 +995,10 @@ impl From<&BlobCacheEntryConfigV2> for ConfigV2 {
version: c.version,
id: c.id.clone(),
backend: Some(c.backend.clone()),
+external_backends: c.external_backends.clone(),
cache: Some(c.cache.clone()),
rafs: None,
+overlay: None,
internal: ConfigV2Internal::default(),
}
}
@ -1070,7 +1045,7 @@ pub const BLOB_CACHE_TYPE_META_BLOB: &str = "bootstrap";
pub const BLOB_CACHE_TYPE_DATA_BLOB: &str = "datablob"; pub const BLOB_CACHE_TYPE_DATA_BLOB: &str = "datablob";
/// Configuration information for a cached blob. /// Configuration information for a cached blob.
#[derive(Debug, Deserialize, Serialize)] #[derive(Debug, Deserialize, Serialize, Clone)]
pub struct BlobCacheEntry { pub struct BlobCacheEntry {
/// Type of blob object, bootstrap or data blob. /// Type of blob object, bootstrap or data blob.
#[serde(rename = "type")] #[serde(rename = "type")]
@ -1186,8 +1161,8 @@ fn default_check_interval() -> u64 {
5
}
-fn default_failure_limit() -> u8 {
-5
+fn default_check_pause_elapsed() -> u64 {
+300
}
fn default_work_dir() -> String {
@ -1285,13 +1260,26 @@ struct CacheConfig {
#[serde(default, rename = "config")]
pub cache_config: Value,
/// Whether to validate data read from the cache.
-#[serde(skip_serializing, skip_deserializing)]
+#[serde(default, rename = "validate")]
pub cache_validate: bool,
/// Configuration for blob data prefetching.
#[serde(skip_serializing, skip_deserializing)]
pub prefetch_config: BlobPrefetchConfig,
}
/// Additional configuration information for external backend, its items
/// will be merged to the configuration from image.
#[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)]
pub struct ExternalBackendConfig {
/// External backend identifier to merge.
pub patch: HashMap<String, String>,
/// External backend type.
#[serde(rename = "type")]
pub kind: String,
/// External backend config items to merge.
pub config: HashMap<String, String>,
}
impl TryFrom<&CacheConfig> for CacheConfigV2 {
type Error = std::io::Error;
@ -1333,6 +1321,9 @@ struct FactoryConfig {
pub id: String,
/// Configuration for storage backend.
pub backend: BackendConfig,
+/// Configuration for external storage backends, order insensitivity.
+#[serde(default)]
+pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration for blob cache manager.
#[serde(default)]
pub cache: CacheConfig,
@ -1393,8 +1384,10 @@ impl TryFrom<RafsConfig> for ConfigV2 {
version: 2,
id: v.device.id,
backend: Some(backend),
+external_backends: v.device.external_backends,
cache: Some(cache),
rafs: Some(rafs),
+overlay: None,
internal: ConfigV2Internal::default(),
})
}
@ -1482,6 +1475,9 @@ pub(crate) struct BlobCacheEntryConfig {
///
/// Possible value: `LocalFsConfig`, `RegistryConfig`, `OssConfig`, `LocalDiskConfig`.
backend_config: Value,
+/// Configuration for external storage backends, order insensitivity.
+#[serde(default)]
+external_backends: Vec<ExternalBackendConfig>,
/// Type of blob cache, corresponding to `FactoryConfig::CacheConfig::cache_type`.
///
/// Possible value: "fscache", "filecache".
@ -1517,12 +1513,22 @@ impl TryFrom<&BlobCacheEntryConfig> for BlobCacheEntryConfigV2 {
version: 2,
id: v.id.clone(),
backend: (&backend_config).try_into()?,
+external_backends: v.external_backends.clone(),
cache: (&cache_config).try_into()?,
metadata_path: v.metadata_path.clone(),
})
}
}
/// Configuration information for Overlay filesystem.
/// OverlayConfig is used to configure the writable layer(upper layer),
/// The filesystem will be writable when OverlayConfig is set.
#[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)]
pub struct OverlayConfig {
pub upper_dir: String,
pub work_dir: String,
}
#[cfg(test)]
mod tests {
use super::*;
@ -1829,11 +1835,6 @@ mod tests {
fallback = true
check_interval = 10
use_http = true
-[[backend.oss.mirrors]]
-host = "http://127.0.0.1:65001"
-ping_url = "http://127.0.0.1:65001/ping"
-health_check_interval = 10
-failure_limit = 10
"#;
let config: ConfigV2 = toml::from_str(content).unwrap();
assert_eq!(config.version, 2);
@ -1860,14 +1861,6 @@ mod tests {
assert_eq!(oss.proxy.check_interval, 10);
assert!(oss.proxy.fallback);
assert!(oss.proxy.use_http);
-assert_eq!(oss.mirrors.len(), 1);
-let mirror = &oss.mirrors[0];
-assert_eq!(mirror.host, "http://127.0.0.1:65001");
-assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
-assert!(mirror.headers.is_empty());
-assert_eq!(mirror.health_check_interval, 10);
-assert_eq!(mirror.failure_limit, 10);
}
#[test]
@ -1893,11 +1886,6 @@ mod tests {
fallback = true
check_interval = 10
use_http = true
-[[backend.registry.mirrors]]
-host = "http://127.0.0.1:65001"
-ping_url = "http://127.0.0.1:65001/ping"
-health_check_interval = 10
-failure_limit = 10
"#;
let config: ConfigV2 = toml::from_str(content).unwrap();
assert_eq!(config.version, 2);
@ -1926,14 +1914,6 @@ mod tests {
assert_eq!(registry.proxy.check_interval, 10);
assert!(registry.proxy.fallback);
assert!(registry.proxy.use_http);
-assert_eq!(registry.mirrors.len(), 1);
-let mirror = &registry.mirrors[0];
-assert_eq!(mirror.host, "http://127.0.0.1:65001");
-assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
-assert!(mirror.headers.is_empty());
-assert_eq!(mirror.health_check_interval, 10);
-assert_eq!(mirror.failure_limit, 10);
}
#[test]
@ -2100,7 +2080,7 @@ mod tests {
"type": "blobcache",
"compressed": true,
"config": {
-"work_dir": "/var/lib/containerd-nydus/cache",
+"work_dir": "/var/lib/containerd/io.containerd.snapshotter.v1.nydus/cache",
"disable_indexed_map": false
}
}
@ -2284,7 +2264,7 @@
}
}
#[test]
-fn test_get_confg() {
+fn test_get_config() {
get_config("localdisk");
get_config("localfs");
get_config("oss");
@ -2340,15 +2320,6 @@
assert!(res);
}
-#[test]
-fn test_default_mirror_config() {
-let cfg = MirrorConfig::default();
-assert_eq!(cfg.host, "");
-assert_eq!(cfg.health_check_interval, 5);
-assert_eq!(cfg.failure_limit, 5);
-assert_eq!(cfg.ping_url, "");
-}
#[test]
fn test_config_v2_from_file() {
let content = r#"version=2
@ -2558,13 +2529,12 @@ mod tests {
#[test]
fn test_default_value() {
assert!(default_true());
-assert_eq!(default_failure_limit(), 5);
assert_eq!(default_prefetch_batch_size(), 1024 * 1024);
assert_eq!(default_prefetch_threads_count(), 8);
}
#[test]
-fn test_bckend_config_try_from() {
+fn test_backend_config_try_from() {
let config = BackendConfig {
backend_type: "localdisk".to_string(),
backend_config: serde_json::to_value(LocalDiskConfig::default()).unwrap(),

api/src/error.rs

@ -11,7 +11,7 @@ pub fn make_error(
_file: &str,
_line: u32,
) -> std::io::Error {
-#[cfg(all(feature = "error-backtrace"))]
+#[cfg(feature = "error-backtrace")]
{
if let Ok(val) = std::env::var("RUST_BACKTRACE") {
if val.trim() != "0" {
@ -86,6 +88,8 @@ define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
#[cfg(test)]
mod tests {
+use std::io::{Error, ErrorKind};
fn check_size(size: usize) -> std::io::Result<()> {
if size > 0x1000 {
return Err(einval!());
@ -101,4 +103,150 @@ mod tests {
std::io::Error::from_raw_os_error(libc::EINVAL).kind()
);
}
#[test]
fn test_make_error() {
let original_error = Error::new(ErrorKind::Other, "test error");
let debug_info = "debug information";
let file = "test.rs";
let line = 42;
let result_error = super::make_error(original_error, debug_info, file, line);
assert_eq!(result_error.kind(), ErrorKind::Other);
}
#[test]
fn test_libc_error_macros() {
// Test einval macro
let err = einval!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro
let err = enoent!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test ebadf macro
let err = ebadf!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
// Test eacces macro
let err = eacces!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
// Test enotdir macro
let err = enotdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
// Test eisdir macro
let err = eisdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
// Test ealready macro
let err = ealready!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
// Test enosys macro
let err = enosys!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
// Test epipe macro
let err = epipe!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
// Test eio macro
let err = eio!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_libc_error_macros_with_context() {
let test_msg = "test context";
// Test einval macro with context
let err = einval!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro with context
let err = enoent!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test eio macro with context
let err = eio!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_custom_error_macros() {
// Test last_error macro
let err = last_error!();
// We can't predict the exact error, but we can check it's a valid error
assert!(!err.to_string().is_empty());
// Test eother macro
let err = eother!();
assert_eq!(err.kind(), ErrorKind::Other);
// Test eother macro with context
let err = eother!("custom context");
assert_eq!(err.kind(), ErrorKind::Other);
}
fn test_bail_einval_function() -> std::io::Result<()> {
bail_einval!("test error message");
}
fn test_bail_eio_function() -> std::io::Result<()> {
bail_eio!("test error message");
}
#[test]
fn test_bail_macros() {
// Test bail_einval macro
let result = test_bail_einval_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio macro
let result = test_bail_eio_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
}
#[test]
fn test_bail_macros_with_formatting() {
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
if code == 1 {
bail_einval!("error code: {}", code);
} else if code == 2 {
bail_eio!("I/O error with code: {}", code);
}
Ok(())
}
// Test bail_einval with formatting
let result = test_bail_with_format(1);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio with formatting
let result = test_bail_with_format(2);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test success case
let result = test_bail_with_format(3);
assert!(result.is_ok());
}
}


@ -132,7 +132,7 @@ pub enum DaemonErrorKind {
/// Unexpected event type.
UnexpectedEvent(String),
/// Can't upgrade the daemon.
-UpgradeManager,
+UpgradeManager(String),
/// Unsupported requests.
Unsupported,
}


@ -140,7 +140,7 @@ impl EndpointHandler for MetricsFsFilesHandler {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let latest_read_files = extract_query_part(req, "latest")
-.map_or(false, |b| b.parse::<bool>().unwrap_or(false));
+.is_some_and(|b| b.parse::<bool>().unwrap_or(false));
let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files));
Ok(convert_to_response(r, HttpError::FsFilesMetrics))
}
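The two spellings are behaviorally identical; `is_some_and` just folds the `map_or(false, ..)` pattern into a single call and is the form current clippy lints suggest. A standalone sketch, with a stand-in value for the query parameter:

fn main() {
    let latest: Option<&str> = Some("true");
    // Old form: default to false, otherwise parse the string as a bool.
    let via_map_or = latest.map_or(false, |b| b.parse::<bool>().unwrap_or(false));
    // New form: true only if the value is present and parses to true.
    let via_is_some_and = latest.is_some_and(|b| b.parse::<bool>().unwrap_or(false));
    assert_eq!(via_map_or, via_is_some_and);
}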


@ -43,9 +43,8 @@ pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
// right now, below way makes it easy to obtain query parts from uri.
let http_prefix = format!("http:{}", req.uri().get_abs_path());
let url = Url::parse(&http_prefix)
-.map_err(|e| {
+.inspect_err(|e| {
error!("api: can't parse request {:?}", e);
-e
})
.ok()?;
@ -326,35 +325,30 @@ mod tests {
#[test]
fn test_http_api_routes_v1() {
-assert!(HTTP_ROUTES.routes.get("/api/v1/daemon").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/events").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/backend").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/start").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/exit").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/fuse/sendfd").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/fuse/takeover").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/mount").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/metrics").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/files").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/pattern").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/backend").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/blobcache").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/inflight").is_some());
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/events"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/backend"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/start"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/exit"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/fuse/sendfd"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/fuse/takeover"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/mount"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/files"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/pattern"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/backend"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/blobcache"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/inflight"));
}
#[test]
fn test_http_api_routes_v2() {
-assert!(HTTP_ROUTES.routes.get("/api/v2/daemon").is_some());
-assert!(HTTP_ROUTES.routes.get("/api/v2/blobs").is_some());
+assert!(HTTP_ROUTES.routes.contains_key("/api/v2/daemon"));
+assert!(HTTP_ROUTES.routes.contains_key("/api/v2/blobs"));
}
#[test]

builder/Cargo.toml

@ -1,18 +1,18 @@
[package]
name = "nydus-builder"
-version = "0.1.0"
+version = "0.2.0"
description = "Nydus Image Builder"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
-repository = "https://github.com/dragonflyoss/image-service"
+repository = "https://github.com/dragonflyoss/nydus"
-edition = "2018"
+edition = "2021"
[dependencies]
anyhow = "1.0.35"
base64 = "0.21"
hex = "0.4.3"
-indexmap = "1"
+indexmap = "2"
libc = "0.2"
log = "0.4"
nix = "0.24"
@ -20,13 +20,15 @@ serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = "0.10.2"
tar = "0.4.40"
-vmm-sys-util = "0.11.0"
+vmm-sys-util = "0.12.1"
xattr = "1.0.1"
+parse-size = "1.1.0"
-nydus-api = { version = "0.3", path = "../api" }
-nydus-rafs = { version = "0.3", path = "../rafs" }
-nydus-storage = { version = "0.6", path = "../storage", features = ["backend-localfs"] }
-nydus-utils = { version = "0.4", path = "../utils" }
+nydus-api = { version = "0.4.0", path = "../api" }
+nydus-rafs = { version = "0.4.0", path = "../rafs" }
+nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] }
+nydus-utils = { version = "0.5.0", path = "../utils" }
+gix-attributes = "0.25.0"
[package.metadata.docs.rs]
all-features = true

builder/src/attributes.rs (new file, +189 lines)

@ -0,0 +1,189 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::{fs, path};
use anyhow::Result;
use gix_attributes::parse;
use gix_attributes::parse::Kind;
const KEY_TYPE: &str = "type";
const KEY_CRCS: &str = "crcs";
const VAL_EXTERNAL: &str = "external";
pub struct Parser {}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Item {
pub pattern: PathBuf,
pub attributes: HashMap<String, String>,
}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Attributes {
pub items: HashMap<PathBuf, HashMap<String, String>>,
pub crcs: HashMap<PathBuf, Vec<u32>>,
}
impl Attributes {
/// Parse nydus attributes from a file.
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
let content = fs::read(path)?;
let _items = parse(&content);
let mut items = HashMap::new();
let mut crcs = HashMap::new();
for _item in _items {
let _item = _item?;
if let Kind::Pattern(pattern) = _item.0 {
let mut path = PathBuf::from(pattern.text.to_string());
if !path.is_absolute() {
path = path::Path::new("/").join(path);
}
let mut current_path = path.clone();
let mut attributes = HashMap::new();
let mut _type = String::new();
let mut _crcs = vec![];
for line in _item.1 {
let line = line?;
let name = line.name.as_str();
let state = line.state.as_bstr().unwrap_or_default();
if name == KEY_TYPE {
_type = state.to_string();
}
if name == KEY_CRCS {
_crcs = state
.to_string()
.split(',')
.map(|s| {
let trimmed = s.trim();
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
stripped
} else {
trimmed
};
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
})
.collect::<Result<Vec<u32>, _>>()?;
}
attributes.insert(name.to_string(), state.to_string());
}
crcs.insert(path.clone(), _crcs);
items.insert(path, attributes);
// process parent directory
while let Some(parent) = current_path.parent() {
if parent == Path::new("/") {
break;
}
let mut attributes = HashMap::new();
if !items.contains_key(parent) {
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
items.insert(parent.to_path_buf(), attributes);
}
current_path = parent.to_path_buf();
}
}
}
Ok(Attributes { items, crcs })
}
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
}
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
if let Some(attributes) = self.items.get(path.as_ref()) {
return self.check_external(attributes);
}
false
}
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
self.items
.iter()
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
}
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
if let Some(attributes) = self.items.get(path.as_ref()) {
return attributes.get(key.as_ref()).map(|s| s.to_string());
}
None
}
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
self.items.get(path.as_ref())
}
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
self.crcs.get(path.as_ref())
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashMap, fs, path::PathBuf};
use super::{Attributes, Item};
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_attribute_parse() {
let file = TempFile::new().unwrap();
fs::write(
file.as_path(),
"/foo type=external crcs=0x1234,0x5678
/bar type=external crcs=0x1234,0x5678
/models/foo/bar type=external",
)
.unwrap();
let attributes = Attributes::from(file.as_path()).unwrap();
let _attributes_base: HashMap<String, String> =
[("type".to_string(), "external".to_string())]
.iter()
.cloned()
.collect();
let _attributes: HashMap<String, String> = [
("type".to_string(), "external".to_string()),
("crcs".to_string(), "0x1234,0x5678".to_string()),
]
.iter()
.cloned()
.collect();
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
Item {
pattern: PathBuf::from("/foo"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/bar"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/models"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo/bar"),
attributes: _attributes_base.clone(),
},
]
.into_iter()
.map(|item| (item.pattern, item.attributes))
.collect();
assert_eq!(attributes.items, items_map);
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
}
}
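A hypothetical usage sketch of the Attributes API above; the pattern and key names are illustrative, and the builder later reads keys such as blob_index, blob_id and chunk_size for files marked type=external:

use std::fs;
use vmm_sys_util::tempfile::TempFile;

fn demo() -> anyhow::Result<()> {
    let file = TempFile::new().unwrap();
    // gitattributes-style syntax: a pattern followed by key=value pairs.
    fs::write(file.as_path(), "/model.bin type=external crcs=0xdeadbeef")?;
    let attrs = Attributes::from(file.as_path())?;
    assert!(attrs.is_external("/model.bin"));
    assert_eq!(attrs.get_value("/model.bin", "type").as_deref(), Some("external"));
    assert_eq!(attrs.get_crcs("/model.bin"), Some(&vec![0xdead_beef]));
    Ok(())
}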

View File

@ -0,0 +1,283 @@
// Copyright (C) 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate Chunkdict RAFS bootstrap.
//! -------------------------------------------------------------------------------------------------
//! Bug 1: Inconsistent chunk size leading to a blob smaller than 4K (v6_block_size)
//! Description: Chunk sizes are not uniform, so a blob composed of a group of such
//! chunks may end up smaller than 4K (v6_block_size) and fail the size check.
//! -------------------------------------------------------------------------------------------------
//! Bug 2: Incorrect chunk count due to premature check logic
//! Description: The chunk count is computed as size / chunk_size, but this happens before
//! the check that accounts for chunk statistics, so the resulting count can be inaccurate.
use super::core::node::{ChunkSource, NodeInfo};
use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree};
use crate::core::node::Node;
use crate::NodeChunk;
use crate::OsString;
use anyhow::{Ok, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress::Algorithm;
use nydus_utils::digest::RafsDigest;
use std::mem::size_of;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ChunkdictChunkInfo {
pub image_reference: String,
pub version: String,
pub chunk_blob_id: String,
pub chunk_digest: String,
pub chunk_crc32: u32,
pub chunk_compressed_size: u32,
pub chunk_uncompressed_size: u32,
pub chunk_compressed_offset: u64,
pub chunk_uncompressed_offset: u64,
}
pub struct ChunkdictBlobInfo {
pub blob_id: String,
pub blob_compressed_size: u64,
pub blob_uncompressed_size: u64,
pub blob_compressor: String,
pub blob_meta_ci_compressed_size: u64,
pub blob_meta_ci_uncompressed_size: u64,
pub blob_meta_ci_offset: u64,
}
/// Struct to generate chunkdict RAFS bootstrap.
pub struct Generator {}
impl Generator {
// Generate chunkdict RAFS bootstrap.
pub fn generate(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
chunkdict_chunks_origin: Vec<ChunkdictChunkInfo>,
chunkdict_blobs: Vec<ChunkdictBlobInfo>,
) -> Result<BuildOutput> {
// Validate chunks and remove those whose owning blob's total size is smaller than a block.
let mut chunkdict_chunks = chunkdict_chunks_origin.to_vec();
Self::validate_and_remove_chunks(ctx, &mut chunkdict_chunks);
// Build root tree.
let mut tree = Self::build_root_tree(ctx)?;
// Build child tree.
let child = Self::build_child_tree(ctx, blob_mgr, &chunkdict_chunks, &chunkdict_blobs)?;
let result = vec![child];
tree.children = result;
Self::validate_tree(&tree)?;
// Build bootstrap.
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
/// Validate tree.
fn validate_tree(tree: &Tree) -> Result<()> {
let pre = &mut |t: &Tree| -> Result<()> {
let node = t.borrow_mut_node();
debug!("chunkdict tree: ");
debug!("inode: {}", node);
for chunk in &node.chunks {
debug!("\t chunk: {}", chunk);
}
Ok(())
};
tree.walk_dfs_pre(pre)?;
debug!("chunkdict tree is valid.");
Ok(())
}
/// Validates and removes chunks with a total uncompressed size smaller than the block size limit.
fn validate_and_remove_chunks(ctx: &mut BuildContext, chunkdict: &mut Vec<ChunkdictChunkInfo>) {
let mut chunk_sizes = std::collections::HashMap::new();
// Accumulate the uncompressed size for each chunk_blob_id.
for chunk in chunkdict.iter() {
*chunk_sizes.entry(chunk.chunk_blob_id.clone()).or_insert(0) +=
chunk.chunk_uncompressed_size as u64;
}
// Find all chunk_blob_ids whose total uncompressed size is smaller than v6_block_size.
let small_chunks: Vec<String> = chunk_sizes
.into_iter()
.filter(|&(_, size)| size < ctx.v6_block_size())
.inspect(|(id, _)| {
eprintln!(
"Warning: Blob with id '{}' is smaller than {} bytes.",
id,
ctx.v6_block_size()
)
})
.map(|(id, _)| id)
.collect();
// Retain only chunks whose chunk_blob_id has a total uncompressed size of at least v6_block_size.
chunkdict.retain(|chunk| !small_chunks.contains(&chunk.chunk_blob_id));
}
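The pass above can be read in isolation: accumulate per-blob totals, then retain only chunks whose blob clears the block-size threshold. A standalone sketch with a hypothetical 4096-byte block size:

use std::collections::HashMap;

// Group chunk sizes by blob id, then drop every chunk whose blob's
// total uncompressed size is below the block size.
fn drop_small_blobs(chunks: &mut Vec<(String, u64)>, block_size: u64) {
    let mut totals: HashMap<String, u64> = HashMap::new();
    for (blob_id, size) in chunks.iter() {
        *totals.entry(blob_id.clone()).or_insert(0) += size;
    }
    chunks.retain(|(blob_id, _)| totals[blob_id] >= block_size);
}

fn main() {
    let mut chunks = vec![("a".into(), 8192), ("b".into(), 1024)];
    drop_small_blobs(&mut chunks, 4096);
    assert_eq!(chunks.len(), 1); // blob "b" (1 KiB total) is removed
}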
/// Build the root tree.
pub fn build_root_tree(ctx: &mut BuildContext) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(1);
inode.set_uid(1000);
inode.set_gid(1000);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFDIR as u32);
inode.set_nlink(3);
inode.set_name_size("/".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 0,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/"),
target: PathBuf::from("/"),
target_vec: vec![OsString::from("/")],
symlink: None,
xattrs: RafsXAttrs::default(),
v6_force_extended_inode: true,
};
let root_node = Node::new(inode, node_info, 0);
let tree = Tree::new(root_node);
Ok(tree)
}
/// Build the child tree.
fn build_child_tree(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(2);
inode.set_uid(0);
inode.set_gid(0);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFREG as u32);
inode.set_nlink(1);
inode.set_name_size("chunkdict".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 1,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/chunkdict"),
target: PathBuf::from("/chunkdict"),
target_vec: vec![OsString::from("/"), OsString::from("/chunkdict")],
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: true,
};
let mut node = Node::new(inode, node_info, 0);
// Insert chunks.
Self::insert_chunks(ctx, blob_mgr, &mut node, chunkdict_chunks, chunkdict_blobs)?;
let node_size: u64 = node
.chunks
.iter()
.map(|chunk| chunk.inner.uncompressed_size() as u64)
.sum();
node.inode.set_size(node_size);
// Update child count.
node.inode.set_child_count(node.chunks.len() as u32);
let child = Tree::new(node);
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
Ok(child)
}
/// Insert chunks.
fn insert_chunks(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
node: &mut Node,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<()> {
for (index, chunk_info) in chunkdict_chunks.iter().enumerate() {
let chunk_size: u32 = chunk_info.chunk_compressed_size;
let file_offset = index as u64 * chunk_size as u64;
let mut chunk = ChunkWrapper::new(ctx.fs_version);
// Update blob context.
let (blob_index, blob_ctx) =
blob_mgr.get_or_cerate_blob_for_chunkdict(ctx, &chunk_info.chunk_blob_id)?;
let chunk_uncompressed_size = chunk_info.chunk_uncompressed_size;
let pre_d_offset = blob_ctx.current_uncompressed_offset;
blob_ctx.uncompressed_blob_size = pre_d_offset + chunk_uncompressed_size as u64;
blob_ctx.current_uncompressed_offset += chunk_uncompressed_size as u64;
blob_ctx.blob_meta_header.set_ci_uncompressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
blob_ctx.blob_meta_header.set_ci_compressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
let chunkdict_blob_info = chunkdict_blobs
.iter()
.find(|blob| blob.blob_id == chunk_info.chunk_blob_id)
.unwrap();
blob_ctx.blob_compressor =
Algorithm::from_str(chunkdict_blob_info.blob_compressor.as_str())?;
blob_ctx
.blob_meta_header
.set_ci_uncompressed_size(chunkdict_blob_info.blob_meta_ci_uncompressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_size(chunkdict_blob_info.blob_meta_ci_compressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_offset(chunkdict_blob_info.blob_meta_ci_offset);
blob_ctx.blob_meta_header.set_ci_compressor(Algorithm::Zstd);
// Update chunk context.
let chunk_index = blob_ctx.alloc_chunk_index()?;
chunk.set_blob_index(blob_index);
chunk.set_index(chunk_index);
chunk.set_file_offset(file_offset);
chunk.set_compressed_size(chunk_info.chunk_compressed_size);
chunk.set_compressed_offset(chunk_info.chunk_compressed_offset);
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
chunk.set_crc32(chunk_info.chunk_crc32);
node.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk.clone()),
});
}
Ok(())
}
}

View File

@ -21,6 +21,7 @@ use nydus_utils::{digest, try_round_up_4k};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use sha2::Digest; use sha2::Digest;
use crate::attributes::Attributes;
use crate::core::context::Artifact; use crate::core::context::Artifact;
use super::core::blob::Blob; use super::core::blob::Blob;
@ -48,22 +49,30 @@ pub struct Config {
/// available value: 0-99, 0 means disable /// available value: 0-99, 0 means disable
/// hint: it's better to disable this option when there are some shared blobs /// hint: it's better to disable this option when there are some shared blobs
/// for example: build-cache /// for example: build-cache
#[serde(default)] pub min_used_ratio: u8,
min_used_ratio: u8,
/// we compact blobs whose size are less than compact_blob_size /// we compact blobs whose size are less than compact_blob_size
#[serde(default = "default_compact_blob_size")] pub compact_blob_size: usize,
compact_blob_size: usize, /// size of compacted blobs should not be larger than max_compact_size
/// size of compacted blobs should not be large than max_compact_size pub max_compact_size: usize,
#[serde(default = "default_max_compact_size")]
max_compact_size: usize,
/// if number of blobs >= layers_to_compact, do compact /// if number of blobs >= layers_to_compact, do compact
/// 0 means always try compact /// 0 means always try compact
#[serde(default)] pub layers_to_compact: usize,
layers_to_compact: usize,
/// local blobs dir; blobs may not have been uploaded to the backend yet /// local blobs dir; blobs may not have been uploaded to the backend yet
/// moreover, new blobs will be written to this dir /// moreover, new blobs will be written to this dir
/// the blob file name should equal the blob_id /// the blob file name should equal the blob_id
blobs_dir: String, pub blobs_dir: String,
}
impl Default for Config {
fn default() -> Self {
Self {
min_used_ratio: 0,
compact_blob_size: default_compact_blob_size(),
max_compact_size: default_max_compact_size(),
layers_to_compact: 0,
blobs_dir: String::new(),
}
}
} }
#[derive(Debug, Clone, Copy, Hash, PartialEq, Eq)] #[derive(Debug, Clone, Copy, Hash, PartialEq, Eq)]
@ -79,7 +88,7 @@ impl ChunkKey {
match c { match c {
ChunkWrapper::V5(_) => Self::Digest(*c.id()), ChunkWrapper::V5(_) => Self::Digest(*c.id()),
ChunkWrapper::V6(_) => Self::Offset(c.blob_index(), c.compressed_offset()), ChunkWrapper::V6(_) => Self::Offset(c.blob_index(), c.compressed_offset()),
ChunkWrapper::Ref(_) => unimplemented!("unsupport ChunkWrapper::Ref(c)"), ChunkWrapper::Ref(_) => Self::Digest(*c.id()),
} }
} }
} }
@ -285,7 +294,7 @@ impl BlobCompactor {
version, version,
states: vec![Default::default(); ori_blobs_number], states: vec![Default::default(); ori_blobs_number],
ori_blob_mgr, ori_blob_mgr,
new_blob_mgr: BlobManager::new(digester), new_blob_mgr: BlobManager::new(digester, false),
c2nodes: HashMap::new(), c2nodes: HashMap::new(),
b2nodes: HashMap::new(), b2nodes: HashMap::new(),
backend, backend,
@ -304,7 +313,7 @@ impl BlobCompactor {
let chunk_dict = self.get_chunk_dict(); let chunk_dict = self.get_chunk_dict();
let cb = &mut |n: &Tree| -> Result<()> { let cb = &mut |n: &Tree| -> Result<()> {
let mut node = n.lock_node(); let mut node = n.borrow_mut_node();
for chunk_idx in 0..node.chunks.len() { for chunk_idx in 0..node.chunks.len() {
let chunk = &mut node.chunks[chunk_idx]; let chunk = &mut node.chunks[chunk_idx];
let chunk_key = ChunkKey::from(&chunk.inner); let chunk_key = ChunkKey::from(&chunk.inner);
@ -367,7 +376,7 @@ impl BlobCompactor {
fn apply_blob_move(&mut self, from: u32, to: u32) -> Result<()> { fn apply_blob_move(&mut self, from: u32, to: u32) -> Result<()> {
if let Some(idx_list) = self.b2nodes.get(&from) { if let Some(idx_list) = self.b2nodes.get(&from) {
for (n, chunk_idx) in idx_list.iter() { for (n, chunk_idx) in idx_list.iter() {
let mut node = n.lock().unwrap(); let mut node = n.borrow_mut();
ensure!( ensure!(
node.chunks[*chunk_idx].inner.blob_index() == from, node.chunks[*chunk_idx].inner.blob_index() == from,
"unexpected blob_index of chunk" "unexpected blob_index of chunk"
@ -381,7 +390,7 @@ impl BlobCompactor {
fn apply_chunk_change(&mut self, c: &(ChunkWrapper, ChunkWrapper)) -> Result<()> { fn apply_chunk_change(&mut self, c: &(ChunkWrapper, ChunkWrapper)) -> Result<()> {
if let Some(chunks) = self.c2nodes.get(&ChunkKey::from(&c.0)) { if let Some(chunks) = self.c2nodes.get(&ChunkKey::from(&c.0)) {
for (n, chunk_idx) in chunks.iter() { for (n, chunk_idx) in chunks.iter() {
let mut node = n.lock().unwrap(); let mut node = n.borrow_mut();
let chunk = &mut node.chunks[*chunk_idx]; let chunk = &mut node.chunks[*chunk_idx];
let mut chunk_inner = chunk.inner.deref().clone(); let mut chunk_inner = chunk.inner.deref().clone();
apply_chunk_change(&c.1, &mut chunk_inner)?; apply_chunk_change(&c.1, &mut chunk_inner)?;
@ -547,7 +556,8 @@ impl BlobCompactor {
info!("compactor: delete compacted blob {}", ori_blob_ids[idx]); info!("compactor: delete compacted blob {}", ori_blob_ids[idx]);
} }
State::Rebuild(cs) => { State::Rebuild(cs) => {
let blob_storage = ArtifactStorage::FileDir(PathBuf::from(dir)); let blob_storage =
ArtifactStorage::FileDir((PathBuf::from(dir), String::new()));
let mut blob_ctx = BlobContext::new( let mut blob_ctx = BlobContext::new(
String::from(""), String::from(""),
0, 0,
@ -557,6 +567,7 @@ impl BlobCompactor {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
blob_ctx.set_meta_info_enabled(self.is_v6()); blob_ctx.set_meta_info_enabled(self.is_v6());
let blob_idx = self.new_blob_mgr.alloc_index()?; let blob_idx = self.new_blob_mgr.alloc_index()?;
@ -609,14 +620,16 @@ impl BlobCompactor {
PathBuf::from(""), PathBuf::from(""),
Default::default(), Default::default(),
None, None,
None,
false, false,
Features::new(), Features::new(),
false, false,
Attributes::default(),
); );
let mut bootstrap_mgr = let mut bootstrap_mgr =
BootstrapManager::new(Some(ArtifactStorage::SingleFile(d_bootstrap)), None); BootstrapManager::new(Some(ArtifactStorage::SingleFile(d_bootstrap)), None);
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?; let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut ori_blob_mgr = BlobManager::new(rs.meta.get_digester()); let mut ori_blob_mgr = BlobManager::new(rs.meta.get_digester(), false);
ori_blob_mgr.extend_from_blob_table(&build_ctx, rs.superblock.get_blob_infos())?; ori_blob_mgr.extend_from_blob_table(&build_ctx, rs.superblock.get_blob_infos())?;
if let Some(dict) = chunk_dict { if let Some(dict) = chunk_dict {
ori_blob_mgr.set_chunk_dict(dict); ori_blob_mgr.set_chunk_dict(dict);
@ -642,7 +655,7 @@ impl BlobCompactor {
return Ok(None); return Ok(None);
} }
info!("compatctor: successfully compacted blob"); info!("compactor: successfully compacted blob");
// blobs have already been dumped, dump bootstrap only // blobs have already been dumped, dump bootstrap only
let blob_table = compactor.new_blob_mgr.to_blob_table(&build_ctx)?; let blob_table = compactor.new_blob_mgr.to_blob_table(&build_ctx)?;
bootstrap.build(&mut build_ctx, &mut bootstrap_ctx)?; bootstrap.build(&mut build_ctx, &mut bootstrap_ctx)?;
@ -655,7 +668,9 @@ impl BlobCompactor {
Ok(Some(BuildOutput::new( Ok(Some(BuildOutput::new(
&compactor.new_blob_mgr, &compactor.new_blob_mgr,
None,
&bootstrap_mgr.bootstrap_storage, &bootstrap_mgr.bootstrap_storage,
&None,
)?)) )?))
} }
} }
@ -701,8 +716,7 @@ mod tests {
pub uncompress_offset: u64, pub uncompress_offset: u64,
pub file_offset: u64, pub file_offset: u64,
pub index: u32, pub index: u32,
#[allow(unused)] pub crc32: u32,
pub reserved: u32,
} }
impl BlobChunkInfo for MockChunkInfo { impl BlobChunkInfo for MockChunkInfo {
@ -716,10 +730,26 @@ mod tests {
self.flags.contains(BlobChunkFlags::COMPRESSED) self.flags.contains(BlobChunkFlags::COMPRESSED)
} }
fn is_batch(&self) -> bool {
self.flags.contains(BlobChunkFlags::BATCH)
}
fn is_encrypted(&self) -> bool { fn is_encrypted(&self) -> bool {
false false
} }
fn has_crc32(&self) -> bool {
self.flags.contains(BlobChunkFlags::HAS_CRC32)
}
fn crc32(&self) -> u32 {
if self.has_crc32() {
self.crc32
} else {
0
}
}
fn as_any(&self) -> &dyn Any { fn as_any(&self) -> &dyn Any {
self self
} }
@ -786,7 +816,6 @@ mod tests {
} }
#[test] #[test]
#[should_panic = "not implemented: unsupport ChunkWrapper::Ref(c)"]
fn test_chunk_key_from() { fn test_chunk_key_from() {
let cw = ChunkWrapper::new(RafsVersion::V5); let cw = ChunkWrapper::new(RafsVersion::V5);
matches!(ChunkKey::from(&cw), ChunkKey::Digest(_)); matches!(ChunkKey::from(&cw), ChunkKey::Digest(_));
@ -804,7 +833,7 @@ mod tests {
uncompress_offset: 0x1000, uncompress_offset: 0x1000,
file_offset: 0x1000, file_offset: 0x1000,
index: 1, index: 1,
reserved: 0, crc32: 0,
}) as Arc<dyn BlobChunkInfo>; }) as Arc<dyn BlobChunkInfo>;
let cw = ChunkWrapper::Ref(chunk); let cw = ChunkWrapper::Ref(chunk);
ChunkKey::from(&cw); ChunkKey::from(&cw);
@ -853,6 +882,7 @@ mod tests {
crypt::Algorithm::Aes256Xts, crypt::Algorithm::Aes256Xts,
Arc::new(cipher_object), Arc::new(cipher_object),
None, None,
false,
); );
let ori_blob_ids = ["1".to_owned(), "2".to_owned()]; let ori_blob_ids = ["1".to_owned(), "2".to_owned()];
let backend = Arc::new(MockBackend { let backend = Arc::new(MockBackend {
@ -965,7 +995,7 @@ mod tests {
HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config) HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config)
.unwrap(); .unwrap();
let mut ori_blob_mgr = BlobManager::new(digest::Algorithm::Sha256); let mut ori_blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
ori_blob_mgr.set_chunk_dict(dict); ori_blob_mgr.set_chunk_dict(dict);
let backend = Arc::new(MockBackend { let backend = Arc::new(MockBackend {
@ -980,6 +1010,7 @@ mod tests {
tmpfile.as_path().to_path_buf(), tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
)?; )?;
@ -1069,6 +1100,7 @@ mod tests {
tmpfile.as_path().to_path_buf(), tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
)?; )?;
@ -1080,6 +1112,7 @@ mod tests {
tmpfile2.as_path().to_path_buf(), tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
)?; )?;
@ -1096,9 +1129,9 @@ mod tests {
assert_eq!(compactor.b2nodes.len(), 2); assert_eq!(compactor.b2nodes.len(), 2);
let chunk_key1 = ChunkKey::from(&chunk1); let chunk_key1 = ChunkKey::from(&chunk1);
assert!(compactor.c2nodes.get(&chunk_key1).is_some()); assert!(compactor.c2nodes.contains_key(&chunk_key1));
assert_eq!(compactor.c2nodes.get(&chunk_key1).unwrap().len(), 1); assert_eq!(compactor.c2nodes.get(&chunk_key1).unwrap().len(), 1);
assert!(compactor.b2nodes.get(&chunk2.blob_index()).is_some()); assert!(compactor.b2nodes.contains_key(&chunk2.blob_index()));
assert_eq!( assert_eq!(
compactor.b2nodes.get(&chunk2.blob_index()).unwrap().len(), compactor.b2nodes.get(&chunk2.blob_index()).unwrap().len(),
2 2
@ -1127,9 +1160,11 @@ mod tests {
PathBuf::from(tmp_dir.as_path()), PathBuf::from(tmp_dir.as_path()),
Default::default(), Default::default(),
None, None,
None,
false, false,
Features::new(), Features::new(),
false, false,
Attributes::default(),
); );
let mut compactor = blob_compactor_load_and_dedup_chunks().unwrap(); let mut compactor = blob_compactor_load_and_dedup_chunks().unwrap();
@ -1143,6 +1178,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx2 = BlobContext::new( let blob_ctx2 = BlobContext::new(
"blob_id2".to_owned(), "blob_id2".to_owned(),
@ -1153,6 +1189,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx3 = BlobContext::new( let blob_ctx3 = BlobContext::new(
"blob_id3".to_owned(), "blob_id3".to_owned(),
@ -1163,6 +1200,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx4 = BlobContext::new( let blob_ctx4 = BlobContext::new(
"blob_id4".to_owned(), "blob_id4".to_owned(),
@ -1173,6 +1211,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx5 = BlobContext::new( let blob_ctx5 = BlobContext::new(
"blob_id5".to_owned(), "blob_id5".to_owned(),
@ -1183,6 +1222,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
compactor.ori_blob_mgr.add_blob(blob_ctx1); compactor.ori_blob_mgr.add_blob(blob_ctx1);
compactor.ori_blob_mgr.add_blob(blob_ctx2); compactor.ori_blob_mgr.add_blob(blob_ctx2);
@ -1224,9 +1264,11 @@ mod tests {
PathBuf::from(tmp_dir.as_path()), PathBuf::from(tmp_dir.as_path()),
Default::default(), Default::default(),
None, None,
None,
false, false,
Features::new(), Features::new(),
false, false,
Attributes::default(),
); );
let mut blob_ctx1 = BlobContext::new( let mut blob_ctx1 = BlobContext::new(
"blob_id1".to_owned(), "blob_id1".to_owned(),
@ -1237,6 +1279,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
blob_ctx1.compressed_blob_size = 2; blob_ctx1.compressed_blob_size = 2;
let mut blob_ctx2 = BlobContext::new( let mut blob_ctx2 = BlobContext::new(
@ -1248,6 +1291,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
blob_ctx2.compressed_blob_size = 0; blob_ctx2.compressed_blob_size = 0;
let blob_ctx3 = BlobContext::new( let blob_ctx3 = BlobContext::new(
@ -1259,6 +1303,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx4 = BlobContext::new( let blob_ctx4 = BlobContext::new(
"blob_id4".to_owned(), "blob_id4".to_owned(),
@ -1269,6 +1314,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx5 = BlobContext::new( let blob_ctx5 = BlobContext::new(
"blob_id5".to_owned(), "blob_id5".to_owned(),
@ -1279,6 +1325,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
compactor.ori_blob_mgr.add_blob(blob_ctx1); compactor.ori_blob_mgr.add_blob(blob_ctx1);
compactor.ori_blob_mgr.add_blob(blob_ctx2); compactor.ori_blob_mgr.add_blob(blob_ctx2);

View File

@ -5,7 +5,7 @@
use std::borrow::Cow; use std::borrow::Cow;
use std::slice; use std::slice;
use anyhow::{Context, Result}; use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE; use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE;
use nydus_storage::device::BlobFeatures; use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{toc, BlobMetaChunkArray}; use nydus_storage::meta::{toc, BlobMetaChunkArray};
@ -18,6 +18,8 @@ use super::node::Node;
use crate::core::context::Artifact; use crate::core::context::Artifact;
use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature}; use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature};
const VALID_BLOB_ID_LENGTH: usize = 64;
/// Generator for RAFS data blob. /// Generator for RAFS data blob.
pub(crate) struct Blob {} pub(crate) struct Blob {}
@ -33,7 +35,7 @@ impl Blob {
let mut chunk_data_buf = vec![0u8; RAFS_MAX_CHUNK_SIZE as usize]; let mut chunk_data_buf = vec![0u8; RAFS_MAX_CHUNK_SIZE as usize];
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&ctx.prefetch)?; let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&ctx.prefetch)?;
for (idx, node) in inodes.iter().enumerate() { for (idx, node) in inodes.iter().enumerate() {
let mut node = node.lock().unwrap(); let mut node = node.borrow_mut();
let size = node let size = node
.dump_node_data(ctx, blob_mgr, blob_writer, &mut chunk_data_buf) .dump_node_data(ctx, blob_mgr, blob_writer, &mut chunk_data_buf)
.context("failed to dump blob chunks")?; .context("failed to dump blob chunks")?;
@ -94,7 +96,7 @@ impl Blob {
Ok(()) Ok(())
} }
fn finalize_blob_data( pub fn finalize_blob_data(
ctx: &BuildContext, ctx: &BuildContext,
blob_mgr: &mut BlobManager, blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact, blob_writer: &mut dyn Artifact,
@ -104,13 +106,13 @@ impl Blob {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() { if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let mut batch = batch.lock().unwrap(); let mut batch = batch.lock().unwrap();
if !batch.chunk_data_buf_is_empty() { if !batch.chunk_data_buf_is_empty() {
let (pre_compressed_offset, compressed_size, _) = Node::write_chunk_data( let (_, compressed_size, _) = Node::write_chunk_data(
&ctx, &ctx,
blob_ctx, blob_ctx,
blob_writer, blob_writer,
batch.chunk_data_buf(), batch.chunk_data_buf(),
)?; )?;
batch.add_context(pre_compressed_offset, compressed_size); batch.add_context(compressed_size);
batch.clear_chunk_data_buf(); batch.clear_chunk_data_buf();
} }
} }
@ -120,6 +122,9 @@ impl Blob {
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc)) && (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
{ {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() { if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.external {
return Ok(());
}
blob_ctx.write_tar_header( blob_ctx.write_tar_header(
blob_writer, blob_writer,
toc::TOC_ENTRY_BLOB_RAW, toc::TOC_ENTRY_BLOB_RAW,
@ -141,6 +146,20 @@ impl Blob {
} }
} }
// Check that every external blob carries a valid 64-character (sha256 hex) blob id.
if blob_mgr.external {
for (index, blob_ctx) in blob_mgr.get_blobs().iter().enumerate() {
if blob_ctx.blob_id.len() != VALID_BLOB_ID_LENGTH {
bail!(
"invalid blob id:{}, length:{}, index:{}",
blob_ctx.blob_id,
blob_ctx.blob_id.len(),
index
);
}
}
}
Ok(()) Ok(())
} }
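The VALID_BLOB_ID_LENGTH check rejects ids that cannot be sha256 digests, whose lowercase hex encoding is exactly 64 characters. A sketch of the idea; note the diff itself only checks the length, so the hex-digit test below is an extra assumption:

// A well-formed blob id is the hex encoding of a sha256 digest.
fn is_valid_blob_id(id: &str) -> bool {
    id.len() == 64 && id.chars().all(|c| c.is_ascii_hexdigit())
}

fn main() {
    assert!(is_valid_blob_id(&"a".repeat(64)));
    assert!(!is_valid_blob_id("latest")); // tags are not digests
}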

View File

@ -30,7 +30,7 @@ impl Bootstrap {
bootstrap_ctx: &mut BootstrapContext, bootstrap_ctx: &mut BootstrapContext,
) -> Result<()> { ) -> Result<()> {
// Special handling of the root inode // Special handling of the root inode
let mut root_node = self.tree.lock_node(); let mut root_node = self.tree.borrow_mut_node();
assert!(root_node.is_dir()); assert!(root_node.is_dir());
let index = bootstrap_ctx.generate_next_ino(); let index = bootstrap_ctx.generate_next_ino();
// 0 is reserved and 1 also matches RAFS_V5_ROOT_INODE. // 0 is reserved and 1 also matches RAFS_V5_ROOT_INODE.
@ -50,7 +50,7 @@ impl Bootstrap {
Self::build_rafs(ctx, bootstrap_ctx, &mut self.tree)?; Self::build_rafs(ctx, bootstrap_ctx, &mut self.tree)?;
if ctx.fs_version.is_v6() { if ctx.fs_version.is_v6() {
let root_offset = self.tree.node.lock().unwrap().v6_offset; let root_offset = self.tree.node.borrow().v6_offset;
Self::v6_update_dirents(&self.tree, root_offset); Self::v6_update_dirents(&self.tree, root_offset);
} }
@ -75,7 +75,9 @@ impl Bootstrap {
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256); let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
let name = digest.to_string(); let name = digest.to_string();
bootstrap_ctx.writer.finalize(Some(name.clone()))?; bootstrap_ctx.writer.finalize(Some(name.clone()))?;
*bootstrap_storage = Some(ArtifactStorage::SingleFile(p.join(name))); let mut path = p.0.join(name);
path.set_extension(&p.1);
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
Ok(()) Ok(())
} else { } else {
bootstrap_ctx.writer.finalize(Some(String::default())) bootstrap_ctx.writer.finalize(Some(String::default()))
@ -90,7 +92,7 @@ impl Bootstrap {
tree: &mut Tree, tree: &mut Tree,
) -> Result<()> { ) -> Result<()> {
let parent_node = tree.node.clone(); let parent_node = tree.node.clone();
let mut parent_node = parent_node.lock().unwrap(); let mut parent_node = parent_node.borrow_mut();
let parent_ino = parent_node.inode.ino(); let parent_ino = parent_node.inode.ino();
let block_size = ctx.v6_block_size(); let block_size = ctx.v6_block_size();
@ -113,7 +115,7 @@ impl Bootstrap {
let mut dirs: Vec<&mut Tree> = Vec::new(); let mut dirs: Vec<&mut Tree> = Vec::new();
for child in tree.children.iter_mut() { for child in tree.children.iter_mut() {
let child_node = child.node.clone(); let child_node = child.node.clone();
let mut child_node = child_node.lock().unwrap(); let mut child_node = child_node.borrow_mut();
let index = bootstrap_ctx.generate_next_ino(); let index = bootstrap_ctx.generate_next_ino();
child_node.index = index; child_node.index = index;
if ctx.fs_version.is_v5() { if ctx.fs_version.is_v5() {
@ -134,11 +136,11 @@ impl Bootstrap {
let nlink = indexes.len() as u32 + 1; let nlink = indexes.len() as u32 + 1;
// Update nlink for previous hardlink inodes // Update nlink for previous hardlink inodes
for n in indexes.iter() { for n in indexes.iter() {
n.lock().unwrap().inode.set_nlink(nlink); n.borrow_mut().inode.set_nlink(nlink);
} }
let (first_ino, first_offset) = { let (first_ino, first_offset) = {
let first_node = indexes[0].lock().unwrap(); let first_node = indexes[0].borrow_mut();
(first_node.inode.ino(), first_node.v6_offset) (first_node.inode.ino(), first_node.v6_offset)
}; };
// set offset for rafs v6 hardlinks // set offset for rafs v6 hardlinks

View File

@ -19,7 +19,7 @@ use nydus_utils::digest::{self, RafsDigest};
use crate::Tree; use crate::Tree;
#[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)] #[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)]
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32); pub struct DigestWithBlobIndex(pub RafsDigest, pub u32, pub Option<u32>);
/// Trait to manage chunk cache for chunk deduplication. /// Trait to manage chunk cache for chunk deduplication.
pub trait ChunkDict: Sync + Send + 'static { pub trait ChunkDict: Sync + Send + 'static {

View File

@ -13,11 +13,13 @@ use std::io::{BufWriter, Cursor, Read, Seek, Write};
use std::mem::size_of; use std::mem::size_of;
use std::os::unix::fs::FileTypeExt; use std::os::unix::fs::FileTypeExt;
use std::path::{Display, Path, PathBuf}; use std::path::{Display, Path, PathBuf};
use std::result::Result::Ok;
use std::str::FromStr; use std::str::FromStr;
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use std::{fmt, fs}; use std::{fmt, fs};
use anyhow::{anyhow, Context, Error, Result}; use anyhow::{anyhow, Context, Error, Result};
use nydus_utils::crc32;
use nydus_utils::crypt::{self, Cipher, CipherContext}; use nydus_utils::crypt::{self, Cipher, CipherContext};
use sha2::{Digest, Sha256}; use sha2::{Digest, Sha256};
use tar::{EntryType, Header}; use tar::{EntryType, Header};
@ -44,6 +46,7 @@ use nydus_utils::digest::DigestData;
use nydus_utils::{compress, digest, div_round_up, round_down, try_round_up_4k, BufReaderInfo}; use nydus_utils::{compress, digest, div_round_up, round_down, try_round_up_4k, BufReaderInfo};
use super::node::ChunkSource; use super::node::ChunkSource;
use crate::attributes::Attributes;
use crate::core::tree::TreeNode; use crate::core::tree::TreeNode;
use crate::{ChunkDict, Feature, Features, HashChunkDict, Prefetch, PrefetchPolicy, WhiteoutSpec}; use crate::{ChunkDict, Feature, Features, HashChunkDict, Prefetch, PrefetchPolicy, WhiteoutSpec};
@ -138,7 +141,7 @@ pub enum ArtifactStorage {
// Won't rename user's specification // Won't rename user's specification
SingleFile(PathBuf), SingleFile(PathBuf),
// Will rename it from tmp file as user didn't specify a name. // Will rename it from tmp file as user didn't specify a name.
FileDir(PathBuf), FileDir((PathBuf, String)),
} }
impl ArtifactStorage { impl ArtifactStorage {
@ -146,7 +149,16 @@ impl ArtifactStorage {
pub fn display(&self) -> Display { pub fn display(&self) -> Display {
match self { match self {
ArtifactStorage::SingleFile(p) => p.display(), ArtifactStorage::SingleFile(p) => p.display(),
ArtifactStorage::FileDir(p) => p.display(), ArtifactStorage::FileDir(p) => p.0.display(),
}
}
pub fn add_suffix(&mut self, suffix: &str) {
match self {
ArtifactStorage::SingleFile(p) => {
p.set_extension(suffix);
}
ArtifactStorage::FileDir(p) => p.1 = String::from(suffix),
} }
} }
} }
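The suffix handling differs by variant: SingleFile applies it immediately through PathBuf::set_extension, while FileDir records it and applies it when the artifact is finalized (see the finalize change below). A minimal illustration of the set_extension behavior this relies on:

use std::path::PathBuf;

fn main() {
    let mut p = PathBuf::from("/out/bootstrap");
    p.set_extension("ext"); // appends, since there is no existing extension
    assert_eq!(p, PathBuf::from("/out/bootstrap.ext"));
}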
@ -335,8 +347,8 @@ impl ArtifactWriter {
ArtifactStorage::FileDir(ref p) => { ArtifactStorage::FileDir(ref p) => {
// Better we can use open(2) O_TMPFILE, but for compatibility sake, we delay this job. // Better we can use open(2) O_TMPFILE, but for compatibility sake, we delay this job.
// TODO: Blob dir existence? // TODO: Blob dir existence?
let tmp = TempFile::new_in(p) let tmp = TempFile::new_in(&p.0)
.with_context(|| format!("failed to create temp file in {}", p.display()))?; .with_context(|| format!("failed to create temp file in {}", p.0.display()))?;
let tmp2 = tmp.as_file().try_clone()?; let tmp2 = tmp.as_file().try_clone()?;
let reader = OpenOptions::new() let reader = OpenOptions::new()
.read(true) .read(true)
@ -368,7 +380,10 @@ impl Artifact for ArtifactWriter {
if let Some(n) = name { if let Some(n) = name {
if let ArtifactStorage::FileDir(s) = &self.storage { if let ArtifactStorage::FileDir(s) = &self.storage {
let path = Path::new(s).join(n); let mut path = Path::new(&s.0).join(n);
if !s.1.is_empty() {
path.set_extension(&s.1);
}
if !path.exists() { if !path.exists() {
if let Some(tmp_file) = &self.tmp_file { if let Some(tmp_file) = &self.tmp_file {
rename(tmp_file.as_path(), &path).with_context(|| { rename(tmp_file.as_path(), &path).with_context(|| {
@ -459,6 +474,7 @@ impl BlobCacheGenerator {
} }
} }
#[derive(Clone)]
/// BlobContext is used to hold the blob information of a layer during build. /// BlobContext is used to hold the blob information of a layer during build.
pub struct BlobContext { pub struct BlobContext {
/// Blob id (user specified or sha256(blob)). /// Blob id (user specified or sha256(blob)).
@ -509,6 +525,9 @@ pub struct BlobContext {
/// Cipher to encrypt the RAFS blobs. /// Cipher to encrypt the RAFS blobs.
pub cipher_object: Arc<Cipher>, pub cipher_object: Arc<Cipher>,
pub cipher_ctx: Option<CipherContext>, pub cipher_ctx: Option<CipherContext>,
/// Whether the blob is from an external storage backend.
pub external: bool,
} }
impl BlobContext { impl BlobContext {
@ -523,6 +542,7 @@ impl BlobContext {
cipher: crypt::Algorithm, cipher: crypt::Algorithm,
cipher_object: Arc<Cipher>, cipher_object: Arc<Cipher>,
cipher_ctx: Option<CipherContext>, cipher_ctx: Option<CipherContext>,
external: bool,
) -> Self { ) -> Self {
let blob_meta_info = if features.contains(BlobFeatures::CHUNK_INFO_V2) { let blob_meta_info = if features.contains(BlobFeatures::CHUNK_INFO_V2) {
BlobMetaChunkArray::new_v2() BlobMetaChunkArray::new_v2()
@ -559,6 +579,8 @@ impl BlobContext {
entry_list: toc::TocEntryList::new(), entry_list: toc::TocEntryList::new(),
cipher_object, cipher_object,
cipher_ctx, cipher_ctx,
external,
}; };
blob_ctx blob_ctx
@ -597,6 +619,12 @@ impl BlobContext {
blob_ctx blob_ctx
.blob_meta_header .blob_meta_header
.set_encrypted(features.contains(BlobFeatures::ENCRYPTED)); .set_encrypted(features.contains(BlobFeatures::ENCRYPTED));
blob_ctx
.blob_meta_header
.set_is_chunkdict_generated(features.contains(BlobFeatures::IS_CHUNKDICT_GENERATED));
blob_ctx
.blob_meta_header
.set_external(features.contains(BlobFeatures::EXTERNAL));
blob_ctx blob_ctx
} }
@ -696,6 +724,7 @@ impl BlobContext {
cipher, cipher,
cipher_object, cipher_object,
cipher_ctx, cipher_ctx,
false,
); );
blob_ctx.blob_prefetch_size = blob.prefetch_size(); blob_ctx.blob_prefetch_size = blob.prefetch_size();
blob_ctx.chunk_count = blob.chunk_count(); blob_ctx.chunk_count = blob.chunk_count();
@ -779,6 +808,10 @@ impl BlobContext {
info.set_uncompressed_offset(chunk.uncompressed_offset()); info.set_uncompressed_offset(chunk.uncompressed_offset());
self.blob_meta_info.add_v2_info(info); self.blob_meta_info.add_v2_info(info);
} else { } else {
let mut data: u64 = 0;
if chunk.has_crc32() {
data = chunk.crc32() as u64;
}
self.blob_meta_info.add_v2( self.blob_meta_info.add_v2(
chunk.compressed_offset(), chunk.compressed_offset(),
chunk.compressed_size(), chunk.compressed_size(),
@ -786,8 +819,9 @@ impl BlobContext {
chunk.uncompressed_size(), chunk.uncompressed_size(),
chunk.is_compressed(), chunk.is_compressed(),
chunk.is_encrypted(), chunk.is_encrypted(),
chunk.has_crc32(),
chunk.is_batch(), chunk.is_batch(),
0, data,
); );
} }
self.blob_chunk_digest.push(chunk.id().data); self.blob_chunk_digest.push(chunk.id().data);
@ -814,7 +848,7 @@ impl BlobContext {
} }
/// Get blob id if the blob has some chunks. /// Get blob id if the blob has some chunks.
pub fn blob_id(&mut self) -> Option<String> { pub fn blob_id(&self) -> Option<String> {
if self.uncompressed_blob_size > 0 { if self.uncompressed_blob_size > 0 {
Some(self.blob_id.to_string()) Some(self.blob_id.to_string())
} else { } else {
@ -882,20 +916,28 @@ pub struct BlobManager {
/// Used for chunk data de-duplication between layers (with `--parent-bootstrap`) /// Used for chunk data de-duplication between layers (with `--parent-bootstrap`)
/// or within layer (with `--inline-bootstrap`). /// or within layer (with `--inline-bootstrap`).
pub(crate) layered_chunk_dict: HashChunkDict, pub(crate) layered_chunk_dict: HashChunkDict,
// Whether the managed blobs are from an external storage backend.
pub external: bool,
} }
impl BlobManager { impl BlobManager {
/// Create a new instance of [BlobManager]. /// Create a new instance of [BlobManager].
pub fn new(digester: digest::Algorithm) -> Self { pub fn new(digester: digest::Algorithm, external: bool) -> Self {
Self { Self {
blobs: Vec::new(), blobs: Vec::new(),
current_blob_index: None, current_blob_index: None,
global_chunk_dict: Arc::new(()), global_chunk_dict: Arc::new(()),
layered_chunk_dict: HashChunkDict::new(digester), layered_chunk_dict: HashChunkDict::new(digester),
external,
} }
} }
fn new_blob_ctx(ctx: &BuildContext) -> Result<BlobContext> { /// Set current blob index
pub fn set_current_blob_index(&mut self, index: usize) {
self.current_blob_index = Some(index as u32)
}
pub fn new_blob_ctx(&self, ctx: &BuildContext) -> Result<BlobContext> {
let (cipher_object, cipher_ctx) = match ctx.cipher { let (cipher_object, cipher_ctx) = match ctx.cipher {
crypt::Algorithm::None => (Default::default(), None), crypt::Algorithm::None => (Default::default(), None),
crypt::Algorithm::Aes128Xts => { crypt::Algorithm::Aes128Xts => {
@ -903,7 +945,7 @@ impl BlobManager {
let iv = crypt::Cipher::generate_random_iv()?; let iv = crypt::Cipher::generate_random_iv()?;
let cipher_ctx = CipherContext::new(key, iv, false, ctx.cipher)?; let cipher_ctx = CipherContext::new(key, iv, false, ctx.cipher)?;
( (
ctx.cipher.new_cipher().ok().unwrap_or(Default::default()), ctx.cipher.new_cipher().ok().unwrap_or_default(),
Some(cipher_ctx), Some(cipher_ctx),
) )
} }
@ -914,15 +956,22 @@ impl BlobManager {
))) )))
} }
}; };
let mut blob_features = ctx.blob_features;
let mut compressor = ctx.compressor;
if self.external {
blob_features.insert(BlobFeatures::EXTERNAL);
compressor = compress::Algorithm::None;
}
let mut blob_ctx = BlobContext::new( let mut blob_ctx = BlobContext::new(
ctx.blob_id.clone(), ctx.blob_id.clone(),
ctx.blob_offset, ctx.blob_offset,
ctx.blob_features, blob_features,
ctx.compressor, compressor,
ctx.digester, ctx.digester,
ctx.cipher, ctx.cipher,
Arc::new(cipher_object), Arc::new(cipher_object),
cipher_ctx, cipher_ctx,
self.external,
); );
blob_ctx.set_chunk_size(ctx.chunk_size); blob_ctx.set_chunk_size(ctx.chunk_size);
blob_ctx.set_meta_info_enabled( blob_ctx.set_meta_info_enabled(
@ -938,7 +987,7 @@ impl BlobManager {
ctx: &BuildContext, ctx: &BuildContext,
) -> Result<(u32, &mut BlobContext)> { ) -> Result<(u32, &mut BlobContext)> {
if self.current_blob_index.is_none() { if self.current_blob_index.is_none() {
let blob_ctx = Self::new_blob_ctx(ctx)?; let blob_ctx = self.new_blob_ctx(ctx)?;
self.current_blob_index = Some(self.alloc_index()?); self.current_blob_index = Some(self.alloc_index()?);
self.add_blob(blob_ctx); self.add_blob(blob_ctx);
} }
@ -946,6 +995,21 @@ impl BlobManager {
Ok(self.get_current_blob().unwrap()) Ok(self.get_current_blob().unwrap())
} }
pub fn get_or_create_blob_by_idx(
&mut self,
ctx: &BuildContext,
blob_idx: u32,
) -> Result<(u32, &mut BlobContext)> {
let blob_idx = blob_idx as usize;
if blob_idx >= self.blobs.len() {
for _ in self.blobs.len()..=blob_idx {
let blob_ctx = self.new_blob_ctx(ctx)?;
self.add_blob(blob_ctx);
}
}
Ok((blob_idx as u32, &mut self.blobs[blob_idx as usize]))
}
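This helper grows the blob list on demand, so an external file may reference a blob index that does not exist yet and still get a context for it. A hypothetical call fragment (the index and id are illustrative):

// With an empty manager, asking for index 3 creates blobs 0..=3 first.
let (idx, blob_ctx) = blob_mgr.get_or_create_blob_by_idx(&ctx, 3)?;
assert_eq!(idx, 3);
blob_ctx.blob_id = "<sha256 of the external blob>".to_string();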
/// Get the current blob object. /// Get the current blob object.
pub fn get_current_blob(&mut self) -> Option<(u32, &mut BlobContext)> { pub fn get_current_blob(&mut self) -> Option<(u32, &mut BlobContext)> {
if let Some(idx) = self.current_blob_index { if let Some(idx) = self.current_blob_index {
@ -955,6 +1019,33 @@ impl BlobManager {
} }
} }
/// Get or create the blob for a chunkdict; used for chunk deduplication.
pub fn get_or_cerate_blob_for_chunkdict(
&mut self,
ctx: &BuildContext,
id: &str,
) -> Result<(u32, &mut BlobContext)> {
let blob_mgr = Self::new(ctx.digester, false);
if self.get_blob_idx_by_id(id).is_none() {
let blob_ctx = blob_mgr.new_blob_ctx(ctx)?;
self.current_blob_index = Some(self.alloc_index()?);
self.add_blob(blob_ctx);
} else {
self.current_blob_index = self.get_blob_idx_by_id(id);
}
let (_, blob_ctx) = self.get_current_blob().unwrap();
if blob_ctx.blob_id.is_empty() {
blob_ctx.blob_id = id.to_string();
}
// Safe to unwrap because the blob context has been added.
Ok(self.get_current_blob().unwrap())
}
/// Determine if the given blob has been created.
pub fn has_blob(&self, blob_id: &str) -> bool {
self.get_blob_idx_by_id(blob_id).is_some()
}
/// Set the global chunk dictionary for chunk deduplication. /// Set the global chunk dictionary for chunk deduplication.
pub fn set_chunk_dict(&mut self, dict: Arc<dyn ChunkDict>) { pub fn set_chunk_dict(&mut self, dict: Arc<dyn ChunkDict>) {
self.global_chunk_dict = dict self.global_chunk_dict = dict
@ -1097,6 +1188,7 @@ impl BlobManager {
compressed_blob_size, compressed_blob_size,
blob_features, blob_features,
flags, flags,
build_ctx.is_chunkdict_generated,
); );
} }
RafsBlobTable::V6(table) => { RafsBlobTable::V6(table) => {
@ -1116,6 +1208,7 @@ impl BlobManager {
ctx.blob_toc_digest, ctx.blob_toc_digest,
ctx.blob_meta_size, ctx.blob_meta_size,
ctx.blob_toc_size, ctx.blob_toc_size,
build_ctx.is_chunkdict_generated,
ctx.blob_meta_header, ctx.blob_meta_header,
ctx.cipher_object.clone(), ctx.cipher_object.clone(),
ctx.cipher_ctx.clone(), ctx.cipher_ctx.clone(),
@ -1222,6 +1315,7 @@ impl BootstrapContext {
} }
/// BootstrapManager is used to hold the parent bootstrap reader and create new bootstrap context. /// BootstrapManager is used to hold the parent bootstrap reader and create new bootstrap context.
#[derive(Clone)]
pub struct BootstrapManager { pub struct BootstrapManager {
pub(crate) f_parent_path: Option<PathBuf>, pub(crate) f_parent_path: Option<PathBuf>,
pub(crate) bootstrap_storage: Option<ArtifactStorage>, pub(crate) bootstrap_storage: Option<ArtifactStorage>,
@ -1258,6 +1352,7 @@ pub struct BuildContext {
pub digester: digest::Algorithm, pub digester: digest::Algorithm,
/// Blob encryption algorithm flag. /// Blob encryption algorithm flag.
pub cipher: crypt::Algorithm, pub cipher: crypt::Algorithm,
pub crc32_algorithm: crc32::Algorithm,
/// Save host uid gid in each inode. /// Save host uid gid in each inode.
pub explicit_uidgid: bool, pub explicit_uidgid: bool,
/// whiteout spec: overlayfs or oci /// whiteout spec: overlayfs or oci
@ -1283,6 +1378,7 @@ pub struct BuildContext {
/// Storage writing blob to single file or a directory. /// Storage writing blob to single file or a directory.
pub blob_storage: Option<ArtifactStorage>, pub blob_storage: Option<ArtifactStorage>,
pub external_blob_storage: Option<ArtifactStorage>,
pub blob_zran_generator: Option<Mutex<ZranContextGenerator<File>>>, pub blob_zran_generator: Option<Mutex<ZranContextGenerator<File>>>,
pub blob_batch_generator: Option<Mutex<BatchContextGenerator>>, pub blob_batch_generator: Option<Mutex<BatchContextGenerator>>,
pub blob_tar_reader: Option<BufReaderInfo<File>>, pub blob_tar_reader: Option<BufReaderInfo<File>>,
@ -1293,6 +1389,11 @@ pub struct BuildContext {
pub configuration: Arc<ConfigV2>, pub configuration: Arc<ConfigV2>,
/// Generate the blob cache and blob meta /// Generate the blob cache and blob meta
pub blob_cache_generator: Option<BlobCacheGenerator>, pub blob_cache_generator: Option<BlobCacheGenerator>,
/// Whether this build generates a chunkdict.
pub is_chunkdict_generated: bool,
/// Nydus attributes for different build behavior.
pub attributes: Attributes,
} }
impl BuildContext { impl BuildContext {
@ -1309,9 +1410,11 @@ impl BuildContext {
source_path: PathBuf, source_path: PathBuf,
prefetch: Prefetch, prefetch: Prefetch,
blob_storage: Option<ArtifactStorage>, blob_storage: Option<ArtifactStorage>,
external_blob_storage: Option<ArtifactStorage>,
blob_inline_meta: bool, blob_inline_meta: bool,
features: Features, features: Features,
encrypt: bool, encrypt: bool,
attributes: Attributes,
) -> Self { ) -> Self {
// It's a flag for images built with new nydus-image 2.2 and newer. // It's a flag for images built with new nydus-image 2.2 and newer.
let mut blob_features = BlobFeatures::CAP_TAR_TOC; let mut blob_features = BlobFeatures::CAP_TAR_TOC;
@ -1332,6 +1435,8 @@ impl BuildContext {
} else { } else {
crypt::Algorithm::None crypt::Algorithm::None
}; };
let crc32_algorithm = crc32::Algorithm::Crc32Iscsi;
BuildContext { BuildContext {
blob_id, blob_id,
aligned_chunk, aligned_chunk,
@ -1339,6 +1444,7 @@ impl BuildContext {
compressor, compressor,
digester, digester,
cipher, cipher,
crc32_algorithm,
explicit_uidgid, explicit_uidgid,
whiteout_spec, whiteout_spec,
@ -1351,6 +1457,7 @@ impl BuildContext {
prefetch, prefetch,
blob_storage, blob_storage,
external_blob_storage,
blob_zran_generator: None, blob_zran_generator: None,
blob_batch_generator: None, blob_batch_generator: None,
blob_tar_reader: None, blob_tar_reader: None,
@ -1361,6 +1468,9 @@ impl BuildContext {
features, features,
configuration: Arc::new(ConfigV2::default()), configuration: Arc::new(ConfigV2::default()),
blob_cache_generator: None, blob_cache_generator: None,
is_chunkdict_generated: false,
attributes,
} }
} }
@ -1379,6 +1489,10 @@ impl BuildContext {
pub fn set_configuration(&mut self, config: Arc<ConfigV2>) { pub fn set_configuration(&mut self, config: Arc<ConfigV2>) {
self.configuration = config; self.configuration = config;
} }
pub fn set_is_chunkdict(&mut self, is_chunkdict: bool) {
self.is_chunkdict_generated = is_chunkdict;
}
} }
impl Default for BuildContext { impl Default for BuildContext {
@ -1390,6 +1504,7 @@ impl Default for BuildContext {
compressor: compress::Algorithm::default(), compressor: compress::Algorithm::default(),
digester: digest::Algorithm::default(), digester: digest::Algorithm::default(),
cipher: crypt::Algorithm::None, cipher: crypt::Algorithm::None,
crc32_algorithm: crc32::Algorithm::default(),
explicit_uidgid: true, explicit_uidgid: true,
whiteout_spec: WhiteoutSpec::default(), whiteout_spec: WhiteoutSpec::default(),
@ -1402,6 +1517,7 @@ impl Default for BuildContext {
prefetch: Prefetch::default(), prefetch: Prefetch::default(),
blob_storage: None, blob_storage: None,
external_blob_storage: None,
blob_zran_generator: None, blob_zran_generator: None,
blob_batch_generator: None, blob_batch_generator: None,
blob_tar_reader: None, blob_tar_reader: None,
@ -1411,6 +1527,9 @@ impl Default for BuildContext {
features: Features::new(), features: Features::new(),
configuration: Arc::new(ConfigV2::default()), configuration: Arc::new(ConfigV2::default()),
blob_cache_generator: None, blob_cache_generator: None,
is_chunkdict_generated: false,
attributes: Attributes::default(),
} }
} }
} }
@ -1422,8 +1541,12 @@ pub struct BuildOutput {
pub blobs: Vec<String>, pub blobs: Vec<String>,
/// The size of output blob in this build. /// The size of output blob in this build.
pub blob_size: Option<u64>, pub blob_size: Option<u64>,
/// External blob ids in the blob table of external bootstrap.
pub external_blobs: Vec<String>,
/// File path for the metadata blob. /// File path for the metadata blob.
pub bootstrap_path: Option<String>, pub bootstrap_path: Option<String>,
/// File path for the external metadata blob.
pub external_bootstrap_path: Option<String>,
} }
impl fmt::Display for BuildOutput { impl fmt::Display for BuildOutput {
@ -1438,7 +1561,17 @@ impl fmt::Display for BuildOutput {
"data blob size: 0x{:x}", "data blob size: 0x{:x}",
self.blob_size.unwrap_or_default() self.blob_size.unwrap_or_default()
)?; )?;
write!(f, "data blobs: {:?}", self.blobs)?; if self.external_blobs.is_empty() {
write!(f, "data blobs: {:?}", self.blobs)?;
} else {
writeln!(f, "data blobs: {:?}", self.blobs)?;
writeln!(
f,
"external meta blob path: {}",
self.external_bootstrap_path.as_deref().unwrap_or("<none>")
)?;
write!(f, "external data blobs: {:?}", self.external_blobs)?;
}
Ok(()) Ok(())
} }
} }
@ -1447,20 +1580,28 @@ impl BuildOutput {
/// Create a new instance of [BuildOutput]. /// Create a new instance of [BuildOutput].
pub fn new( pub fn new(
blob_mgr: &BlobManager, blob_mgr: &BlobManager,
external_blob_mgr: Option<&BlobManager>,
bootstrap_storage: &Option<ArtifactStorage>, bootstrap_storage: &Option<ArtifactStorage>,
external_bootstrap_storage: &Option<ArtifactStorage>,
) -> Result<BuildOutput> { ) -> Result<BuildOutput> {
let blobs = blob_mgr.get_blob_ids(); let blobs = blob_mgr.get_blob_ids();
let blob_size = blob_mgr.get_last_blob().map(|b| b.compressed_blob_size); let blob_size = blob_mgr.get_last_blob().map(|b| b.compressed_blob_size);
let bootstrap_path = if let Some(ArtifactStorage::SingleFile(p)) = bootstrap_storage { let bootstrap_path = bootstrap_storage
Some(p.display().to_string()) .as_ref()
} else { .map(|stor| stor.display().to_string());
None let external_bootstrap_path = external_bootstrap_storage
}; .as_ref()
.map(|stor| stor.display().to_string());
let external_blobs = external_blob_mgr
.map(|mgr| mgr.get_blob_ids())
.unwrap_or_default();
Ok(Self { Ok(Self {
blobs, blobs,
external_blobs,
blob_size, blob_size,
bootstrap_path, bootstrap_path,
external_bootstrap_path,
}) })
} }
} }
@ -1513,9 +1654,11 @@ mod tests {
registry: None, registry: None,
http_proxy: None, http_proxy: None,
}), }),
external_backends: Vec::new(),
id: "id".to_owned(), id: "id".to_owned(),
cache: None, cache: None,
rafs: None, rafs: None,
overlay: None,
internal: ConfigV2Internal { internal: ConfigV2Internal {
blob_accessible: Arc::new(AtomicBool::new(true)), blob_accessible: Arc::new(AtomicBool::new(true)),
}, },

View File

@ -16,11 +16,11 @@ impl BlobLayout {
let (pre, non_pre) = prefetch.get_file_nodes(); let (pre, non_pre) = prefetch.get_file_nodes();
let mut inodes: Vec<TreeNode> = pre let mut inodes: Vec<TreeNode> = pre
.into_iter() .into_iter()
.filter(|x| Self::should_dump_node(x.lock().unwrap().deref())) .filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect(); .collect();
let mut non_prefetch_inodes: Vec<TreeNode> = non_pre let mut non_prefetch_inodes: Vec<TreeNode> = non_pre
.into_iter() .into_iter()
.filter(|x| Self::should_dump_node(x.lock().unwrap().deref())) .filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect(); .collect();
let prefetch_entries = inodes.len(); let prefetch_entries = inodes.len();
@ -53,7 +53,7 @@ mod tests {
let tree = Tree::new(node1); let tree = Tree::new(node1);
let mut prefetch = Prefetch::default(); let mut prefetch = Prefetch::default();
prefetch.insert(&tree.node, tree.node.lock().unwrap().deref()); prefetch.insert(&tree.node, tree.node.borrow().deref());
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&prefetch).unwrap(); let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&prefetch).unwrap();
assert_eq!(inodes.len(), 1); assert_eq!(inodes.len(), 1);

View File

@@ -25,8 +25,9 @@ use nydus_rafs::metadata::{Inode, RafsVersion};
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{BlobChunkInfoV2Ondisk, BlobMetaChunkInfo};
use nydus_utils::digest::{DigestHasher, RafsDigest};
-use nydus_utils::{compress, crypt};
+use nydus_utils::{compress, crc32, crypt};
use nydus_utils::{div_round_up, event_tracer, root_tracer, try_round_up_4k, ByteSize};
+use parse_size::parse_size;
use sha2::digest::Digest;

use crate::{BlobContext, BlobManager, BuildContext, ChunkDict, ConversionType, Overlay};
@@ -34,7 +35,7 @@ use crate::{BlobContext, BlobManager, BuildContext, ChunkDict, ConversionType, O
use super::context::Artifact;

/// Filesystem root path for Unix OSs.
-const ROOT_PATH_NAME: &[u8] = &[b'/'];
+const ROOT_PATH_NAME: &[u8] = b"/";

/// Source of chunk data: chunk dictionary, parent filesystem or builder.
#[derive(Clone, Hash, PartialEq, Eq)]
@@ -223,7 +224,7 @@ impl Node {
        chunk_data_buf: &mut [u8],
    ) -> Result<u64> {
        let mut reader = if self.is_reg() {
-           let file = File::open(&self.path())
+           let file = File::open(self.path())
                .with_context(|| format!("failed to open node file {:?}", self.path()))?;
            Some(file)
        } else {
@@ -275,6 +276,88 @@ impl Node {
            None
        };
if blob_mgr.external {
let external_values = ctx.attributes.get_values(self.target()).unwrap();
let external_blob_index = external_values
.get("blob_index")
.and_then(|v| v.parse::<u32>().ok())
.ok_or_else(|| anyhow!("failed to parse blob_index"))?;
let external_blob_id = external_values
.get("blob_id")
.ok_or_else(|| anyhow!("failed to parse blob_id"))?;
let external_chunk_size = external_values
.get("chunk_size")
.and_then(|v| parse_size(v).ok())
.ok_or_else(|| anyhow!("failed to parse chunk_size"))?;
let mut external_compressed_offset = external_values
.get("chunk_0_compressed_offset")
.and_then(|v| v.parse::<u64>().ok())
.ok_or_else(|| anyhow!("failed to parse chunk_0_compressed_offset"))?;
let external_compressed_size = external_values
.get("compressed_size")
.and_then(|v| v.parse::<u64>().ok())
.ok_or_else(|| anyhow!("failed to parse compressed_size"))?;
let (_, external_blob_ctx) =
blob_mgr.get_or_create_blob_by_idx(ctx, external_blob_index)?;
external_blob_ctx.blob_id = external_blob_id.to_string();
external_blob_ctx.compressed_blob_size = external_compressed_size;
external_blob_ctx.uncompressed_blob_size = external_compressed_size;
let chunk_count = self
.chunk_count(external_chunk_size as u64)
.with_context(|| {
format!("failed to get chunk count for {}", self.path().display())
})?;
self.inode.set_child_count(chunk_count);
info!(
"target {:?}, file_size {}, blob_index {}, blob_id {}, chunk_size {}, chunk_count {}",
self.target(),
self.inode.size(),
external_blob_index,
external_blob_id,
external_chunk_size,
chunk_count
);
for i in 0..self.inode.child_count() {
let mut chunk = self.inode.create_chunk();
let file_offset = i as u64 * external_chunk_size as u64;
let compressed_size = if i == self.inode.child_count() - 1 {
self.inode.size() - (external_chunk_size * i as u64)
} else {
external_chunk_size
} as u32;
chunk.set_blob_index(external_blob_index);
chunk.set_index(external_blob_ctx.alloc_chunk_index()?);
chunk.set_compressed_offset(external_compressed_offset);
chunk.set_compressed_size(compressed_size);
chunk.set_uncompressed_offset(external_compressed_offset);
chunk.set_uncompressed_size(compressed_size);
chunk.set_compressed(false);
chunk.set_file_offset(file_offset);
external_compressed_offset += compressed_size as u64;
external_blob_ctx.chunk_size = external_chunk_size as u32;
if ctx.crc32_algorithm != crc32::Algorithm::None {
self.set_external_chunk_crc32(ctx, &mut chunk, i)?
}
if let Some(h) = inode_hasher.as_mut() {
h.digest_update(chunk.id().as_ref());
}
self.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk),
});
}
if let Some(h) = inode_hasher {
self.inode.set_digest(h.digest_finalize());
}
return Ok(0);
}
        // `child_count` of regular file is reused as `chunk_count`.
        for i in 0..self.inode.child_count() {
            let chunk_size = ctx.chunk_size;
@@ -286,13 +369,14 @@ impl Node {
            };
            let chunk_data = &mut data_buf[0..uncompressed_size as usize];
-           let (mut chunk, mut chunk_info) = self.read_file_chunk(ctx, reader, chunk_data)?;
+           let (mut chunk, mut chunk_info) =
+               self.read_file_chunk(ctx, reader, chunk_data, blob_mgr.external)?;
            if let Some(h) = inode_hasher.as_mut() {
                h.digest_update(chunk.id().as_ref());
            }
-           // No need to perform chunk deduplication for tar-tarfs case.
-           if ctx.conversion_type != ConversionType::TarToTarfs {
+           // No need to perform chunk deduplication for tar-tarfs/external blob case.
+           if ctx.conversion_type != ConversionType::TarToTarfs && !blob_mgr.external {
                chunk = match self.deduplicate_chunk(
                    ctx,
                    blob_mgr,
@@ -310,17 +394,23 @@ impl Node {
            chunk.set_blob_index(blob_index);
            chunk.set_index(chunk_index);
            chunk.set_file_offset(file_offset);
+           let mut dumped_size = chunk.compressed_size();
            if ctx.conversion_type == ConversionType::TarToTarfs {
                chunk.set_uncompressed_offset(chunk.compressed_offset());
                chunk.set_uncompressed_size(chunk.compressed_size());
-           } else if let Some(info) =
-               self.dump_file_chunk(ctx, blob_ctx, blob_writer, chunk_data, &mut chunk)?
-           {
-               chunk_info = Some(info);
+           } else {
+               let (info, d_size) =
+                   self.dump_file_chunk(ctx, blob_ctx, blob_writer, chunk_data, &mut chunk)?;
+               if info.is_some() {
+                   chunk_info = info;
+               }
+               if let Some(d_size) = d_size {
+                   dumped_size = d_size;
+               }
            }

            let chunk = Arc::new(chunk);
-           blob_size += chunk.compressed_size() as u64;
+           blob_size += dumped_size as u64;
            if ctx.conversion_type != ConversionType::TarToTarfs {
                blob_ctx.add_chunk_meta_info(&chunk, chunk_info)?;
                blob_mgr
@@ -341,20 +431,43 @@ impl Node {
        Ok(blob_size)
    }
fn set_external_chunk_crc32(
&self,
ctx: &BuildContext,
chunk: &mut ChunkWrapper,
i: u32,
) -> Result<()> {
if let Some(crcs) = ctx.attributes.get_crcs(self.target()) {
if (i as usize) >= crcs.len() {
return Err(anyhow!(
"invalid crc index {} for file {}",
i,
self.target().display()
));
}
chunk.set_has_crc32(true);
chunk.set_crc32(crcs[i as usize]);
}
Ok(())
}
    fn read_file_chunk<R: Read>(
        &self,
        ctx: &BuildContext,
        reader: &mut R,
        buf: &mut [u8],
+       external: bool,
    ) -> Result<(ChunkWrapper, Option<BlobChunkInfoV2Ondisk>)> {
        let mut chunk = self.inode.create_chunk();
        let mut chunk_info = None;
        if let Some(ref zran) = ctx.blob_zran_generator {
            let mut zran = zran.lock().unwrap();
            zran.start_chunk(ctx.chunk_size as u64)?;
-           reader
-               .read_exact(buf)
-               .with_context(|| format!("failed to read node file {:?}", self.path()))?;
+           if !external {
+               reader
+                   .read_exact(buf)
+                   .with_context(|| format!("failed to read node file {:?}", self.path()))?;
+           }
            let info = zran.finish_chunk()?;
            chunk.set_compressed_offset(info.compressed_offset());
            chunk.set_compressed_size(info.compressed_size());
@@ -366,21 +479,27 @@ impl Node {
            chunk.set_compressed_offset(pos);
            chunk.set_compressed_size(buf.len() as u32);
            chunk.set_compressed(false);
-           reader
-               .read_exact(buf)
-               .with_context(|| format!("failed to read node file {:?}", self.path()))?;
-       } else {
+           if !external {
+               reader
+                   .read_exact(buf)
+                   .with_context(|| format!("failed to read node file {:?}", self.path()))?;
+           }
+       } else if !external {
            reader
                .read_exact(buf)
                .with_context(|| format!("failed to read node file {:?}", self.path()))?;
        }

        // For tar-tarfs case, no need to compute chunk id.
-       if ctx.conversion_type != ConversionType::TarToTarfs {
+       if ctx.conversion_type != ConversionType::TarToTarfs && !external {
            chunk.set_id(RafsDigest::from_buf(buf, ctx.digester));
+           if ctx.crc32_algorithm != crc32::Algorithm::None {
+               chunk.set_has_crc32(true);
+               chunk.set_crc32(crc32::Crc32::new(ctx.crc32_algorithm).from_buf(buf));
+           }
        }
-       if ctx.cipher != crypt::Algorithm::None {
+       if ctx.cipher != crypt::Algorithm::None && !external {
            chunk.set_encrypted(true);
        }
@@ -388,7 +507,10 @@ impl Node {
    }

    /// Dump a chunk from u8 slice into the data blob.
-   /// Return `BlobChunkInfoV2Ondisk` when the chunk is added into a batch chunk.
+   /// Return `BlobChunkInfoV2Ondisk` iff the chunk is added into a batch chunk.
+   /// Return dumped size iff not `BlobFeatures::SEPARATE`.
+   /// Dumped size can be zero if chunk data is cached in Batch Generator,
+   /// and may contain previous chunk data cached in Batch Generator.
    fn dump_file_chunk(
        &self,
        ctx: &BuildContext,
@@ -396,7 +518,7 @@ impl Node {
        blob_writer: &mut dyn Artifact,
        chunk_data: &[u8],
        chunk: &mut ChunkWrapper,
-   ) -> Result<Option<BlobChunkInfoV2Ondisk>> {
+   ) -> Result<(Option<BlobChunkInfoV2Ondisk>, Option<u32>)> {
        let d_size = chunk_data.len() as u32;
        let aligned_d_size = if ctx.aligned_chunk {
            // Safe to unwrap because `chunk_size` is much less than u32::MAX.
@@ -412,34 +534,47 @@ impl Node {
        let mut chunk_info = None;
        let encrypted = blob_ctx.blob_cipher != crypt::Algorithm::None;
+       let mut dumped_size = None;

-       if self.inode.child_count() == 1
+       if ctx.blob_batch_generator.is_some()
+           && self.inode.child_count() == 1
            && d_size < ctx.batch_size / 2
-           && ctx.blob_batch_generator.is_some()
        {
            // This chunk will be added into a batch chunk.
            let mut batch = ctx.blob_batch_generator.as_ref().unwrap().lock().unwrap();

            if batch.chunk_data_buf_len() as u32 + d_size < ctx.batch_size {
                // Add into current batch chunk directly.
-               chunk_info = Some(batch.generate_chunk_info(pre_d_offset, d_size, encrypted)?);
+               chunk_info = Some(batch.generate_chunk_info(
+                   blob_ctx.current_compressed_offset,
+                   pre_d_offset,
+                   d_size,
+                   encrypted,
+               )?);
                batch.append_chunk_data_buf(chunk_data);
            } else {
                // Dump current batch chunk if exists, and then add into a new batch chunk.
                if !batch.chunk_data_buf_is_empty() {
                    // Dump current batch chunk.
-                   let (pre_c_offset, c_size, _) =
+                   let (_, c_size, _) =
                        Self::write_chunk_data(ctx, blob_ctx, blob_writer, batch.chunk_data_buf())?;
-                   batch.add_context(pre_c_offset, c_size);
+                   dumped_size = Some(c_size);
+                   batch.add_context(c_size);
                    batch.clear_chunk_data_buf();
                }

                // Add into a new batch chunk.
-               chunk_info = Some(batch.generate_chunk_info(pre_d_offset, d_size, encrypted)?);
+               chunk_info = Some(batch.generate_chunk_info(
+                   blob_ctx.current_compressed_offset,
+                   pre_d_offset,
+                   d_size,
+                   encrypted,
+               )?);
                batch.append_chunk_data_buf(chunk_data);
            }
        } else if !ctx.blob_features.contains(BlobFeatures::SEPARATE) {
-           // For other case which needs to write chunk data to data blobs.
+           // For other case which needs to write chunk data to data blobs. Which means,
+           // `tar-ref`, `targz-ref`, `estargz-ref`, and `estargzindex-ref`, are excluded.

            // Interrupt and dump buffered batch chunks.
            // TODO: cancel the interruption.
@@ -447,9 +582,10 @@ impl Node {
                let mut batch = batch.lock().unwrap();
                if !batch.chunk_data_buf_is_empty() {
                    // Dump current batch chunk.
-                   let (pre_c_offset, c_size, _) =
+                   let (_, c_size, _) =
                        Self::write_chunk_data(ctx, blob_ctx, blob_writer, batch.chunk_data_buf())?;
-                   batch.add_context(pre_c_offset, c_size);
+                   dumped_size = Some(c_size);
+                   batch.add_context(c_size);
                    batch.clear_chunk_data_buf();
                }
            }
@@ -457,6 +593,7 @@ impl Node {
            let (pre_c_offset, c_size, is_compressed) =
                Self::write_chunk_data(ctx, blob_ctx, blob_writer, chunk_data)
                    .with_context(|| format!("failed to write chunk data {:?}", self.path()))?;
+           dumped_size = Some(dumped_size.unwrap_or(0) + c_size);
            chunk.set_compressed_offset(pre_c_offset);
            chunk.set_compressed_size(c_size);
            chunk.set_compressed(is_compressed);
@@ -467,16 +604,16 @@ impl Node {
        }

        event_tracer!("blob_uncompressed_size", +d_size);
-       Ok(chunk_info)
+       Ok((chunk_info, dumped_size))
    }

    pub fn write_chunk_data(
-       ctx: &BuildContext,
+       _ctx: &BuildContext,
        blob_ctx: &mut BlobContext,
        blob_writer: &mut dyn Artifact,
        chunk_data: &[u8],
    ) -> Result<(u64, u32, bool)> {
-       let (compressed, is_compressed) = compress::compress(chunk_data, ctx.compressor)
+       let (compressed, is_compressed) = compress::compress(chunk_data, blob_ctx.blob_compressor)
            .with_context(|| "failed to compress node file".to_string())?;
        let encrypted = crypt::encrypt_with_context(
            &compressed,
@@ -486,10 +623,14 @@ impl Node {
        )?;
        let compressed_size = encrypted.len() as u32;
        let pre_compressed_offset = blob_ctx.current_compressed_offset;
-       blob_writer
-           .write_all(&encrypted)
-           .context("failed to write blob")?;
-       blob_ctx.blob_hash.update(&encrypted);
+       if !blob_ctx.external {
+           // For the external blob, both compressor and encrypter should
+           // be none, and we don't write data into blob file.
+           blob_writer
+               .write_all(&encrypted)
+               .context("failed to write blob")?;
+           blob_ctx.blob_hash.update(&encrypted);
+       }
        blob_ctx.current_compressed_offset += compressed_size as u64;
        blob_ctx.compressed_blob_size += compressed_size as u64;
@@ -564,6 +705,7 @@ impl Node {
// build node object from a filesystem object.
impl Node {
+   #[allow(clippy::too_many_arguments)]
    /// Create a new instance of [Node] from a filesystem object.
    pub fn from_fs_object(
        version: RafsVersion,
@@ -571,6 +713,7 @@ impl Node {
        path: PathBuf,
        overlay: Overlay,
        chunk_size: u32,
+       file_size: u64,
        explicit_uidgid: bool,
        v6_force_extended_inode: bool,
    ) -> Result<Node> {
@@ -603,7 +746,7 @@ impl Node {
            v6_dirents: Vec::new(),
        };

-       node.build_inode(chunk_size)
+       node.build_inode(chunk_size, file_size)
            .context("failed to build Node from fs object")?;
        if version.is_v6() {
            node.v6_set_inode_compact();
@@ -643,7 +786,7 @@ impl Node {
        Ok(())
    }
-   fn build_inode_stat(&mut self) -> Result<()> {
+   fn build_inode_stat(&mut self, file_size: u64) -> Result<()> {
        let meta = self
            .meta()
            .with_context(|| format!("failed to get metadata of {}", self.path().display()))?;
@@ -678,7 +821,13 @@ impl Node {
        // directory entries, so let's ignore the value provided by source filesystem and
        // calculate it later by ourself.
        if !self.is_dir() {
-           self.inode.set_size(meta.st_size());
+           // If the file size is not 0, and the meta size is 0, it means the file is an
+           // external dummy file. We need to set the size to file_size.
+           if file_size != 0 && meta.st_size() == 0 {
+               self.inode.set_size(file_size);
+           } else {
+               self.inode.set_size(meta.st_size());
+           }
            self.v5_set_inode_blocks();
        }
        self.info = Arc::new(info);
@@ -686,7 +835,7 @@ impl Node {
        Ok(())
    }

-   fn build_inode(&mut self, chunk_size: u32) -> Result<()> {
+   fn build_inode(&mut self, chunk_size: u32, file_size: u64) -> Result<()> {
        let size = self.name().byte_size();
        if size > u16::MAX as usize {
            bail!("file name length 0x{:x} is too big", size,);
@@ -696,7 +845,7 @@ impl Node {
        // NOTE: Always retrieve xattr before attr so that we can know the size of xattr pairs.
        self.build_inode_xattr()
            .with_context(|| format!("failed to get xattr for {}", self.path().display()))?;
-       self.build_inode_stat()
+       self.build_inode_stat(file_size)
            .with_context(|| format!("failed to build inode {}", self.path().display()))?;

        if self.is_reg() {
@@ -871,12 +1020,12 @@ impl Node {
#[cfg(test)]
mod tests {
-   use std::io::BufReader;
+   use std::{collections::HashMap, io::BufReader};

    use nydus_utils::{digest, BufReaderInfo};
    use vmm_sys_util::tempfile::TempFile;

-   use crate::{ArtifactWriter, BlobCacheGenerator, HashChunkDict};
+   use crate::{attributes::Attributes, ArtifactWriter, BlobCacheGenerator, HashChunkDict};

    use super::*;
@@ -948,7 +1097,7 @@ mod tests {
            .unwrap(),
        );
-       let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
+       let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
        let mut chunk_dict = HashChunkDict::new(digest::Algorithm::Sha256);
        let mut chunk_wrapper = ChunkWrapper::new(RafsVersion::V5);
        chunk_wrapper.set_id(RafsDigest {
@@ -1084,4 +1233,43 @@ mod tests {
        node.remove_xattr(OsStr::new("system.posix_acl_default.key"));
        assert!(!node.inode.has_xattr());
    }
#[test]
fn test_set_external_chunk_crc32() {
let mut ctx = BuildContext {
crc32_algorithm: crc32::Algorithm::Crc32Iscsi,
attributes: Attributes {
crcs: HashMap::new(),
..Default::default()
},
..Default::default()
};
let target = PathBuf::from("/test_file");
ctx.attributes
.crcs
.insert(target.clone(), vec![0x12345678, 0x87654321]);
let node = Node::new(
InodeWrapper::new(RafsVersion::V5),
NodeInfo {
path: target.clone(),
target: target.clone(),
..Default::default()
},
1,
);
let mut chunk = node.inode.create_chunk();
print!("target: {}", node.target().display());
let result = node.set_external_chunk_crc32(&ctx, &mut chunk, 1);
assert!(result.is_ok());
assert_eq!(chunk.crc32(), 0x87654321);
assert!(chunk.has_crc32());
// test invalid crc index
let result = node.set_external_chunk_crc32(&ctx, &mut chunk, 2);
assert!(result.is_err());
let err = result.unwrap_err().to_string();
assert!(err.contains("invalid crc index 2 for file /test_file"));
}
}
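The external-chunk loop earlier in this hunk sizes every chunk at `chunk_size` except the last, which carries the remainder of the file. A standalone sketch of the same arithmetic (names are illustrative, not from the diff):

    // Compute per-chunk sizes: full chunks, then the remainder in the last one.
    fn chunk_sizes(file_size: u64, chunk_size: u64, chunk_count: u32) -> Vec<u64> {
        (0..chunk_count)
            .map(|i| {
                if i == chunk_count - 1 {
                    file_size - chunk_size * (i as u64)
                } else {
                    chunk_size
                }
            })
            .collect()
    }

    fn main() {
        // A 10 MiB file split into 4 MiB chunks yields 4 + 4 + 2 MiB.
        assert_eq!(
            chunk_sizes(10 << 20, 4 << 20, 3),
            vec![4 << 20, 4 << 20, 2 << 20]
        );
    }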

@@ -71,6 +71,16 @@ pub enum WhiteoutSpec {
    None,
}
impl fmt::Display for WhiteoutSpec {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
WhiteoutSpec::Oci => write!(f, "oci"),
WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
WhiteoutSpec::None => write!(f, "none"),
}
}
}
impl Default for WhiteoutSpec {
    fn default() -> Self {
        Self::Oci
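With the new `Display` impl the spec serializes directly into log or CLI output; a tiny usage sketch:

    assert_eq!(format!("{}", WhiteoutSpec::Oci), "oci");
    assert_eq!(WhiteoutSpec::default().to_string(), "oci");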

@@ -213,7 +213,7 @@ impl Prefetch {
        if self.policy == PrefetchPolicy::Fs {
            let mut prefetch_table = RafsV5PrefetchTable::new();
            for i in self.patterns.values().filter_map(|v| v.clone()) {
-               let node = i.lock().unwrap();
+               let node = i.borrow_mut();
                assert!(node.inode.ino() < u32::MAX as u64);
                prefetch_table.add_entry(node.inode.ino() as u32);
            }
@@ -228,7 +228,7 @@ impl Prefetch {
        if self.policy == PrefetchPolicy::Fs {
            let mut prefetch_table = RafsV6PrefetchTable::new();
            for i in self.patterns.values().filter_map(|v| v.clone()) {
-               let node = i.lock().unwrap();
+               let node = i.borrow_mut();
                let ino = node.inode.ino();
                debug_assert!(ino > 0);
                let nid = calculate_nid(node.v6_offset, meta_addr);
@@ -270,7 +270,7 @@ mod tests {
    use super::*;
    use crate::core::node::NodeInfo;
    use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
-   use std::sync::Mutex;
+   use std::cell::RefCell;

    #[test]
    fn test_generate_pattern() {
@@ -332,29 +332,29 @@ mod tests {
        let mut info1 = info.clone();
        info1.target = PathBuf::from("/f");
        let node1 = Node::new(inode.clone(), info1, 1);
-       let node1 = TreeNode::new(Mutex::from(node1));
-       prefetch.insert(&node1, &node1.lock().unwrap());
+       let node1 = TreeNode::new(RefCell::from(node1));
+       prefetch.insert(&node1, &node1.borrow());

        let inode2 = inode.clone();
        let mut info2 = info.clone();
        info2.target = PathBuf::from("/a/b");
        let node2 = Node::new(inode2, info2, 1);
-       let node2 = TreeNode::new(Mutex::from(node2));
-       prefetch.insert(&node2, &node2.lock().unwrap());
+       let node2 = TreeNode::new(RefCell::from(node2));
+       prefetch.insert(&node2, &node2.borrow());

        let inode3 = inode.clone();
        let mut info3 = info.clone();
        info3.target = PathBuf::from("/h/i/j");
        let node3 = Node::new(inode3, info3, 1);
-       let node3 = TreeNode::new(Mutex::from(node3));
-       prefetch.insert(&node3, &node3.lock().unwrap());
+       let node3 = TreeNode::new(RefCell::from(node3));
+       prefetch.insert(&node3, &node3.borrow());

        let inode4 = inode.clone();
        let mut info4 = info.clone();
        info4.target = PathBuf::from("/z");
        let node4 = Node::new(inode4, info4, 1);
-       let node4 = TreeNode::new(Mutex::from(node4));
-       prefetch.insert(&node4, &node4.lock().unwrap());
+       let node4 = TreeNode::new(RefCell::from(node4));
+       prefetch.insert(&node4, &node4.borrow());

        let inode5 = inode.clone();
        inode.set_mode(0o755 | libc::S_IFDIR as u32);
@@ -362,8 +362,8 @@ mod tests {
        let mut info5 = info;
        info5.target = PathBuf::from("/a/b/d");
        let node5 = Node::new(inode5, info5, 1);
-       let node5 = TreeNode::new(Mutex::from(node5));
-       prefetch.insert(&node5, &node5.lock().unwrap());
+       let node5 = TreeNode::new(RefCell::from(node5));
+       prefetch.insert(&node5, &node5.borrow());
        // node1, node2
        assert_eq!(prefetch.fs_prefetch_rule_count(), 2);
@@ -373,12 +373,12 @@ mod tests {
        assert_eq!(non_pre.len(), 1);
        let pre_str: Vec<String> = pre
            .iter()
-           .map(|n| n.lock().unwrap().target().to_str().unwrap().to_owned())
+           .map(|n| n.borrow().target().to_str().unwrap().to_owned())
            .collect();
        assert_eq!(pre_str, vec!["/a/b", "/a/b/d", "/f", "/h/i/j"]);
        let non_pre_str: Vec<String> = non_pre
            .iter()
-           .map(|n| n.lock().unwrap().target().to_str().unwrap().to_owned())
+           .map(|n| n.borrow().target().to_str().unwrap().to_owned())
            .collect();
        assert_eq!(non_pre_str, vec!["/z"]);

@@ -16,10 +16,12 @@
//! lower tree (MetadataTree).
//! - Traverse the merged tree (OverlayTree) to dump bootstrap and data blobs.

+use std::cell::{RefCell, RefMut};
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
-use std::sync::{Arc, Mutex, MutexGuard};
+use std::rc::Rc;
+use std::sync::Arc;
use anyhow::{bail, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
@@ -34,7 +36,7 @@ use crate::core::overlay::OVERLAYFS_WHITEOUT_OPAQUE;
use crate::{BuildContext, ChunkDict};

/// Type alias for tree internal node.
-pub type TreeNode = Arc<Mutex<Node>>;
+pub type TreeNode = Rc<RefCell<Node>>;

/// An in-memory tree structure to maintain information and topology of filesystem nodes.
#[derive(Clone)]
@@ -52,7 +54,7 @@ impl Tree {
    pub fn new(node: Node) -> Self {
        let name = node.name().as_bytes().to_vec();
        Tree {
-           node: Arc::new(Mutex::new(node)),
+           node: Rc::new(RefCell::new(node)),
            name,
            children: Vec::new(),
        }
@@ -81,12 +83,12 @@ impl Tree {
    /// Set `Node` associated with the tree node.
    pub fn set_node(&mut self, node: Node) {
-       self.node = Arc::new(Mutex::new(node));
+       self.node.replace(node);
    }

-   /// Get mutex guard to access the associated `Node` object.
-   pub fn lock_node(&self) -> MutexGuard<Node> {
-       self.node.lock().unwrap()
+   /// Get mutably borrowed value to access the associated `Node` object.
+   pub fn borrow_mut_node(&self) -> RefMut<'_, Node> {
+       self.node.as_ref().borrow_mut()
    }

    /// Walk all nodes in DFS mode.
@@ -132,7 +134,7 @@ impl Tree {
        let mut dirs = Vec::with_capacity(32);
        for child in &self.children {
            cb(child)?;
-           if child.lock_node().is_dir() {
+           if child.borrow_mut_node().is_dir() {
                dirs.push(child);
            }
        }
@@ -172,13 +174,37 @@ impl Tree {
        Some(tree)
    }
/// Get the mutable tree node corresponding to the path.
pub fn get_node_mut(&mut self, path: &Path) -> Option<&mut Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
let last_idx = target_vec.len() - 1;
for name in &target_vec[1..last_idx] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &mut tree.children[idx],
None => return None,
}
}
if let Some(last_name) = target_vec.last() {
match tree.get_child_idx(last_name.as_bytes()) {
Some(idx) => Some(&mut tree.children[idx]),
None => None,
}
} else {
Some(tree)
}
}
    /// Merge the upper layer tree into the lower layer tree, applying whiteout rules.
    pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> {
        assert_eq!(self.name, "/".as_bytes());
        assert_eq!(upper.name, "/".as_bytes());

        // Handle the root node.
-       upper.lock_node().overlay = Overlay::UpperModification;
+       upper.borrow_mut_node().overlay = Overlay::UpperModification;
        self.node = upper.node.clone();
        self.merge_children(ctx, &upper)?;
        lazy_drop(upper);
@@ -190,7 +216,7 @@ impl Tree {
        // Handle whiteout nodes in the first round, and handle other nodes in the second round.
        let mut modified = Vec::with_capacity(upper.children.len());
        for u in upper.children.iter() {
-           let mut u_node = u.lock_node();
+           let mut u_node = u.borrow_mut_node();
            match u_node.whiteout_type(ctx.whiteout_spec) {
                Some(WhiteoutType::OciRemoval) => {
                    if let Some(origin_name) = u_node.origin_name(WhiteoutType::OciRemoval) {
let mut dirs = Vec::new(); let mut dirs = Vec::new();
for u in modified { for u in modified {
let mut u_node = u.lock_node(); let mut u_node = u.borrow_mut_node();
if let Some(idx) = self.get_child_idx(&u.name) { if let Some(idx) = self.get_child_idx(&u.name) {
u_node.overlay = Overlay::UpperModification; u_node.overlay = Overlay::UpperModification;
self.children[idx].node = u.node.clone(); self.children[idx].node = u.node.clone();
@ -299,7 +325,7 @@ impl<'a> MetadataTreeBuilder<'a> {
children.sort_unstable_by(|a, b| a.name.cmp(&b.name)); children.sort_unstable_by(|a, b| a.name.cmp(&b.name));
for child in children.iter_mut() { for child in children.iter_mut() {
let child_node = child.lock_node(); let child_node = child.borrow_mut_node();
if child_node.is_dir() { if child_node.is_dir() {
let child_ino = child_node.inode.ino(); let child_ino = child_node.inode.ino();
drop(child_node); drop(child_node);
@@ -397,13 +423,14 @@ mod tests {
            tmpfile.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            true,
            false,
        )
        .unwrap();

        let mut tree = Tree::new(node);
        assert_eq!(tree.name, tmpfile.as_path().file_name().unwrap().as_bytes());
-       let node1 = tree.lock_node();
+       let node1 = tree.borrow_mut_node();
        drop(node1);
        let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
@@ -413,12 +440,13 @@ mod tests {
            tmpfile.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            true,
            false,
        )
        .unwrap();
        tree.set_node(node);
-       let node2 = tree.lock_node();
+       let node2 = tree.borrow_mut_node();
        assert_eq!(node2.name(), tmpfile.as_path().file_name().unwrap());
    }
@@ -432,6 +460,7 @@ mod tests {
            tmpfile.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            true,
            false,
        )
@@ -445,6 +474,7 @@ mod tests {
            tmpfile2.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            true,
            false,
        )
@@ -459,6 +489,7 @@ mod tests {
            tmpfile3.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            true,
            false,
        )
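The new `get_node_mut` walks the path components of the target, descending by child index, and returns the matching subtree mutably. A toy re-creation of the lookup shape (simplified types, not the real API):

    struct Tree {
        name: Vec<u8>,
        children: Vec<Tree>,
    }

    fn get_node_mut<'a>(mut tree: &'a mut Tree, components: &[&str]) -> Option<&'a mut Tree> {
        for name in components {
            let idx = tree
                .children
                .iter()
                .position(|c| c.name == name.as_bytes())?;
            tree = &mut tree.children[idx];
        }
        Some(tree)
    }

    fn main() {
        let mut root = Tree {
            name: b"/".to_vec(),
            children: vec![Tree {
                name: b"a".to_vec(),
                children: vec![Tree { name: b"b".to_vec(), children: vec![] }],
            }],
        };
        assert!(get_node_mut(&mut root, &["a", "b"]).is_some());
        assert!(get_node_mut(&mut root, &["a", "x"]).is_none());
    }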

@@ -92,7 +92,7 @@ impl Node {
        let mut d_size = 0u64;
        for child in children.iter() {
-           d_size += child.lock_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
+           d_size += child.borrow_mut_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
        }
        if d_size == 0 {
            self.inode.set_size(4096);
@@ -124,13 +124,13 @@ impl Node {
impl Bootstrap {
    /// Calculate inode digest for directory.
    fn v5_digest_node(&self, ctx: &mut BuildContext, tree: &Tree) {
-       let mut node = tree.lock_node();
+       let mut node = tree.borrow_mut_node();

        // We have set digest for non-directory inode in the previous dump_blob workflow.
        if node.is_dir() {
            let mut inode_hasher = RafsDigest::hasher(ctx.digester);
            for child in tree.children.iter() {
-               let child = child.lock_node();
+               let child = child.borrow_mut_node();
                inode_hasher.digest_update(child.inode.digest().as_ref());
            }
            node.inode.set_digest(inode_hasher.digest_finalize());
@@ -200,7 +200,7 @@ impl Bootstrap {
        let mut has_xattr = false;
        self.tree.walk_dfs_pre(&mut |t| {
-           let node = t.lock_node();
+           let node = t.borrow_mut_node();
            inode_table.set(node.index, inode_offset)?;
            // Add inode size
            inode_offset += node.inode.inode_size() as u32;
@@ -253,7 +253,7 @@ impl Bootstrap {
        timing_tracer!(
            {
                self.tree.walk_dfs_pre(&mut |t| {
-                   t.lock_node()
+                   t.borrow_mut_node()
                        .dump_bootstrap_v5(ctx, bootstrap_ctx.writer.as_mut())
                        .context("failed to dump bootstrap")
                })

@@ -21,7 +21,7 @@ use nydus_rafs::metadata::layout::v6::{
};
use nydus_rafs::metadata::RafsStore;
use nydus_rafs::RafsIoWrite;
-use nydus_storage::device::BlobFeatures;
+use nydus_storage::device::{BlobFeatures, BlobInfo};
use nydus_utils::{root_tracer, round_down, round_up, timing_tracer};

use super::chunk_dict::DigestWithBlobIndex;
@@ -41,6 +41,7 @@ impl Node {
        orig_meta_addr: u64,
        meta_addr: u64,
        chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>,
+       blobs: &[Arc<BlobInfo>],
    ) -> Result<()> {
        let xattr_inline_count = self.info.xattrs.count_v6();
        ensure!(
@@ -70,7 +71,7 @@ impl Node {
        if self.is_dir() {
            self.v6_dump_dir(ctx, f_bootstrap, meta_addr, meta_offset, &mut inode)?;
        } else if self.is_reg() {
-           self.v6_dump_file(ctx, f_bootstrap, chunk_cache, &mut inode)?;
+           self.v6_dump_file(ctx, f_bootstrap, chunk_cache, &mut inode, &blobs)?;
        } else if self.is_symlink() {
            self.v6_dump_symlink(ctx, f_bootstrap, &mut inode)?;
        } else {
@@ -86,17 +87,12 @@ impl Node {
    /// Update whether compact mode can be used for this inode or not.
    pub fn v6_set_inode_compact(&mut self) {
-       if self.info.v6_force_extended_inode
-           || self.inode.uid() > u16::MAX as u32
-           || self.inode.gid() > u16::MAX as u32
-           || self.inode.nlink() > u16::MAX as u32
-           || self.inode.size() > u32::MAX as u64
-           || self.path().extension() == Some(OsStr::new("pyc"))
-       {
-           self.v6_compact_inode = false;
-       } else {
-           self.v6_compact_inode = true;
-       }
+       self.v6_compact_inode = !(self.info.v6_force_extended_inode
+           || self.inode.uid() > u16::MAX as u32
+           || self.inode.gid() > u16::MAX as u32
+           || self.inode.nlink() > u16::MAX as u32
+           || self.inode.size() > u32::MAX as u64
+           || self.path().extension() == Some(OsStr::new("pyc")));
    }
    /// Layout the normal inode (except directory inode) into the meta blob.
@@ -182,10 +178,9 @@ impl Node {
        } else {
            // Avoid sorting again if "." and ".." are at the head after sorting due to that
            // `tree.children` has already been sorted.
-           d_size = (".".as_bytes().len()
-               + size_of::<RafsV6Dirent>()
-               + "..".as_bytes().len()
-               + size_of::<RafsV6Dirent>()) as u64;
+           d_size =
+               (".".len() + size_of::<RafsV6Dirent>() + "..".len() + size_of::<RafsV6Dirent>())
+                   as u64;
            for child in tree.children.iter() {
                let len = child.name().len() + size_of::<RafsV6Dirent>();
                // erofs disk format requires dirent to be aligned to block size.
@@ -458,6 +453,7 @@ impl Node {
        f_bootstrap: &mut dyn RafsIoWrite,
        chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>,
        inode: &mut Box<dyn RafsV6OndiskInode>,
+       blobs: &[Arc<BlobInfo>],
    ) -> Result<()> {
        let mut is_continuous = true;
        let mut prev = None;
@@ -479,8 +475,15 @@ impl Node {
            v6_chunk.set_block_addr(blk_addr);

            chunks.extend(v6_chunk.as_ref());
+           let external =
+               blobs[chunk.inner.blob_index() as usize].has_feature(BlobFeatures::EXTERNAL);
+           let chunk_index = if external {
+               Some(chunk.inner.index())
+           } else {
+               None
+           };
            chunk_cache.insert(
-               DigestWithBlobIndex(*chunk.inner.id(), chunk.inner.blob_index() + 1),
+               DigestWithBlobIndex(*chunk.inner.id(), chunk.inner.blob_index() + 1, chunk_index),
                chunk.inner.clone(),
            );
            if let Some((prev_idx, prev_pos)) = prev {
@@ -581,7 +584,7 @@ impl BuildContext {
    pub(crate) fn v6_update_dirents(parent: &Tree, parent_offset: u64) {
-       let mut node = parent.lock_node();
+       let mut node = parent.borrow_mut_node();
        let node_offset = node.v6_offset;
        if !node.is_dir() {
            return;
@@ -601,7 +604,7 @@ impl Bootstrap {
        let mut dirs: Vec<&Tree> = Vec::new();
        for child in parent.children.iter() {
-           let child_node = child.lock_node();
+           let child_node = child.borrow_mut_node();
            let entry = (
                child_node.v6_offset,
                OsStr::from_bytes(child.name()).to_owned(),
@@ -675,7 +678,7 @@ impl Bootstrap {
        // When using nid 0 as root nid,
        // the root directory will not be shown by glibc's getdents/readdir.
        // Because in some OS, ino == 0 represents corresponding file is deleted.
-       let root_node_offset = self.tree.lock_node().v6_offset;
+       let root_node_offset = self.tree.borrow_mut_node().v6_offset;
        let orig_meta_addr = root_node_offset - EROFS_BLOCK_SIZE_4096;
        let meta_addr = if blob_table_size > 0 {
            align_offset(
@@ -709,12 +712,13 @@ impl Bootstrap {
        timing_tracer!(
            {
                self.tree.walk_bfs(true, &mut |n| {
-                   n.lock_node().dump_bootstrap_v6(
+                   n.borrow_mut_node().dump_bootstrap_v6(
                        ctx,
                        bootstrap_ctx.writer.as_mut(),
                        orig_meta_addr,
                        meta_addr,
                        &mut chunk_cache,
+                       &blobs,
                    )
                })
            },
@@ -916,6 +920,7 @@ mod tests {
            pa_aa.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            false,
            false,
        )
@@ -943,6 +948,7 @@ mod tests {
            pa.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            false,
            false,
        )
@@ -1039,6 +1045,7 @@ mod tests {
            pa_reg.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            false,
            false,
        )
@@ -1052,6 +1059,7 @@ mod tests {
            pa_pyc.as_path().to_path_buf(),
            Overlay::UpperAddition,
            RAFS_DEFAULT_CHUNK_SIZE as u32,
+           0,
            false,
            false,
        )
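The extra `Option<u32>` added to `DigestWithBlobIndex` above keeps external chunks distinct in the BTreeMap: external blobs are not deduplicated by digest, so two chunks sharing a digest and blob index must not collapse into one entry. A minimal sketch with a simplified key type (assumed, not the real struct):

    use std::collections::BTreeMap;

    fn main() {
        type Key = ([u8; 32], u32, Option<u32>);
        let mut cache: BTreeMap<Key, &str> = BTreeMap::new();
        let digest = [0u8; 32];
        // Same digest and blob index, distinguished only by the chunk index.
        cache.insert((digest, 1, Some(0)), "external chunk 0");
        cache.insert((digest, 1, Some(1)), "external chunk 1");
        assert_eq!(cache.len(), 2); // without the index this would collapse to 1
    }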

@@ -5,14 +5,15 @@
use std::fs;
use std::fs::DirEntry;

-use anyhow::{Context, Result};
+use anyhow::{anyhow, Context, Result};
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};

use crate::core::context::{Artifact, NoopArtifactWriter};
+use crate::core::prefetch;

use super::core::blob::Blob;
use super::core::context::{
-   ArtifactWriter, BlobManager, BootstrapContext, BootstrapManager, BuildContext, BuildOutput,
+   ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
};
use super::core::node::Node;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
@@ -29,14 +30,14 @@ impl FilesystemTreeBuilder {
    fn load_children(
        &self,
        ctx: &mut BuildContext,
-       bootstrap_ctx: &mut BootstrapContext,
        parent: &TreeNode,
        layer_idx: u16,
-   ) -> Result<Vec<Tree>> {
-       let mut result = Vec::new();
-       let parent = parent.lock().unwrap();
+   ) -> Result<(Vec<Tree>, Vec<Tree>)> {
+       let mut trees = Vec::new();
+       let mut external_trees = Vec::new();
+       let parent = parent.borrow();
        if !parent.is_dir() {
-           return Ok(result);
+           return Ok((trees.clone(), external_trees));
        }

        let children = fs::read_dir(parent.path())
@@ -46,12 +47,26 @@ impl FilesystemTreeBuilder {
event_tracer!("load_from_directory", +children.len()); event_tracer!("load_from_directory", +children.len());
for child in children { for child in children {
let path = child.path(); let path = child.path();
let target = Node::generate_target(&path, &ctx.source_path);
let mut file_size: u64 = 0;
if ctx.attributes.is_external(&target) {
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
file_size = value.parse::<u64>().ok().ok_or_else(|| {
anyhow!(
"failed to parse file_size for external file {}",
&target.display()
)
})?;
}
}
let mut child = Node::from_fs_object( let mut child = Node::from_fs_object(
ctx.fs_version, ctx.fs_version,
ctx.source_path.clone(), ctx.source_path.clone(),
path.clone(), path.clone(),
Overlay::UpperAddition, Overlay::UpperAddition,
ctx.chunk_size, ctx.chunk_size,
file_size,
parent.info.explicit_uidgid, parent.info.explicit_uidgid,
true, true,
) )
@@ -60,24 +75,41 @@ impl FilesystemTreeBuilder {
            // as per OCI spec, whiteout file should not be present within final image
            // or filesystem, only existed in layers.
-           if !bootstrap_ctx.layered
+           if layer_idx == 0
                && child.whiteout_type(ctx.whiteout_spec).is_some()
                && !child.is_overlayfs_opaque(ctx.whiteout_spec)
            {
                continue;
            }

-           let mut child = Tree::new(child);
-           child.children = self.load_children(ctx, bootstrap_ctx, &child.node, layer_idx)?;
+           let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
+           let (child_children, external_children) =
+               self.load_children(ctx, &child.node, layer_idx)?;
+           child.children = child_children;
+           external_child.children = external_children;
            child
-               .lock_node()
+               .borrow_mut_node()
                .v5_set_dir_size(ctx.fs_version, &child.children);
-           result.push(child);
+           external_child
+               .borrow_mut_node()
+               .v5_set_dir_size(ctx.fs_version, &external_child.children);
+
+           if ctx.attributes.is_external(&target) {
+               external_trees.push(external_child);
+           } else {
+               // TODO: need to implement type=ignore for nydus attributes,
+               // let's ignore the tree for workaround.
+               trees.push(child.clone());
+               if ctx.attributes.is_prefix_external(target) {
+                   external_trees.push(external_child);
+               }
+           };
        }

-       result.sort_unstable_by(|a, b| a.name().cmp(b.name()));
+       trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
+       external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));

-       Ok(result)
+       Ok((trees, external_trees))
    }
}
@@ -90,57 +122,46 @@ impl DirectoryBuilder {
    }

    /// Build node tree from a filesystem directory
-   fn build_tree(
-       &mut self,
-       ctx: &mut BuildContext,
-       bootstrap_ctx: &mut BootstrapContext,
-       layer_idx: u16,
-   ) -> Result<Tree> {
+   fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
        let node = Node::from_fs_object(
            ctx.fs_version,
            ctx.source_path.clone(),
            ctx.source_path.clone(),
            Overlay::UpperAddition,
            ctx.chunk_size,
+           0,
            ctx.explicit_uidgid,
            true,
        )?;
-       let mut tree = Tree::new(node);
+       let mut tree = Tree::new(node.clone());
+       let mut external_tree = Tree::new(node);
        let tree_builder = FilesystemTreeBuilder::new();

-       tree.children = timing_tracer!(
-           { tree_builder.load_children(ctx, bootstrap_ctx, &tree.node, layer_idx) },
+       let (tree_children, external_tree_children) = timing_tracer!(
+           { tree_builder.load_children(ctx, &tree.node, layer_idx) },
            "load_from_directory"
        )?;
-       tree.lock_node()
+       tree.children = tree_children;
+       external_tree.children = external_tree_children;
+       tree.borrow_mut_node()
            .v5_set_dir_size(ctx.fs_version, &tree.children);
+       external_tree
+           .borrow_mut_node()
+           .v5_set_dir_size(ctx.fs_version, &external_tree.children);

-       Ok(tree)
+       Ok((tree, external_tree))
    }
-}

-impl Builder for DirectoryBuilder {
-   fn build(
+   fn one_build(
        &mut self,
        ctx: &mut BuildContext,
        bootstrap_mgr: &mut BootstrapManager,
        blob_mgr: &mut BlobManager,
+       blob_writer: &mut Box<dyn Artifact>,
+       tree: Tree,
    ) -> Result<BuildOutput> {
-       let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
-       let layer_idx = u16::from(bootstrap_ctx.layered);
-       let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
-           Box::new(ArtifactWriter::new(blob_stor)?)
-       } else {
-           Box::<NoopArtifactWriter>::default()
-       };
-
-       // Scan source directory to build upper layer tree.
-       let tree = timing_tracer!(
-           { self.build_tree(ctx, &mut bootstrap_ctx, layer_idx) },
-           "build_tree"
-       )?;
-
        // Build bootstrap
+       let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
        let mut bootstrap = timing_tracer!(
            { build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
            "build_bootstrap"
        )?;
@@ -192,6 +213,55 @@ impl Builder for DirectoryBuilder {
        lazy_drop(bootstrap_ctx);

-       BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
+       BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
impl Builder for DirectoryBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
// Scan source directory to build upper layer tree.
let (tree, external_tree) =
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build for tree
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
// Build for external tree
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
stor.add_suffix("external")
}
let mut external_blob_writer: Box<dyn Artifact> =
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let external_output = self.one_build(
ctx,
&mut external_bootstrap_mgr,
&mut external_blob_mgr,
&mut external_blob_writer,
external_tree,
)?;
output.external_bootstrap_path = external_output.bootstrap_path;
output.external_blobs = external_output.blobs;
Ok(output)
    }
}

@@ -23,7 +23,11 @@ use sha2::Digest;
use self::core::node::{Node, NodeInfo};

+pub use self::chunkdict_generator::ChunkdictBlobInfo;
+pub use self::chunkdict_generator::ChunkdictChunkInfo;
+pub use self::chunkdict_generator::Generator;
pub use self::compact::BlobCompactor;
+pub use self::compact::Config as CompactConfig;
pub use self::core::bootstrap::Bootstrap;
pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict};
pub use self::core::context::{
@@ -37,13 +41,18 @@ pub use self::core::prefetch::{Prefetch, PrefetchPolicy};
pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode};
pub use self::directory::DirectoryBuilder;
pub use self::merge::Merger;
+pub use self::optimize_prefetch::update_ctx_from_bootstrap;
+pub use self::optimize_prefetch::OptimizePrefetch;
pub use self::stargz::StargzBuilder;
pub use self::tarball::TarballBuilder;

+pub mod attributes;
+mod chunkdict_generator;
mod compact;
mod core;
mod directory;
mod merge;
+mod optimize_prefetch;
mod stargz;
mod tarball;
@@ -112,9 +121,14 @@ fn dump_bootstrap(
    if ctx.blob_inline_meta {
        assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
        // Ensure the blob object is created in case of no chunks generated for the blob.
-       let (_, blob_ctx) = blob_mgr
-           .get_or_create_current_blob(ctx)
-           .map_err(|_e| anyhow!("failed to get current blob object"))?;
+       let blob_ctx = if blob_mgr.external {
+           &mut blob_mgr.new_blob_ctx(ctx)?
+       } else {
+           let (_, blob_ctx) = blob_mgr
+               .get_or_create_current_blob(ctx)
+               .map_err(|_e| anyhow!("failed to get current blob object"))?;
+           blob_ctx
+       };
        let bootstrap_offset = blob_writer.pos()?;
        let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
        let uncompressed_size = uncompressed_bootstrap.len();
@@ -244,7 +258,6 @@ fn finalize_blob(
            blob_cache.finalize(&blob_ctx.blob_id)?;
        }
    }
-
    Ok(())

@@ -129,7 +129,7 @@ impl Merger {
        }

        let mut tree: Option<Tree> = None;
-       let mut blob_mgr = BlobManager::new(ctx.digester);
+       let mut blob_mgr = BlobManager::new(ctx.digester, false);
        let mut blob_idx_map = HashMap::new();
        let mut parent_layers = 0;
@@ -257,7 +257,7 @@ impl Merger {
            let upper = Tree::from_bootstrap(&rs, &mut ())?;
            upper.walk_bfs(true, &mut |n| {
-               let mut node = n.lock_node();
+               let mut node = n.borrow_mut_node();
                for chunk in &mut node.chunks {
                    let origin_blob_index = chunk.inner.blob_index() as usize;
                    let blob_ctx = blobs[origin_blob_index].as_ref();
@ -304,15 +304,40 @@ impl Merger {
ctx.chunk_size = chunk_size; ctx.chunk_size = chunk_size;
} }
// After merging all trees, we need to re-calculate the blob index of
// referenced blobs, as the upper tree might have deleted some files
// or directories by opaques, and some blobs are dereferenced.
let mut used_blobs = HashMap::new(); // HashMap<blob_id, new_blob_index>
let mut used_blob_mgr = BlobManager::new(ctx.digester, false);
let origin_blobs = blob_mgr.get_blobs();
tree.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = origin_blobs[origin_blob_index].clone();
let origin_blob_id = blob_ctx.blob_id();
let new_blob_index = if let Some(new_blob_index) = used_blobs.get(&origin_blob_id) {
*new_blob_index
} else {
let new_blob_index = used_blob_mgr.len();
used_blobs.insert(origin_blob_id, new_blob_index);
used_blob_mgr.add_blob(blob_ctx);
new_blob_index
};
chunk.set_blob_index(new_blob_index as u32);
}
Ok(())
})?;
let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?; let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?;
let mut bootstrap = Bootstrap::new(tree)?; let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?; bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?; let blob_table = used_blob_mgr.to_blob_table(ctx)?;
let mut bootstrap_storage = Some(target.clone()); let mut bootstrap_storage = Some(target.clone());
bootstrap bootstrap
.dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table) .dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table)
.context(format!("dump bootstrap to {:?}", target.display()))?; .context(format!("dump bootstrap to {:?}", target.display()))?;
BuildOutput::new(&blob_mgr, &bootstrap_storage) BuildOutput::new(&used_blob_mgr, None, &bootstrap_storage, &None)
} }
} }
@ -409,7 +434,7 @@ mod tests {
); );
assert!(build_output.is_ok()); assert!(build_output.is_ok());
let build_output = build_output.unwrap(); let build_output = build_output.unwrap();
println!("BuildOutpu: {}", build_output); println!("BuildOutput: {}", build_output);
assert_eq!(build_output.blob_size, Some(16)); assert_eq!(build_output.blob_size, Some(16));
} }
} }
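The new block in `Merger::merge` above compacts the blob table after all trees are merged: a blob that no longer backs any chunk (for example because an upper layer deleted its files via opaques) is dropped, and the surviving blobs are renumbered in first-use order. A standalone sketch of that remapping, with types reduced to plain strings and indices:

```rust
use std::collections::HashMap;

// Walk every chunk, resolve its old blob index to a blob id, and assign a
// compacted index; blobs never visited simply never enter the new table.
fn compact_blob_indices(chunk_indices: &mut [u32], blobs: &[String]) -> Vec<String> {
    let mut used: HashMap<String, u32> = HashMap::new(); // blob_id -> new index
    let mut table: Vec<String> = Vec::new();
    for index in chunk_indices.iter_mut() {
        let blob_id = blobs[*index as usize].clone();
        let new_index = *used.entry(blob_id.clone()).or_insert_with(|| {
            table.push(blob_id.clone());
            (table.len() - 1) as u32
        });
        *index = new_index;
    }
    table
}

fn main() {
    // Blob "b" lost all references after whiteouts removed its files.
    let blobs: Vec<String> = ["a", "b", "c"].iter().map(|s| s.to_string()).collect();
    let mut chunk_indices = vec![0, 2, 0]; // chunks referencing "a", "c", "a"
    let table = compact_blob_indices(&mut chunk_indices, &blobs);
    assert_eq!(table, vec!["a".to_string(), "c".to_string()]);
    assert_eq!(chunk_indices, vec![0, 1, 0]); // "c" renumbered from 2 to 1
    println!("compacted table: {:?}", table);
}
```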

View File

@@ -0,0 +1,302 @@
use crate::anyhow;
use crate::core::blob::Blob;
use crate::finalize_blob;
use crate::Artifact;
use crate::ArtifactWriter;
use crate::BlobContext;
use crate::BlobManager;
use crate::Bootstrap;
use crate::BootstrapManager;
use crate::BuildContext;
use crate::BuildOutput;
use crate::ChunkSource;
use crate::ConversionType;
use crate::NodeChunk;
use crate::Path;
use crate::PathBuf;
use crate::Tree;
use crate::TreeNode;
use anyhow::Context;
use anyhow::{Ok, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::layout::RafsBlobTable;
use nydus_rafs::metadata::RafsSuper;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobInfo;
use nydus_storage::meta::BatchContextGenerator;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress;
use sha2::Digest;
use std::fs::File;
use std::io::{Read, Seek, Write};
use std::mem::size_of;
use std::sync::Arc;

pub struct OptimizePrefetch {}

struct PrefetchBlobState {
    blob_info: BlobInfo,
    blob_ctx: BlobContext,
    blob_writer: Box<dyn Artifact>,
}

impl PrefetchBlobState {
    fn new(ctx: &BuildContext, blob_layer_num: u32, blobs_dir_path: &Path) -> Result<Self> {
        let mut blob_info = BlobInfo::new(
            blob_layer_num,
            String::from("prefetch-blob"),
            0,
            0,
            ctx.chunk_size,
            u32::MAX,
            ctx.blob_features,
        );
        blob_info.set_compressor(ctx.compressor);
        blob_info.set_separated_with_prefetch_files_feature(true);
        let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
        blob_ctx.blob_meta_info_enabled = true;
        let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
            blobs_dir_path.to_path_buf(),
            String::new(),
        )))
        .map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
        Ok(Self {
            blob_info,
            blob_ctx,
            blob_writer,
        })
    }
}

impl OptimizePrefetch {
    /// Generate a new bootstrap for prefetch.
    pub fn generate_prefetch(
        tree: &mut Tree,
        ctx: &mut BuildContext,
        bootstrap_mgr: &mut BootstrapManager,
        blob_table: &mut RafsBlobTable,
        blobs_dir_path: PathBuf,
        prefetch_nodes: Vec<TreeNode>,
    ) -> Result<BuildOutput> {
        // create a new blob for prefetch layer
        let blob_layer_num = match blob_table {
            RafsBlobTable::V5(table) => table.get_all().len(),
            RafsBlobTable::V6(table) => table.get_all().len(),
        };
        let mut blob_state = PrefetchBlobState::new(&ctx, blob_layer_num as u32, &blobs_dir_path)?;
        let mut batch = BatchContextGenerator::new(0)?;
        for node in &prefetch_nodes {
            Self::process_prefetch_node(
                tree,
                &node,
                &mut blob_state,
                &mut batch,
                blob_table,
                &blobs_dir_path,
            )?;
        }

        let blob_mgr = Self::dump_blob(ctx, blob_table, &mut blob_state)?;
        debug!("prefetch blob id: {}", ctx.blob_id);

        Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
        BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
    }

    fn build_dump_bootstrap(
        tree: &mut Tree,
        ctx: &mut BuildContext,
        bootstrap_mgr: &mut BootstrapManager,
        blob_table: &mut RafsBlobTable,
    ) -> Result<()> {
        let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
        let mut bootstrap = Bootstrap::new(tree.clone())?;

        // Build bootstrap
        bootstrap.build(ctx, &mut bootstrap_ctx)?;

        let blob_table_withprefetch = match blob_table {
            RafsBlobTable::V5(table) => RafsBlobTable::V5(table.clone()),
            RafsBlobTable::V6(table) => RafsBlobTable::V6(table.clone()),
        };
        bootstrap.dump(
            ctx,
            &mut bootstrap_mgr.bootstrap_storage,
            &mut bootstrap_ctx,
            &blob_table_withprefetch,
        )?;
        Ok(())
    }

    fn dump_blob(
        ctx: &mut BuildContext,
        blob_table: &mut RafsBlobTable,
        blob_state: &mut PrefetchBlobState,
    ) -> Result<BlobManager> {
        match blob_table {
            RafsBlobTable::V5(table) => {
                table.entries.push(blob_state.blob_info.clone().into());
            }
            RafsBlobTable::V6(table) => {
                table.entries.push(blob_state.blob_info.clone().into());
            }
        }

        let mut blob_mgr = BlobManager::new(ctx.digester, false);
        blob_mgr.add_blob(blob_state.blob_ctx.clone());
        blob_mgr.set_current_blob_index(0);
        Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
        if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
            Blob::dump_meta_data(&ctx, blob_ctx, blob_state.blob_writer.as_mut()).unwrap();
        };
        ctx.blob_id = String::from("");
        blob_mgr.get_current_blob().unwrap().1.blob_id = String::from("");
        finalize_blob(ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;

        ctx.blob_id = blob_mgr
            .get_current_blob()
            .ok_or(anyhow!("failed to get current blob"))?
            .1
            .blob_id
            .clone();
        let entries = match blob_table {
            RafsBlobTable::V5(table) => table.get_all(),
            RafsBlobTable::V6(table) => table.get_all(),
        };

        // Verify and update prefetch blob
        assert!(
            entries
                .iter()
                .filter(|blob| blob.blob_id() == "prefetch-blob")
                .count()
                == 1,
            "Expected exactly one prefetch-blob"
        );

        // Rewrite prefetch blob id
        match blob_table {
            RafsBlobTable::V5(table) => {
                rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
            }
            RafsBlobTable::V6(table) => {
                rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
            }
        }
        Ok(blob_mgr)
    }

    fn process_prefetch_node(
        tree: &mut Tree,
        node: &TreeNode,
        prefetch_state: &mut PrefetchBlobState,
        batch: &mut BatchContextGenerator,
        blob_table: &RafsBlobTable,
        blobs_dir_path: &Path,
    ) -> Result<()> {
        let tree_node = tree
            .get_node_mut(&node.borrow().path())
            .ok_or(anyhow!("failed to get node"))?
            .node
            .as_ref();
        let entries = match blob_table {
            RafsBlobTable::V5(table) => table.get_all(),
            RafsBlobTable::V6(table) => table.get_all(),
        };
        let blob_id = tree_node
            .borrow()
            .chunks
            .first()
            .and_then(|chunk| entries.get(chunk.inner.blob_index() as usize).cloned())
            .map(|entry| entry.blob_id())
            .ok_or(anyhow!("failed to get blob id"))?;
        let mut blob_file = Arc::new(File::open(blobs_dir_path.join(blob_id))?);

        tree_node.borrow_mut().layer_idx = prefetch_state.blob_info.blob_index() as u16;

        let mut child = tree_node.borrow_mut();
        let chunks: &mut Vec<NodeChunk> = child.chunks.as_mut();
        let blob_ctx = &mut prefetch_state.blob_ctx;
        let blob_info = &mut prefetch_state.blob_info;
        let encrypted = blob_ctx.blob_compressor != compress::Algorithm::None;

        for chunk in chunks {
            let inner = Arc::make_mut(&mut chunk.inner);

            let mut buf = vec![0u8; inner.compressed_size() as usize];
            blob_file.seek(std::io::SeekFrom::Start(inner.compressed_offset()))?;
            blob_file.read_exact(&mut buf)?;
            prefetch_state.blob_writer.write_all(&buf)?;
            let info = batch.generate_chunk_info(
                blob_ctx.current_compressed_offset,
                blob_ctx.current_uncompressed_offset,
                inner.uncompressed_size(),
                encrypted,
            )?;
            inner.set_blob_index(blob_info.blob_index());
            if blob_ctx.chunk_count == u32::MAX {
                blob_ctx.chunk_count = 0;
            }
            inner.set_index(blob_ctx.chunk_count);
            blob_ctx.chunk_count += 1;
            inner.set_compressed_offset(blob_ctx.current_compressed_offset);
            inner.set_uncompressed_offset(blob_ctx.current_uncompressed_offset);

            let aligned_d_size: u64 = nydus_utils::try_round_up_4k(inner.uncompressed_size())
                .ok_or_else(|| anyhow!("invalid size"))?;
            blob_ctx.compressed_blob_size += inner.compressed_size() as u64;
            blob_ctx.uncompressed_blob_size += aligned_d_size;
            blob_ctx.current_compressed_offset += inner.compressed_size() as u64;
            blob_ctx.current_uncompressed_offset += aligned_d_size;
            blob_ctx.add_chunk_meta_info(&inner, Some(info))?;
            blob_ctx.blob_hash.update(&buf);

            blob_info.set_meta_ci_compressed_size(
                (blob_info.meta_ci_compressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
                    as usize,
            );
            blob_info.set_meta_ci_uncompressed_size(
                (blob_info.meta_ci_uncompressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
                    as usize,
            );
        }
        Ok(())
    }
}

fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: String) {
    entries
        .iter_mut()
        .filter(|blob| blob.blob_id() == blob_id)
        .for_each(|blob| {
            let mut info = (**blob).clone();
            info.set_blob_id(new_blob_id.clone());
            *blob = Arc::new(info);
        });
}

pub fn update_ctx_from_bootstrap(
    ctx: &mut BuildContext,
    config: Arc<ConfigV2>,
    bootstrap_path: &Path,
) -> Result<RafsSuper> {
    let (sb, _) = RafsSuper::load_from_file(bootstrap_path, config, false)?;

    ctx.blob_features = sb
        .superblock
        .get_blob_infos()
        .first()
        .ok_or_else(|| anyhow!("No blob info found in superblock"))?
        .features();

    let config = sb.meta.get_config();
    if config.is_tarfs_mode {
        ctx.conversion_type = ConversionType::TarToRafs;
    }

    ctx.fs_version =
        RafsVersion::try_from(sb.meta.version).context("Failed to get RAFS version")?;
    ctx.compressor = config.compressor;
    Ok(sb)
}
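`rewrite_blob_id` and the chunk loop in `process_prefetch_node` above both mutate data shared through `Arc` with Rust's copy-on-write idioms: either clone the inner value, modify it, and swap the pointer, or let `Arc::make_mut` perform the clone only when the value is actually shared. A minimal std-only sketch of both forms (ids and digests are made up):

```rust
use std::sync::Arc;

#[derive(Clone, Debug)]
struct BlobInfo {
    blob_id: String,
}

// Same shape as `rewrite_blob_id` above: clone-then-swap on every match.
fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: &str) {
    entries
        .iter_mut()
        .filter(|blob| blob.blob_id == blob_id)
        .for_each(|blob| {
            let mut info = (**blob).clone();
            info.blob_id = new_blob_id.to_string();
            *blob = Arc::new(info);
        });
}

fn main() {
    let mut entries = vec![
        Arc::new(BlobInfo { blob_id: "prefetch-blob".into() }),
        Arc::new(BlobInfo { blob_id: "layer-0".into() }),
    ];
    // `Arc::make_mut`, as used on `chunk.inner`: clones the inner value
    // only if the Arc is shared, then hands back a mutable reference.
    Arc::make_mut(&mut entries[1]).blob_id = "layer-0-rewritten".into();
    rewrite_blob_id(&mut entries, "prefetch-blob", "sha256:0123abcd"); // hypothetical digest
    println!("{:?}", entries);
}
```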

View File

@@ -58,10 +58,10 @@ struct TocEntry {
     /// - block: block device
     /// - fifo: fifo
     /// - chunk: a chunk of regular file data As described in the above section,
     /// a regular file can be divided into several chunks. TOCEntry MUST be created for each chunk.
     /// TOCEntry of the first chunk of that file MUST be typed as reg. TOCEntry of each chunk after
     /// 2nd MUST be typed as chunk. chunk TOCEntry MUST set offset, chunkOffset and chunkSize
     /// properties.
     #[serde(rename = "type")]
     pub toc_type: String,
@@ -456,7 +456,7 @@ impl StargzBuilder {
                 uncompressed_offset: self.uncompressed_offset,
                 file_offset: entry.chunk_offset as u64,
                 index: 0,
-                reserved: 0,
+                crc32: 0,
             });
             let chunk = NodeChunk {
                 source: ChunkSource::Build,
@@ -601,7 +601,7 @@
             }
         }
-        let mut tmp_node = tmp_tree.lock_node();
+        let mut tmp_node = tmp_tree.borrow_mut_node();
         if !tmp_node.is_reg() {
             bail!(
                 "stargz: target {} for hardlink {} is not a regular file",
@@ -788,7 +788,7 @@
         bootstrap
             .tree
             .walk_bfs(true, &mut |n| {
-                let mut node = n.lock_node();
+                let mut node = n.borrow_mut_node();
                 let node_path = node.path();
                 if let Some((size, ref mut chunks)) = self.file_chunk_map.get_mut(node_path) {
                     node.inode.set_size(*size);
@@ -802,9 +802,9 @@
         for (k, v) in self.hardlink_map.iter() {
             match bootstrap.tree.get_node(k) {
-                Some(n) => {
-                    let mut node = n.lock_node();
-                    let target = v.lock().unwrap();
+                Some(t) => {
+                    let mut node = t.borrow_mut_node();
+                    let target = v.borrow();
                     node.inode.set_size(target.inode.size());
                     node.inode.set_child_count(target.inode.child_count());
                     node.chunks = target.chunks.clone();
@@ -904,14 +904,16 @@ impl Builder for StargzBuilder {
         lazy_drop(bootstrap_ctx);
-        BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
+        BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
     }
 }
 #[cfg(test)]
 mod tests {
     use super::*;
-    use crate::{ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec};
+    use crate::{
+        attributes::Attributes, ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec,
+    };
     #[test]
     fn test_build_stargz_toc() {
@@ -932,16 +934,20 @@
             ConversionType::EStargzIndexToRef,
             source_path,
             prefetch,
-            Some(ArtifactStorage::FileDir(tmp_dir.clone())),
+            Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
+            None,
             false,
             Features::new(),
             false,
+            Attributes::default(),
         );
         ctx.fs_version = RafsVersion::V6;
         ctx.conversion_type = ConversionType::EStargzToRafs;
-        let mut bootstrap_mgr =
-            BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir.clone())), None);
-        let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
+        let mut bootstrap_mgr = BootstrapManager::new(
+            Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
+            None,
+        );
+        let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
         let mut builder = StargzBuilder::new(0x1000000, &ctx);
         let builder = builder.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr);
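The `TocEntry` documentation above encodes a rule worth restating: a regular file split into several chunks appears in the TOC as one entry typed `reg` followed by entries typed `chunk`, each carrying the `offset`, `chunkOffset`, and `chunkSize` properties. A sketch of parsing such entries with serde, assuming `serde` (with derive) and `serde_json` as dependencies; the struct below is a simplified stand-in for the builder's real `TocEntry`, keeping only the fields named above and the same `type` to `toc_type` rename:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct TocEntry {
    #[serde(rename = "type")]
    toc_type: String,
    name: String,
    #[serde(default)]
    offset: u64,
    #[serde(rename = "chunkOffset", default)]
    chunk_offset: u64,
    #[serde(rename = "chunkSize", default)]
    chunk_size: u64,
}

fn main() {
    // One regular file in two chunks: the first entry MUST be `reg`,
    // every later chunk of the same file MUST be `chunk`.
    let toc = r#"[
        {"type": "reg",   "name": "a.bin", "offset": 512,  "chunkOffset": 0,    "chunkSize": 4096},
        {"type": "chunk", "name": "a.bin", "offset": 5120, "chunkOffset": 4096, "chunkSize": 4096}
    ]"#;
    let entries: Vec<TocEntry> = serde_json::from_str(toc).unwrap();
    assert_eq!(entries[0].toc_type, "reg");
    assert_eq!(entries[1].toc_type, "chunk");
    assert_eq!(entries[1].chunk_offset, entries[0].chunk_size);
    println!("{:#?}", entries);
}
```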

View File

@@ -8,11 +8,11 @@
 //!
 //! The tarball data is arrange as a sequence of tar headers with associated file data interleaved.
 //! - (tar header) (tar header) (file data) (tar header) (file data) (tar header)
 //! And to support read tarball data from FIFO, we could only go over the tarball stream once.
 //! So the workflow is as:
 //! - for each tar header from the stream
 //! -- generate RAFS filesystem node from the tar header
 //! -- optionally dump file data associated with the tar header into RAFS data blob
 //! - arrange all generated RAFS nodes into a RAFS filesystem tree
 //! - dump the RAFS filesystem tree into RAFS metadata blob
 use std::ffi::{OsStr, OsString};
@@ -349,7 +349,7 @@ impl<'a> TarballTreeBuilder<'a> {
                 }
             }
         }
-        let mut tmp_node = tmp_tree.lock_node();
+        let mut tmp_node = tmp_tree.borrow_mut_node();
         if !tmp_node.is_reg() {
             bail!(
                 "tarball: target {} for hardlink {} is not a regular file",
@@ -452,7 +452,7 @@
         // Tar hardlink header has zero file size and no file data associated, so copy value from
         // the associated regular file.
         if let Some(t) = hardlink_target {
-            let n = t.lock_node();
+            let n = t.borrow_mut_node();
             if n.inode.is_v5() {
                 node.inode.set_digest(n.inode.digest().to_owned());
             }
@@ -540,7 +540,7 @@
         for c in &mut tree.children {
             Self::set_v5_dir_size(c);
         }
-        let mut node = tree.lock_node();
+        let mut node = tree.borrow_mut_node();
         node.v5_set_dir_size(RafsVersion::V5, &tree.children);
     }
@@ -659,13 +659,14 @@ impl Builder for TarballBuilder {
         lazy_drop(bootstrap_ctx);
-        BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
+        BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
     }
 }
 #[cfg(test)]
 mod tests {
     use super::*;
+    use crate::attributes::Attributes;
     use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
     use nydus_utils::{compress, digest};
@@ -687,14 +688,18 @@
             ConversionType::TarToTarfs,
             source_path,
             prefetch,
-            Some(ArtifactStorage::FileDir(tmp_dir.clone())),
+            Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
+            None,
             false,
             Features::new(),
             false,
+            Attributes::default(),
         );
-        let mut bootstrap_mgr =
-            BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
-        let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
+        let mut bootstrap_mgr = BootstrapManager::new(
+            Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
+            None,
+        );
+        let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
         let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
         builder
             .build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
@@ -719,14 +724,18 @@
             ConversionType::TarToTarfs,
             source_path,
             prefetch,
-            Some(ArtifactStorage::FileDir(tmp_dir.clone())),
+            Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
+            None,
            false,
             Features::new(),
             true,
+            Attributes::default(),
         );
-        let mut bootstrap_mgr =
-            BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
-        let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
+        let mut bootstrap_mgr = BootstrapManager::new(
+            Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
+            None,
+        );
+        let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
         let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
         builder
             .build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
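The module documentation above states the key constraint on the tarball builder: when the source is a FIFO, the stream can be traversed exactly once, so each node must be generated, and its file data optionally dumped, in the order the tar headers arrive. A sketch of that one-pass loop using the community `tar` crate (an assumption made for illustration only; the file name is hypothetical):

```rust
use std::io::Read;

// Single pass: for each tar header, derive a node, then drain the entry's
// data before moving on, because a FIFO cannot be rewound.
fn walk_tar_once<R: Read>(reader: R) -> std::io::Result<()> {
    let mut archive = tar::Archive::new(reader);
    for entry in archive.entries()? {
        let mut entry = entry?;
        // "generate RAFS filesystem node from the tar header"
        let path = entry.path()?.into_owned();
        let size = entry.header().size()?;
        println!("node: {} ({} bytes)", path.display(), size);
        // "optionally dump file data associated with the tar header";
        // here the data is just drained to keep the stream position right.
        std::io::copy(&mut entry, &mut std::io::sink())?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    walk_tar_once(std::fs::File::open("layer.tar")?)
}
```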

View File

@@ -5,7 +5,7 @@ description = "C wrapper library for Nydus SDK"
 authors = ["The Nydus Developers"]
 license = "Apache-2.0"
 homepage = "https://nydus.dev/"
-repository = "https://github.com/dragonflyoss/image-service"
+repository = "https://github.com/dragonflyoss/nydus"
 edition = "2021"
 [lib]
@@ -15,10 +15,10 @@ crate-type = ["cdylib", "staticlib"]
 [dependencies]
 libc = "0.2.137"
 log = "0.4.17"
-fuse-backend-rs = "^0.10.3"
-nydus-api = { version = "0.3", path = "../api" }
-nydus-rafs = { version = "0.3.1", path = "../rafs" }
-nydus-storage = { version = "0.6.3", path = "../storage" }
+fuse-backend-rs = "^0.12.0"
+nydus-api = { version = "0.4.0", path = "../api" }
+nydus-rafs = { version = "0.4.0", path = "../rafs" }
+nydus-storage = { version = "0.7.0", path = "../storage" }
 [features]
 baekend-s3 = ["nydus-storage/backend-s3"]

View File

@@ -1 +0,0 @@
bin/

View File

@@ -1,21 +0,0 @@
# https://golangci-lint.run/usage/configuration#config-file

linters:
  enable:
    - staticcheck
    - unconvert
    - gofmt
    - goimports
    - revive
    - ineffassign
    - vet
    - unused
    - misspell
  disable:
    - errcheck

run:
  deadline: 4m
  skip-dirs:
    - misc

View File

@@ -1,27 +0,0 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= https://goproxy.io

ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
endif

.PHONY: all build release test clean

all: build

build:
	@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -v -o bin/ctr-remote ./cmd/main.go

release:
	@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/ctr-remote ./cmd/main.go

test: build
	go vet $(PACKAGES)
	golangci-lint run
	go test -v -cover ${PACKAGES}

clean:
	rm -f bin/*

View File

@@ -1,67 +0,0 @@
/*
   Copyright The containerd Authors.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
*/

package main

import (
	"fmt"
	"os"

	"github.com/containerd/containerd/cmd/ctr/app"
	"github.com/containerd/containerd/pkg/seed" //nolint:staticcheck // Global math/rand seed is deprecated, but still used by external dependencies
	"github.com/dragonflyoss/image-service/contrib/ctr-remote/commands"
	"github.com/urfave/cli"
)

func init() {
	// From https://github.com/containerd/containerd/blob/f7f2be732159a411eae46b78bfdb479b133a823b/cmd/ctr/main.go
	//nolint:staticcheck // Global math/rand seed is deprecated, but still used by external dependencies
	seed.WithTimeAndRand()
}

func main() {
	customCommands := []cli.Command{commands.RpullCommand}
	app := app.New()
	app.Description = "NOTE: Enhanced for nydus-snapshotter\n" + app.Description
	for i := range app.Commands {
		if app.Commands[i].Name == "images" {
			sc := map[string]cli.Command{}
			for _, subcmd := range customCommands {
				sc[subcmd.Name] = subcmd
			}

			// First, replace duplicated subcommands
			for j := range app.Commands[i].Subcommands {
				for name, subcmd := range sc {
					if name == app.Commands[i].Subcommands[j].Name {
						app.Commands[i].Subcommands[j] = subcmd
						delete(sc, name)
					}
				}
			}

			// Next, append all new sub commands
			for _, subcmd := range sc {
				app.Commands[i].Subcommands = append(app.Commands[i].Subcommands, subcmd)
			}
			break
		}
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintf(os.Stderr, "ctr-remote: %v\n", err)
		os.Exit(1)
	}
}

View File

@@ -1,103 +0,0 @@
/*
   Copyright The containerd Authors.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
*/

package commands

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cmd/ctr/commands"
	"github.com/containerd/containerd/cmd/ctr/commands/content"
	"github.com/containerd/containerd/images"
	"github.com/containerd/log"
	"github.com/containerd/nydus-snapshotter/pkg/label"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/urfave/cli"
)

const (
	remoteSnapshotterName = "nydus"
)

// RpullCommand is a subcommand to pull an image from a registry levaraging nydus snapshotter
var RpullCommand = cli.Command{
	Name:      "rpull",
	Usage:     "pull an image from a registry leveraging nydus-snapshotter",
	ArgsUsage: "[flags] <ref>",
	Description: `Fetch and prepare an image for use in containerd leveraging nydus-snapshotter.

After pulling an image, it should be ready to use the same reference in a run command.`,
	Flags: append(commands.RegistryFlags, commands.LabelFlag),
	Action: func(context *cli.Context) error {
		var (
			ref    = context.Args().First()
			config = &rPullConfig{}
		)
		if ref == "" {
			return fmt.Errorf("please provide an image reference to pull")
		}

		client, ctx, cancel, err := commands.NewClient(context)
		if err != nil {
			return err
		}
		defer cancel()

		ctx, done, err := client.WithLease(ctx)
		if err != nil {
			return err
		}
		defer done(ctx)

		fc, err := content.NewFetchConfig(ctx, context)
		if err != nil {
			return err
		}
		config.FetchConfig = fc

		return pull(ctx, client, ref, config)
	},
}

type rPullConfig struct {
	*content.FetchConfig
}

func pull(ctx context.Context, client *containerd.Client, ref string, config *rPullConfig) error {
	pCtx := ctx
	h := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
		if desc.MediaType != images.MediaTypeDockerSchema1Manifest {
			fmt.Printf("fetching %v... %v\n", desc.Digest.String()[:15], desc.MediaType)
		}
		return nil, nil
	})

	log.G(pCtx).WithField("image", ref).Debug("fetching")
	configLabels := commands.LabelArgs(config.Labels)
	if _, err := client.Pull(pCtx, ref, []containerd.RemoteOpt{
		containerd.WithPullLabels(configLabels),
		containerd.WithResolver(config.Resolver),
		containerd.WithImageHandler(h),
		containerd.WithPullUnpack,
		containerd.WithPullSnapshotter(remoteSnapshotterName),
		containerd.WithImageHandlerWrapper(label.AppendLabelsHandlerWrapper(ref)),
	}...); err != nil {
		return err
	}

	return nil
}

View File

@@ -1,76 +0,0 @@
module github.com/dragonflyoss/image-service/contrib/ctr-remote

go 1.20

require (
	github.com/containerd/containerd v1.7.6
	github.com/containerd/log v0.1.0
	github.com/containerd/nydus-snapshotter v0.13.2
	github.com/opencontainers/image-spec v1.1.0-rc4
	github.com/urfave/cli v1.22.12
)

require (
	github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
	github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 // indirect
	github.com/Microsoft/go-winio v0.6.1 // indirect
	github.com/Microsoft/hcsshim v0.11.0 // indirect
	github.com/cilium/ebpf v0.10.0 // indirect
	github.com/containerd/cgroups v1.1.0 // indirect
	github.com/containerd/cgroups/v3 v3.0.2 // indirect
	github.com/containerd/console v1.0.3 // indirect
	github.com/containerd/continuity v0.4.2 // indirect
	github.com/containerd/fifo v1.1.0 // indirect
	github.com/containerd/go-cni v1.1.9 // indirect
	github.com/containerd/go-runc v1.0.0 // indirect
	github.com/containerd/ttrpc v1.2.2 // indirect
	github.com/containerd/typeurl/v2 v2.1.1 // indirect
	github.com/containernetworking/cni v1.1.2 // indirect
	github.com/containernetworking/plugins v1.2.0 // indirect
	github.com/coreos/go-systemd/v22 v22.5.0 // indirect
	github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
	github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
	github.com/docker/go-units v0.5.0 // indirect
	github.com/go-logr/logr v1.2.4 // indirect
	github.com/go-logr/stdr v1.2.2 // indirect
	github.com/godbus/dbus/v5 v5.1.0 // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
	github.com/golang/protobuf v1.5.3 // indirect
	github.com/google/go-cmp v0.5.9 // indirect
	github.com/google/uuid v1.3.1 // indirect
	github.com/hashicorp/errwrap v1.1.0 // indirect
	github.com/hashicorp/go-multierror v1.1.1 // indirect
	github.com/intel/goresctrl v0.3.0 // indirect
	github.com/klauspost/compress v1.16.3 // indirect
	github.com/moby/locker v1.0.1 // indirect
	github.com/moby/sys/mountinfo v0.6.2 // indirect
	github.com/moby/sys/sequential v0.5.0 // indirect
	github.com/moby/sys/signal v0.7.0 // indirect
	github.com/moby/sys/symlink v0.2.0 // indirect
	github.com/opencontainers/go-digest v1.0.0 // indirect
	github.com/opencontainers/runc v1.1.5 // indirect
	github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
	github.com/opencontainers/selinux v1.11.0 // indirect
	github.com/pelletier/go-toml v1.9.5 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/russross/blackfriday/v2 v2.1.0 // indirect
	github.com/sirupsen/logrus v1.9.3 // indirect
	go.opencensus.io v0.24.0 // indirect
	go.opentelemetry.io/otel v1.14.0 // indirect
	go.opentelemetry.io/otel/trace v1.14.0 // indirect
	golang.org/x/mod v0.9.0 // indirect
	golang.org/x/net v0.17.0 // indirect
	golang.org/x/sync v0.3.0 // indirect
	golang.org/x/sys v0.13.0 // indirect
	golang.org/x/text v0.13.0 // indirect
	golang.org/x/tools v0.7.0 // indirect
	google.golang.org/genproto v0.0.0-20231012201019-e917dd12ba7a // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b // indirect
	google.golang.org/grpc v1.59.0 // indirect
	google.golang.org/protobuf v1.31.0 // indirect
	gopkg.in/inf.v0 v0.9.1 // indirect
	gopkg.in/yaml.v2 v2.4.0 // indirect
	k8s.io/apimachinery v0.26.2 // indirect
	sigs.k8s.io/yaml v1.3.0 // indirect
)

View File

@@ -1,358 +0,0 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 h1:59MxjQVfjXsBpLy+dbd2/ELV5ofnUkUZBvWSC85sheA=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0/go.mod h1:OahwfttHWG6eJ0clwcfBAHoDI6X/LV/15hx/wlMZSrU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
github.com/Microsoft/hcsshim v0.11.0 h1:7EFNIY4igHEXUdj1zXgAyU3fLc7QfOKHbkldRVTBdiM=
github.com/Microsoft/hcsshim v0.11.0/go.mod h1:OEthFdQv/AD2RAdzR6Mm1N1KPCztGKDurW1Z8b8VGMM=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
github.com/cilium/ebpf v0.10.0 h1:nk5HPMeoBXtOzbkZBWym+ZWq1GIiHUsBFXxwewXAHLQ=
github.com/cilium/ebpf v0.10.0/go.mod h1:DPiVdY/kT534dgc9ERmvP8mWA+9gvwgKfRvk4nNWnoE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM=
github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
github.com/containerd/cgroups/v3 v3.0.2 h1:f5WFqIVSgo5IZmtTT3qVBo6TzI1ON6sycSBKkymb9L0=
github.com/containerd/cgroups/v3 v3.0.2/go.mod h1:JUgITrzdFqp42uI2ryGA+ge0ap/nxzYgkGmIcetmErE=
github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
github.com/containerd/console v1.0.3 h1:lIr7SlA5PxZyMV30bDW0MGbiOPXwc63yRuCP0ARubLw=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/containerd v1.7.6 h1:oNAVsnhPoy4BTPQivLgTzI9Oleml9l/+eYIDYXRCYo8=
github.com/containerd/containerd v1.7.6/go.mod h1:SY6lrkkuJT40BVNO37tlYTSnKJnP5AXBc0fhx0q+TJ4=
github.com/containerd/continuity v0.4.2 h1:v3y/4Yz5jwnvqPKJJ+7Wf93fyWoCB3F5EclWG023MDM=
github.com/containerd/continuity v0.4.2/go.mod h1:F6PTNCKepoxEaXLQp3wDAjygEnImnZ/7o4JzpodfroQ=
github.com/containerd/fifo v1.1.0 h1:4I2mbh5stb1u6ycIABlBw9zgtlK8viPI9QkQNRQEEmY=
github.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=
github.com/containerd/go-cni v1.1.9 h1:ORi7P1dYzCwVM6XPN4n3CbkuOx/NZ2DOqy+SHRdo9rU=
github.com/containerd/go-cni v1.1.9/go.mod h1:XYrZJ1d5W6E2VOvjffL3IZq0Dz6bsVlERHbekNK90PM=
github.com/containerd/go-runc v1.0.0 h1:oU+lLv1ULm5taqgV/CJivypVODI4SUz1znWjv3nNYS0=
github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/containerd/nydus-snapshotter v0.13.2 h1:Wjc59CetOv9TxtQIvSBY1wrjHxwD48gHR/TxyEeRTmI=
github.com/containerd/nydus-snapshotter v0.13.2/go.mod h1:XWAz9ytsjBuKPVXDKP3xoMlcSKNsGnjXlEup6DuzUIo=
github.com/containerd/ttrpc v1.2.2 h1:9vqZr0pxwOF5koz6N0N3kJ0zDHokrcPxIR/ZR2YFtOs=
github.com/containerd/ttrpc v1.2.2/go.mod h1:sIT6l32Ph/H9cvnJsfXM5drIVzTr5A2flTf1G5tYZak=
github.com/containerd/typeurl/v2 v2.1.1 h1:3Q4Pt7i8nYwy2KmQWIw2+1hTvwTE/6w9FqcttATPO/4=
github.com/containerd/typeurl/v2 v2.1.1/go.mod h1:IDp2JFvbwZ31H8dQbEIY7sDl2L3o3HZj1hsSQlywkQ0=
github.com/containernetworking/cni v1.1.2 h1:wtRGZVv7olUHMOqouPpn3cXJWpJgM6+EUl31EQbXALQ=
github.com/containernetworking/cni v1.1.2/go.mod h1:sDpYKmGVENF3s6uvMvGgldDWeG8dMxakj/u+i9ht9vw=
github.com/containernetworking/plugins v1.2.0 h1:SWgg3dQG1yzUo4d9iD8cwSVh1VqI+bP7mkPDoSfP9VU=
github.com/containernetworking/plugins v1.2.0/go.mod h1:/VjX4uHecW5vVimFa1wkG4s+r/s9qIfPdqlLF4TW8c4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/frankban/quicktest v1.14.4 h1:g2rn0vABPOOXmZUj+vbmUp0lPoXEMuhTpIluN0XL9UY=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.1 h1:KjJaJ9iWZ3jOFZIf1Lqf4laDRCasjl0BCmnEGxkdLb4=
github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/intel/goresctrl v0.3.0 h1:K2D3GOzihV7xSBedGxONSlaw/un1LZgWsc9IfqipN4c=
github.com/intel/goresctrl v0.3.0/go.mod h1:fdz3mD85cmP9sHD8JUlrNWAxvwM86CrbmVXltEKd7zk=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.16.3 h1:XuJt9zzcnaz6a16/OU53ZjWp/v7/42WcR5t2a0PcNQY=
github.com/klauspost/compress v1.16.3/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/mountinfo v0.6.2 h1:BzJjoreD5BMFNmD9Rus6gdd1pLuecOFPt8wC+Vygl78=
github.com/moby/sys/mountinfo v0.6.2/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
github.com/moby/sys/sequential v0.5.0 h1:OPvI35Lzn9K04PBbCLW0g4LcFAJgHsvXsRyewg5lXtc=
github.com/moby/sys/sequential v0.5.0/go.mod h1:tH2cOOs5V9MlPiXcQzRC+eEyab644PWKGRYaaV5ZZlo=
github.com/moby/sys/signal v0.7.0 h1:25RW3d5TnQEoKvRbEKUGay6DCQ46IxAVTT9CUMgmsSI=
github.com/moby/sys/signal v0.7.0/go.mod h1:GQ6ObYZfqacOwTtlXvcmh9A26dVRul/hbOZn88Kg8Tg=
github.com/moby/sys/symlink v0.2.0 h1:tk1rOM+Ljp0nFmfOIBtlV3rTDlWOwFRhjEeAhZB0nZc=
github.com/moby/sys/symlink v0.2.0/go.mod h1:7uZVF2dqJjG/NsClqul95CqKOBRQyYSNnJ6BMgR/gFs=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/ginkgo/v2 v2.4.0 h1:+Ig9nvqgS5OBSACXNk15PLdp0U9XPYROt9CFzVdFGIs=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.24.2 h1:J/tulyYK6JwBldPViHJReihxxZ+22FHs0piGjQAvoUE=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.0-rc4 h1:oOxKUJWnFC4YGHCCMNql1x4YaDfYBTS5Y4x/Cgeo1E0=
github.com/opencontainers/image-spec v1.1.0-rc4/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8=
github.com/opencontainers/runc v1.1.5 h1:L44KXEpKmfWDcS02aeGm8QNTFXTo2D+8MYGDIJ/GDEs=
github.com/opencontainers/runc v1.1.5/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg=
github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.1.0-rc.1 h1:wHa9jroFfKGQqFHj0I1fMRKLl0pfj+ynAqBxo3v6u9w=
github.com/opencontainers/runtime-spec v1.1.0-rc.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU=
github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.3 h1:RP3t2pwF7cMEbC1dqtB6poj3niw/9gnV4Cjg5oW5gtY=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.12 h1:igJgVw1JdKH+trcLWLeLwZjU9fEfPesQ+9/e4MQ44S8=
github.com/urfave/cli v1.22.12/go.mod h1:sSBEIC79qR6OvcmsD4U3KABeOTxDqQtdDnaFuUN30b8=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/otel v1.14.0 h1:/79Huy8wbf5DnIPhemGB+zEPVwnN6fuQybr/SRXa6hM=
go.opentelemetry.io/otel v1.14.0/go.mod h1:o4buv+dJzx8rohcUeRmWUZhqupFvzWis188WlggnNeU=
go.opentelemetry.io/otel/trace v1.14.0 h1:wp2Mmvj41tDsyAJXiWDWpfNsOiIyd38fy85pyKcFq/M=
go.opentelemetry.io/otel/trace v1.14.0/go.mod h1:8avnQLK+CG77yNLUae4ea2JDQ6iT+gozhnZjy/rw9G8=
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.9.0 h1:KENHtAZL2y3NLMYZeHY9DW8HW8V+kQyJsY/V9JlKvCs=
golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.7.0 h1:W4OVu8VVOaIO0yzWMNdepAulS7YfoS3Zabrm8DOXXU4=
golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20231012201019-e917dd12ba7a h1:fwgW9j3vHirt4ObdHoYNwuO24BEZjSzbh+zPaNWoiY8=
google.golang.org/genproto v0.0.0-20231012201019-e917dd12ba7a/go.mod h1:EMfReVxb80Dq1hhioy0sOsY9jCE46YDgHlJ7fWVUWRE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b h1:ZlWIi1wSK56/8hn4QcBp/j9M7Gt3U/3hZw3mC7vDICo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b/go.mod h1:swOH3j0KzcDDgGUWr+SNpyTen5YrXjS3eyPzFYKc6lc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.59.0 h1:Z5Iec2pjwb+LEOqzpB2MR12/eKFhDPhuqW91O+4bwUk=
google.golang.org/grpc v1.59.0/go.mod h1:aUPDwccQo6OTjy7Hct4AfBPD1GptF4fyUjIkQ9YtF98=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/apimachinery v0.26.2 h1:da1u3D5wfR5u2RpLhE/ZtZS2P7QvDgLZTi9wrNZl/tQ=
k8s.io/apimachinery v0.26.2/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=

View File

@ -0,0 +1,8 @@
package main

import "fmt"

// This is a dummy program to work around goreleaser being unable to pre-build the binary.
func main() {
    fmt.Println("Hello, World!")
}

File diff suppressed because it is too large

View File

@ -1,19 +1,19 @@
[package]
name = "nydus-backend-proxy"
-version = "0.1.0"
+version = "0.2.0"
authors = ["The Nydus Developers"]
description = "A simple HTTP server to provide a fake container registry for nydusd"
homepage = "https://nydus.dev/"
-repository = "https://github.com/dragonflyoss/image-service"
+repository = "https://github.com/dragonflyoss/nydus"
-edition = "2018"
+edition = "2021"
license = "Apache-2.0"

[dependencies]
-rocket = "0.5.0-rc"
+rocket = "0.5.0"
-http-range = "0.1.3"
+http-range = "0.1.5"
-nix = ">=0.23.0"
+nix = { version = "0.28", features = ["uio"] }
-clap = "2.33"
+clap = "4.4"
-once_cell = "1.10.0"
+once_cell = "1.19.0"
lazy_static = "1.4"

[workspace]

View File

@ -2,29 +2,22 @@
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)

-#[macro_use]
-extern crate rocket;
-#[macro_use]
-extern crate lazy_static;
-#[macro_use(crate_authors, crate_version)]
-extern crate clap;

use std::collections::HashMap;
use std::env;
-use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::{fs, io};

-use clap::{App, Arg};
+use clap::*;
use http_range::HttpRange;
+use lazy_static::lazy_static;
use nix::sys::uio;
use rocket::fs::{FileServer, NamedFile};
use rocket::futures::lock::{Mutex, MutexGuard};
use rocket::http::Status;
use rocket::request::{self, FromRequest, Outcome};
use rocket::response::{self, stream::ReaderStream, Responder};
-use rocket::{Request, Response};
+use rocket::*;

lazy_static! {
    static ref BLOB_BACKEND: Mutex<BlobBackend> = Mutex::new(BlobBackend {

@ -165,12 +158,12 @@ impl<'r> Responder<'r, 'static> for RangeStream {
    let mut read = 0u64;
    let startpos = self.start as i64;
    let size = self.len;
-    let raw_fd = self.file.as_raw_fd();
+    let file = self.file.clone();

    Response::build()
        .streamed_body(ReaderStream! {
            while read < size {
-                match uio::pread(raw_fd, &mut buf, startpos + read as i64) {
+                match uio::pread(file.as_ref(), &mut buf, startpos + read as i64) {
                    Ok(mut n) => {
                        n = std::cmp::min(n, (size - read) as usize);
                        read += n as u64;

@ -268,20 +261,31 @@ async fn fetch(
#[rocket::main]
async fn main() {
-    let cmd = App::new("nydus-backend-proxy")
-        .author(crate_authors!())
-        .version(crate_version!())
+    let cmd = Command::new("nydus-backend-proxy")
+        .author(env!("CARGO_PKG_AUTHORS"))
+        .version(env!("CARGO_PKG_VERSION"))
        .about("A simple HTTP server to provide a fake container registry for nydusd.")
        .arg(
-            Arg::with_name("blobsdir")
-                .short("b")
+            Arg::new("blobsdir")
+                .short('b')
                .long("blobsdir")
-                .takes_value(true)
+                .required(true)
                .help("path to directory hosting nydus blob files"),
        )
+        .help_template(
+            "\
+{before-help}{name} {version}
+{author-with-newline}{about-with-newline}
+{usage-heading} {usage}
+{all-args}{after-help}
+",
+        )
        .get_matches();

    // Safe to unwrap() because `blobsdir` takes a value.
-    let path = cmd.value_of("blobsdir").unwrap();
+    let path = cmd
+        .get_one::<String>("blobsdir")
+        .expect("required argument");
    init_blob_backend(Path::new(path)).await;

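A note on the pread change above: recent releases of the nix crate switched I/O helpers such as nix::sys::uio::pread from raw file descriptors to borrowed ones (any type implementing AsFd), which is presumably why the code now clones the file handle and passes file.as_ref() instead of calling as_raw_fd(), and why the now-unused std::os::unix::io::AsRawFd import is dropped.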
View File

@ -8,14 +8,14 @@ linters:
    - goimports
    - revive
    - ineffassign
-    - vet
+    - govet
    - unused
    - misspell
  disable:
    - errcheck

run:
-  deadline: 4m
+  timeout: 5m
-  skip-dirs:
-    - misc
+issues:
+  exclude-dirs:
+    - misc

View File

@ -2,7 +2,7 @@ GIT_COMMIT := $(shell git rev-parse --verify HEAD --short=7)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH)
-GOPROXY ?= https://goproxy.io
+GOPROXY ?=

ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}

@ -13,15 +13,17 @@ endif
all: build

build:
	@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go

release:
	@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus-overlayfs ./cmd/main.go

test: build
	go vet $(PACKAGES)
-	golangci-lint run
	go test -v -cover ${PACKAGES}

+lint:
+	golangci-lint run

clean:
	rm -f bin/*

View File

@ -8,7 +8,7 @@ import (
    "syscall"

    "github.com/pkg/errors"
-    "github.com/urfave/cli/v2"
+    cli "github.com/urfave/cli/v2"
    "golang.org/x/sys/unix"
)

View File

@ -1,15 +1,15 @@
-module github.com/dragonflyoss/image-service/contrib/nydus-overlayfs
+module github.com/dragonflyoss/nydus/contrib/nydus-overlayfs

-go 1.20
+go 1.21

require (
    github.com/pkg/errors v0.9.1
-    github.com/urfave/cli/v2 v2.25.7
+    github.com/urfave/cli/v2 v2.27.1
-    golang.org/x/sys v0.13.0
+    golang.org/x/sys v0.15.0
)

require (
-    github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
+    github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
    github.com/russross/blackfriday/v2 v2.1.0 // indirect
-    github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
+    github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e // indirect
)

View File

@ -1,12 +1,10 @@
-github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
-github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
+github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/urfave/cli/v2 v2.25.7 h1:VAzn5oq403l5pHjc4OhD54+XGO9cdKVL/7lDjF+iKUs=
-github.com/urfave/cli/v2 v2.25.7/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
+github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=
+github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
-github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 h1:bAn7/zixMGCfxrRTfdpNzjtPYqr8smhKouy9mxVdGPU=
-github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673/go.mod h1:N3UwUGtsrSj3ccvlPHLoLsHnpR27oXr4ZE984MbSER8=
-golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
-golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e h1:+SOyEddqYF09QP7vr7CgJ1eti3pY9Fn3LHO1M1r/0sI=
+golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
+golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=

View File

@ -2,5 +2,5 @@
tmp
cmd/nydusify
output
-nydusify-smoke
nydus-hook-plugin
+coverage.txt

View File

@ -8,14 +8,14 @@ linters:
    - goimports
    - revive
    - ineffassign
-    - vet
+    - govet
    - unused
    - misspell
  disable:
    - errcheck

run:
-  deadline: 4m
+  timeout: 5m
-  skip-dirs:
-    - misc
+issues:
+  exclude-dirs:
+    - misc

View File

@ -1,6 +1,6 @@
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH)
-GOPROXY ?= https://goproxy.io
+GOPROXY ?=

ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}

@ -26,17 +26,16 @@ release:
plugin:
	@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -o nydus-hook-plugin ./plugin

-test: build build-smoke
+test:
	@go vet $(PACKAGES)
-	golangci-lint run
-	@go test -covermode=atomic -coverprofile=coverage.out -count=1 -v -timeout 20m -race ./pkg/... ./cmd/...
+	@go test -covermode=atomic -coverprofile=coverage.txt -count=1 -v -timeout 20m -parallel 16 -race ${PACKAGES}

+lint:
+	golangci-lint run

coverage: test
-	@go tool cover -func=coverage.out
+	@go tool cover -func=coverage.txt

clean:
	rm -f cmd/nydusify
-	rm -f coverage.out
+	rm -f coverage.txt

-build-smoke:
-	${PROXY} go test -race -v -c -o ./nydusify-smoke ./tests

View File

@ -16,6 +16,8 @@ import (
    "runtime"
    "strings"

+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/optimizer"
+
    "github.com/containerd/containerd/reference/docker"
    "github.com/distribution/reference"
    "github.com/dustin/go-humanize"
@ -23,15 +25,15 @@ import (
    "github.com/sirupsen/logrus"
    "github.com/urfave/cli/v2"

-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker/rule"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/chunkdict/generator"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/converter"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/copier"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/packer"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/provider"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/viewer"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/chunkdict/generator"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/committer"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/copier"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/packer"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/viewer"
)

var (
@ -79,7 +81,7 @@ func getBackendConfig(c *cli.Context, prefix string, required bool) (string, str
        return "", "", nil
    }

-    possibleBackendTypes := []string{"oss", "s3"}
+    possibleBackendTypes := []string{"oss", "s3", "localfs"}
    if !isPossibleValue(possibleBackendTypes, backendType) {
        return "", "", fmt.Errorf("--%sbackend-type should be one of %v", prefix, possibleBackendTypes)
    }
@ -89,7 +91,7 @@ func getBackendConfig(c *cli.Context, prefix string, required bool) (string, str
    )
    if err != nil {
        return "", "", err
-    } else if (backendType == "oss" || backendType == "s3") && strings.TrimSpace(backendConfig) == "" {
+    } else if (backendType == "oss" || backendType == "s3" || backendType == "localfs") && strings.TrimSpace(backendConfig) == "" {
        return "", "", errors.Errorf("backend configuration is empty, please specify option '--%sbackend-config'", prefix)
    }
@ -191,22 +193,7 @@ func main() {
}

// global options
-app.Flags = []cli.Flag{
-    &cli.BoolFlag{
-        Name: "debug",
-        Aliases: []string{"D"},
-        Required: false,
-        Value: false,
-        Usage: "Enable debug log level, overwrites the 'log-level' option",
-        EnvVars: []string{"DEBUG_LOG_LEVEL"}},
-    &cli.StringFlag{
-        Name: "log-level",
-        Aliases: []string{"l"},
-        Value: "info",
-        Usage: "Set log level (panic, fatal, error, warn, info, debug, trace)",
-        EnvVars: []string{"LOG_LEVEL"},
-    },
-}
+app.Flags = getGlobalFlags()

app.Commands = []*cli.Command{
    {
@ -225,6 +212,18 @@ func main() {
    Usage: "Target (Nydus) image reference",
    EnvVars: []string{"TARGET"},
},
+&cli.StringFlag{
+    Name: "source-backend-type",
+    Value: "",
+    Usage: "Type of storage backend, possible values: 'oss', 's3'",
+    EnvVars: []string{"BACKEND_TYPE"},
+},
+&cli.StringFlag{
+    Name: "source-backend-config",
+    Value: "",
+    Usage: "Json configuration string for storage backend",
+    EnvVars: []string{"BACKEND_CONFIG"},
+},
&cli.StringFlag{
    Name: "target-suffix",
    Required: false,
@ -399,7 +398,7 @@
&cli.StringFlag{
    Name: "fs-chunk-size",
    Value: "0x100000",
-    Usage: "size of nydus image data chunk, must be power of two and between 0x1000-0x100000, [default: 0x100000]",
+    Usage: "size of nydus image data chunk, must be power of two and between 0x1000-0x10000000, [default: 0x4000000]",
    EnvVars: []string{"FS_CHUNK_SIZE"},
    Aliases: []string{"chunk-size"},
},
@ -427,6 +426,24 @@ func main() {
    Usage: "File path to save the metrics collected during conversion in JSON format, for example: './output.json'",
    EnvVars: []string{"OUTPUT_JSON"},
},
+&cli.BoolFlag{
+    Name: "plain-http",
+    Value: false,
+    Usage: "Enable plain http for Nydus image push",
+    EnvVars: []string{"PLAIN_HTTP"},
+},
+&cli.IntFlag{
+    Name: "push-retry-count",
+    Value: 3,
+    Usage: "Number of retries when pushing to registry fails",
+    EnvVars: []string{"PUSH_RETRY_COUNT"},
+},
+&cli.StringFlag{
+    Name: "push-retry-delay",
+    Value: "5s",
+    Usage: "Delay between push retries (e.g. 5s, 1m, 1h)",
+    EnvVars: []string{"PUSH_RETRY_DELAY"},
+},
},
Action: func(c *cli.Context) error {
    setupLogLevel(c)
@ -492,10 +509,12 @@ func main() {
    WorkDir: c.String("work-dir"),
    NydusImagePath: c.String("nydus-image"),
-    Source: c.String("source"),
-    Target: targetRef,
-    SourceInsecure: c.Bool("source-insecure"),
-    TargetInsecure: c.Bool("target-insecure"),
+    SourceBackendType: c.String("source-backend-type"),
+    SourceBackendConfig: c.String("source-backend-config"),
+    Source: c.String("source"),
+    Target: targetRef,
+    SourceInsecure: c.Bool("source-insecure"),
+    TargetInsecure: c.Bool("target-insecure"),
    BackendType: backendType,
    BackendConfig: backendConfig,
@ -523,7 +542,10 @@ func main() {
    AllPlatforms: c.Bool("all-platforms"),
    Platforms: c.String("platform"),
    OutputJSON: c.String("output-json"),
+    WithPlainHTTP: c.Bool("plain-http"),
+    PushRetryCount: c.Int("push-retry-count"),
+    PushRetryDelay: c.String("push-retry-delay"),
}

return converter.Convert(context.Background(), opt)
@ -559,19 +581,39 @@ func main() {
},
&cli.StringFlag{
-    Name: "backend-type",
+    Name: "source-backend-type",
    Value: "",
-    Usage: "Type of storage backend, enable verification of file data in Nydus image if specified, possible values: 'oss', 's3'",
+    Usage: "Type of storage backend, possible values: 'oss', 's3'",
    EnvVars: []string{"BACKEND_TYPE"},
},
&cli.StringFlag{
-    Name: "backend-config",
+    Name: "source-backend-config",
    Value: "",
-    Usage: "Json string for storage backend configuration",
+    Usage: "Json configuration string for storage backend",
    EnvVars: []string{"BACKEND_CONFIG"},
},
&cli.PathFlag{
-    Name: "backend-config-file",
+    Name: "source-backend-config-file",
+    Value: "",
+    TakesFile: true,
+    Usage: "Json configuration file for storage backend",
+    EnvVars: []string{"BACKEND_CONFIG_FILE"},
+},
+&cli.StringFlag{
+    Name: "target-backend-type",
+    Value: "",
+    Usage: "Type of storage backend, possible values: 'oss', 's3'",
+    EnvVars: []string{"BACKEND_TYPE"},
+},
+&cli.StringFlag{
+    Name: "target-backend-config",
+    Value: "",
+    Usage: "Json configuration string for storage backend",
+    EnvVars: []string{"BACKEND_CONFIG"},
+},
+&cli.PathFlag{
+    Name: "target-backend-config-file",
    Value: "",
    TakesFile: true,
    Usage: "Json configuration file for storage backend",
@ -612,7 +654,12 @@ func main() {
Action: func(c *cli.Context) error {
    setupLogLevel(c)

-    backendType, backendConfig, err := getBackendConfig(c, "", false)
+    sourceBackendType, sourceBackendConfig, err := getBackendConfig(c, "source-", false)
+    if err != nil {
+        return err
+    }
+    targetBackendType, targetBackendConfig, err := getBackendConfig(c, "target-", false)
    if err != nil {
        return err
    }
@ -623,16 +670,20 @@ func main() {
}

checker, err := checker.New(checker.Opt{
    WorkDir: c.String("work-dir"),
-    Source: c.String("source"),
-    Target: c.String("target"),
+    Source: c.String("source"),
+    Target: c.String("target"),
+    SourceInsecure: c.Bool("source-insecure"),
+    TargetInsecure: c.Bool("target-insecure"),
+    SourceBackendType: sourceBackendType,
+    SourceBackendConfig: sourceBackendConfig,
+    TargetBackendType: targetBackendType,
+    TargetBackendConfig: targetBackendConfig,
    MultiPlatform: c.Bool("multi-platform"),
-    SourceInsecure: c.Bool("source-insecure"),
-    TargetInsecure: c.Bool("target-insecure"),
    NydusImagePath: c.String("nydus-image"),
    NydusdPath: c.String("nydusd"),
-    BackendType: backendType,
-    BackendConfig: backendConfig,
    ExpectedArch: arch,
})
if err != nil {
@ -656,12 +707,45 @@ func main() {
    Usage: "One or more Nydus image reference(Multiple images should be split by commas)",
    EnvVars: []string{"SOURCES"},
},
+&cli.StringFlag{
+    Name: "target",
+    Required: false,
+    Usage: "Target chunkdict image (Nydus) reference",
+    EnvVars: []string{"TARGET"},
+},
&cli.BoolFlag{
    Name: "source-insecure",
    Required: false,
    Usage: "Skip verifying server certs for HTTPS source registry",
    EnvVars: []string{"SOURCE_INSECURE"},
},
+&cli.BoolFlag{
+    Name: "target-insecure",
+    Required: false,
+    Usage: "Skip verifying server certs for HTTPS target registry",
+    EnvVars: []string{"TARGET_INSECURE"},
+},
+&cli.StringFlag{
+    Name: "backend-type",
+    Value: "",
+    Usage: "Type of storage backend, possible values: 'oss', 's3'",
+    EnvVars: []string{"BACKEND_TYPE"},
+},
+&cli.StringFlag{
+    Name: "backend-config",
+    Value: "",
+    Usage: "Json configuration string for storage backend",
+    EnvVars: []string{"BACKEND_CONFIG"},
+},
+&cli.PathFlag{
+    Name: "backend-config-file",
+    Value: "",
+    TakesFile: true,
+    Usage: "Json configuration file for storage backend",
+    EnvVars: []string{"BACKEND_CONFIG_FILE"},
+},
&cli.StringFlag{
    Name: "work-dir",
    Value: "./output",
@ -674,6 +758,12 @@ func main() {
    Usage: "Path to the nydus-image binary, default to search in PATH",
    EnvVars: []string{"NYDUS_IMAGE"},
},
+&cli.BoolFlag{
+    Name: "all-platforms",
+    Value: false,
+    Usage: "Generate chunkdict image for all platforms, conflicts with --platform",
+},
&cli.StringFlag{
    Name: "platform",
    Value: "linux/" + runtime.GOARCH,
@ -683,17 +773,31 @@ func main() {
Action: func(c *cli.Context) error {
    setupLogLevel(c)

+    backendType, backendConfig, err := getBackendConfig(c, "", false)
+    if err != nil {
+        return err
+    }
    _, arch, err := provider.ExtractOsArch(c.String("platform"))
    if err != nil {
        return err
    }

    generator, err := generator.New(generator.Opt{
-        WorkDir: c.String("work-dir"),
        Sources: c.StringSlice("sources"),
+        Target: c.String("target"),
        SourceInsecure: c.Bool("source-insecure"),
+        TargetInsecure: c.Bool("target-insecure"),
+        BackendType: backendType,
+        BackendConfig: backendConfig,
+        BackendForcePush: c.Bool("backend-force-push"),
+        WorkDir: c.String("work-dir"),
        NydusImagePath: c.String("nydus-image"),
        ExpectedArch: arch,
+        AllPlatforms: c.Bool("all-platforms"),
+        Platforms: c.String("platform"),
    })
    if err != nil {
        return err
@ -742,7 +846,12 @@ func main() {
    Usage: "Json configuration file for storage backend",
    EnvVars: []string{"BACKEND_CONFIG_FILE"},
},
+&cli.BoolFlag{
+    Name: "prefetch",
+    Value: false,
+    Usage: "Enable full image data prefetch",
+    EnvVars: []string{"PREFETCH"},
+},
&cli.StringFlag{
    Name: "mount-path",
    Value: "./image-fs",
@ -782,13 +891,11 @@ func main() {
    return err
}

-backendConfigStruct, err := rule.NewRegistryBackendConfig(parsed)
+backendConfigStruct, err := utils.NewRegistryBackendConfig(parsed, c.Bool("target-insecure"))
if err != nil {
    return errors.Wrap(err, "parse registry backend configuration")
}

-backendConfigStruct.SkipVerify = c.Bool("target-insecure")
bytes, err := json.Marshal(backendConfigStruct)
if err != nil {
    return errors.Wrap(err, "marshal registry backend configuration")
@ -811,6 +918,7 @@ func main() {
    BackendType: backendType,
    BackendConfig: backendConfig,
    ExpectedArch: arch,
+    Prefetch: c.Bool("prefetch"),
})
if err != nil {
    return err
@ -1103,6 +1211,207 @@ func main() {
        return copier.Copy(context.Background(), opt)
    },
},
{
Name: "optimize",
Usage: "Optimize a source nydus image and push to the target",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "source",
Required: true,
Usage: "Source (Nydus) image reference",
EnvVars: []string{"SOURCE"},
},
&cli.StringFlag{
Name: "target",
Required: true,
Usage: "Target (Nydus) image reference",
EnvVars: []string{"TARGET"},
},
&cli.BoolFlag{
Name: "source-insecure",
Required: false,
Usage: "Skip verifying server certs for HTTPS source registry",
EnvVars: []string{"SOURCE_INSECURE"},
},
&cli.BoolFlag{
Name: "target-insecure",
Required: false,
Usage: "Skip verifying server certs for HTTPS target registry",
EnvVars: []string{"TARGET_INSECURE"},
},
&cli.StringFlag{
Name: "policy",
Value: "separated-blob-with-prefetch-files",
Usage: "Specify the optimizing way",
EnvVars: []string{"OPTIMIZE_POLICY"},
},
&cli.StringFlag{
Name: "prefetch-files",
Required: false,
Usage: "File path to include prefetch files for optimization",
EnvVars: []string{"PREFETCH_FILES"},
},
&cli.StringFlag{
Name: "work-dir",
Value: "./tmp",
Usage: "Working directory for image optimization",
EnvVars: []string{"WORK_DIR"},
},
&cli.StringFlag{
Name: "nydus-image",
Value: "nydus-image",
Usage: "Path to the nydus-image binary, default to search in PATH",
EnvVars: []string{"NYDUS_IMAGE"},
},
&cli.StringFlag{
Name: "push-chunk-size",
Value: "0MB",
Usage: "Chunk size for pushing a blob layer in chunked",
},
},
Action: func(c *cli.Context) error {
setupLogLevel(c)
pushChunkSize, err := humanize.ParseBytes(c.String("push-chunk-size"))
if err != nil {
return errors.Wrap(err, "invalid --push-chunk-size option")
}
if pushChunkSize > 0 {
logrus.Infof("will push layer with chunk size %s", c.String("push-chunk-size"))
}
opt := optimizer.Opt{
WorkDir: c.String("work-dir"),
NydusImagePath: c.String("nydus-image"),
Source: c.String("source"),
Target: c.String("target"),
SourceInsecure: c.Bool("source-insecure"),
TargetInsecure: c.Bool("target-insecure"),
AllPlatforms: c.Bool("all-platforms"),
Platforms: c.String("platform"),
PushChunkSize: int64(pushChunkSize),
PrefetchFilesPath: c.String("prefetch-files"),
}
return optimizer.Optimize(context.Background(), opt)
},
},
{
Name: "commit",
Usage: "Create and push a new nydus image from a container's changes that use a nydus image",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "work-dir",
Value: "./tmp",
Usage: "Working directory for commit workflow",
EnvVars: []string{"WORK_DIR"},
},
&cli.StringFlag{
Name: "nydus-image",
Value: "nydus-image",
Usage: "Path to the nydus-image binary, default to search in PATH",
EnvVars: []string{"NYDUS_IMAGE"},
},
&cli.StringFlag{
Name: "containerd-address",
Value: "/run/containerd/containerd.sock",
Usage: "Containerd address, optionally with \"unix://\" prefix [$CONTAINERD_ADDRESS] (default \"/run/containerd/containerd.sock\")",
EnvVars: []string{"CONTAINERD_ADDR"},
},
&cli.StringFlag{
Name: "namespace",
Aliases: []string{"n"},
Value: "default",
Usage: "Container namespace, default with \"default\" namespace",
EnvVars: []string{"NAMESPACE"},
},
&cli.StringFlag{
Name: "container",
Required: true,
Usage: "Target container ID (supports short ID, full ID)",
EnvVars: []string{"CONTAINER"},
},
&cli.StringFlag{
Name: "target",
Required: true,
Usage: "Target nydus image reference",
EnvVars: []string{"TARGET"},
},
&cli.BoolFlag{
Name: "source-insecure",
Required: false,
Usage: "Skip verifying server certs for HTTPS source registry",
EnvVars: []string{"SOURCE_INSECURE"},
},
&cli.BoolFlag{
Name: "target-insecure",
Required: false,
Usage: "Skip verifying server certs for HTTPS target registry",
EnvVars: []string{"TARGET_INSECURE"},
},
&cli.IntFlag{
Name: "maximum-times",
Required: false,
DefaultText: "400",
Value: 400,
Usage: "The maximum times allowed to be committed",
EnvVars: []string{"MAXIMUM_TIMES"},
},
&cli.StringSliceFlag{
Name: "with-path",
Aliases: []string{"with-mount-path"},
Required: false,
Usage: "The external directory (for example mountpoint) in container that need to be committed",
EnvVars: []string{"WITH_PATH"},
},
},
Action: func(c *cli.Context) error {
setupLogLevel(c)
parsePaths := func(paths []string) ([]string, []string) {
withPaths := []string{}
withoutPaths := []string{}
for _, path := range paths {
path = strings.TrimSpace(path)
if strings.HasPrefix(path, "!") {
path = strings.TrimLeft(path, "!")
path = strings.TrimRight(path, "/")
withoutPaths = append(withoutPaths, path)
} else {
withPaths = append(withPaths, path)
}
}
return withPaths, withoutPaths
}
withPaths, withoutPaths := parsePaths(c.StringSlice("with-path"))
opt := committer.Opt{
WorkDir: c.String("work-dir"),
NydusImagePath: c.String("nydus-image"),
ContainerdAddress: c.String("containerd-address"),
Namespace: c.String("namespace"),
ContainerID: c.String("container"),
TargetRef: c.String("target"),
SourceInsecure: c.Bool("source-insecure"),
TargetInsecure: c.Bool("target-insecure"),
MaximumTimes: c.Int("maximum-times"),
WithPaths: withPaths,
WithoutPaths: withoutPaths,
}
cm, err := committer.NewCommitter(opt)
if err != nil {
return errors.Wrap(err, "failed to create committer instance")
}
return cm.Commit(c.Context, opt)
},
},
}

if !utils.IsSupportedArch(runtime.GOARCH) {
@ -1129,4 +1438,39 @@ func setupLogLevel(c *cli.Context) {
}
logrus.SetLevel(logLevel)
if c.String("log-file") != "" {
f, err := os.OpenFile(c.String("log-file"), os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
logrus.Errorf("failed to open log file: %+v", err)
return
}
logrus.SetOutput(f)
}
}
func getGlobalFlags() []cli.Flag {
return []cli.Flag{
&cli.BoolFlag{
Name: "debug",
Aliases: []string{"D"},
Required: false,
Value: false,
Usage: "Enable debug log level, overwrites the 'log-level' option",
EnvVars: []string{"DEBUG_LOG_LEVEL"},
},
&cli.StringFlag{
Name: "log-level",
Aliases: []string{"l"},
Value: "info",
Usage: "Set log level (panic, fatal, error, warn, info, debug, trace)",
EnvVars: []string{"LOG_LEVEL"},
},
&cli.StringFlag{
Name: "log-file",
Required: false,
Usage: "Write logs to a file",
EnvVars: []string{"LOG_FILE"},
},
}
}

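The --push-retry-count and --push-retry-delay options added to the convert command above imply a retry loop around the registry push. Below is a minimal sketch of that pattern, assuming a hypothetical pushManifest callback; in the real code the two options are wired through converter.Opt (PushRetryCount, PushRetryDelay) and the retry itself happens inside the converter:

    package main

    import (
        "fmt"
        "time"
    )

    // pushWithRetry runs pushManifest once, then retries up to retryCount
    // more times, sleeping retryDelay between attempts. retryDelay uses Go
    // duration syntax, matching the flag's "5s, 1m, 1h" examples.
    func pushWithRetry(pushManifest func() error, retryCount int, retryDelay string) error {
        delay, err := time.ParseDuration(retryDelay)
        if err != nil {
            return fmt.Errorf("invalid retry delay %q: %w", retryDelay, err)
        }
        var lastErr error
        for attempt := 0; attempt <= retryCount; attempt++ {
            if lastErr = pushManifest(); lastErr == nil {
                return nil
            }
            if attempt < retryCount {
                fmt.Printf("push failed (attempt %d), retrying in %s: %v\n", attempt+1, delay, lastErr)
                time.Sleep(delay)
            }
        }
        return fmt.Errorf("push failed after %d retries: %w", retryCount, lastErr)
    }

    func main() {
        attempts := 0
        // Simulate a push that fails twice before succeeding.
        err := pushWithRetry(func() error {
            attempts++
            if attempts < 3 {
                return fmt.Errorf("registry unavailable")
            }
            return nil
        }, 3, "100ms")
        fmt.Println("result:", err)
    }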
View File

@ -1,4 +1,5 @@
// Copyright 2023 Alibaba Cloud. All rights reserved.
+// Copyright 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
@ -6,10 +7,16 @@ package main
import (
    "encoding/json"
+    "flag"
+    "fmt"
    "os"
    "testing"

+    "github.com/agiledragon/gomonkey/v2"
+    "github.com/sirupsen/logrus"
+    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
+    "github.com/urfave/cli/v2"
)

func TestIsPossibleValue(t *testing.T) {
@ -35,6 +42,12 @@ func TestAddReferenceSuffix(t *testing.T) {
    _, err = addReferenceSuffix(source, suffix)
    require.Error(t, err)
    require.Contains(t, err.Error(), "invalid source image reference")

+    source = "localhost:5000/nginx:latest@sha256:757574c5a2102627de54971a0083d4ecd24eb48fdf06b234d063f19f7bbc22fb"
+    suffix = "-suffix"
+    _, err = addReferenceSuffix(source, suffix)
+    require.Error(t, err)
+    require.Contains(t, err.Error(), "unsupported digested image reference")
}

func TestParseBackendConfig(t *testing.T) {
@ -65,4 +78,316 @@ func TestParseBackendConfig(t *testing.T) {
    // Failure situation
    _, err = parseBackendConfig(configJSON, file.Name())
    require.Error(t, err)
_, err = parseBackendConfig("", "non-existent.json")
require.Error(t, err)
}
func TestGetBackendConfig(t *testing.T) {
tests := []struct {
backendType string
backendConfig string
}{
{
backendType: "oss",
backendConfig: `
{
"bucket_name": "test",
"endpoint": "region.oss.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"meta_prefix": "meta",
"blob_prefix": "blob"
}`,
},
{
backendType: "localfs",
backendConfig: `
{
"dir": "/path/to/blobs"
}`,
},
}
for _, test := range tests {
t.Run(fmt.Sprintf("backend config %s", test.backendType), func(t *testing.T) {
app := &cli.App{
Flags: []cli.Flag{
&cli.StringFlag{
Name: "prefixbackend-type",
Value: "",
},
&cli.StringFlag{
Name: "prefixbackend-config",
Value: "",
},
&cli.StringFlag{
Name: "prefixbackend-config-file",
Value: "",
},
},
}
ctx := cli.NewContext(app, nil, nil)
backendType, backendConfig, err := getBackendConfig(ctx, "prefix", false)
require.NoError(t, err)
require.Empty(t, backendType)
require.Empty(t, backendConfig)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.Error(t, err)
require.Contains(t, err.Error(), "backend type is empty, please specify option")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
flagSet := flag.NewFlagSet("test1", flag.PanicOnError)
flagSet.String("prefixbackend-type", "errType", "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.Error(t, err)
require.Contains(t, err.Error(), "backend-type should be one of")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
flagSet = flag.NewFlagSet("test2", flag.PanicOnError)
flagSet.String("prefixbackend-type", test.backendType, "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.Error(t, err)
require.Contains(t, err.Error(), "backend configuration is empty, please specify option")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
require.True(t, json.Valid([]byte(test.backendConfig)))
flagSet = flag.NewFlagSet("test3", flag.PanicOnError)
flagSet.String("prefixbackend-type", test.backendType, "")
flagSet.String("prefixbackend-config", test.backendConfig, "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.NoError(t, err)
require.Equal(t, test.backendType, backendType)
require.Equal(t, test.backendConfig, backendConfig)
file, err := os.CreateTemp("", "nydusify-backend-config-test.json")
require.NoError(t, err)
defer os.RemoveAll(file.Name())
_, err = file.WriteString(test.backendConfig)
require.NoError(t, err)
file.Sync()
flagSet = flag.NewFlagSet("test4", flag.PanicOnError)
flagSet.String("prefixbackend-type", test.backendType, "")
flagSet.String("prefixbackend-config-file", file.Name(), "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.NoError(t, err)
require.Equal(t, test.backendType, backendType)
require.Equal(t, test.backendConfig, backendConfig)
flagSet = flag.NewFlagSet("test5", flag.PanicOnError)
flagSet.String("prefixbackend-type", test.backendType, "")
flagSet.String("prefixbackend-config", test.backendConfig, "")
flagSet.String("prefixbackend-config-file", file.Name(), "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.Error(t, err)
require.Contains(t, err.Error(), "--backend-config conflicts with --backend-config-file")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
})
}
}
func TestGetTargetReference(t *testing.T) {
app := &cli.App{
Flags: []cli.Flag{
&cli.StringFlag{
Name: "target",
Value: "",
},
&cli.StringFlag{
Name: "target-suffix",
Value: "",
},
&cli.StringFlag{
Name: "source",
Value: "",
},
},
}
ctx := cli.NewContext(app, nil, nil)
target, err := getTargetReference(ctx)
require.Error(t, err)
require.Contains(t, err.Error(), "--target or --target-suffix is required")
require.Empty(t, target)
flagSet := flag.NewFlagSet("test1", flag.PanicOnError)
flagSet.String("target", "testTarget", "")
flagSet.String("target-suffix", "testSuffix", "")
ctx = cli.NewContext(app, flagSet, nil)
target, err = getTargetReference(ctx)
require.Error(t, err)
require.Contains(t, err.Error(), "-target conflicts with --target-suffix")
require.Empty(t, target)
flagSet = flag.NewFlagSet("test2", flag.PanicOnError)
flagSet.String("target-suffix", "-nydus", "")
flagSet.String("source", "localhost:5000/nginx:latest", "")
ctx = cli.NewContext(app, flagSet, nil)
target, err = getTargetReference(ctx)
require.NoError(t, err)
require.Equal(t, "localhost:5000/nginx:latest-nydus", target)
flagSet = flag.NewFlagSet("test3", flag.PanicOnError)
flagSet.String("target-suffix", "-nydus", "")
flagSet.String("source", "localhost:5000\nginx:latest", "")
ctx = cli.NewContext(app, flagSet, nil)
target, err = getTargetReference(ctx)
require.Error(t, err)
require.Contains(t, err.Error(), "invalid source image reference")
require.Empty(t, target)
flagSet = flag.NewFlagSet("test4", flag.PanicOnError)
flagSet.String("target", "testTarget", "")
ctx = cli.NewContext(app, flagSet, nil)
target, err = getTargetReference(ctx)
require.NoError(t, err)
require.Equal(t, "testTarget", target)
}
func TestGetCacheReference(t *testing.T) {
app := &cli.App{
Flags: []cli.Flag{
&cli.StringFlag{
Name: "build-cache",
Value: "",
},
&cli.StringFlag{
Name: "build-cache-tag",
Value: "",
},
},
}
ctx := cli.NewContext(app, nil, nil)
cache, err := getCacheReference(ctx, "")
require.NoError(t, err)
require.Empty(t, cache)
flagSet := flag.NewFlagSet("test1", flag.PanicOnError)
flagSet.String("build-cache", "cache", "")
flagSet.String("build-cache-tag", "cacheTag", "")
ctx = cli.NewContext(app, flagSet, nil)
cache, err = getCacheReference(ctx, "")
require.Error(t, err)
require.Contains(t, err.Error(), "--build-cache conflicts with --build-cache-tag")
require.Empty(t, cache)
flagSet = flag.NewFlagSet("test2", flag.PanicOnError)
flagSet.String("build-cache-tag", "cacheTag", "errTarget")
ctx = cli.NewContext(app, flagSet, nil)
cache, err = getCacheReference(ctx, "")
require.Error(t, err)
require.Contains(t, err.Error(), "invalid target image reference: invalid reference format")
require.Empty(t, cache)
flagSet = flag.NewFlagSet("test2", flag.PanicOnError)
flagSet.String("build-cache-tag", "latest-cache", "")
ctx = cli.NewContext(app, flagSet, nil)
cache, err = getCacheReference(ctx, "localhost:5000/nginx:latest")
require.NoError(t, err)
require.Equal(t, "localhost:5000/nginx:latest-cache", cache)
}
func TestGetPrefetchPatterns(t *testing.T) {
app := &cli.App{
Flags: []cli.Flag{
&cli.StringFlag{
Name: "prefetch-dir",
Value: "",
},
&cli.BoolFlag{
Name: "prefetch-patterns",
Value: false,
},
},
}
ctx := cli.NewContext(app, nil, nil)
patterns, err := getPrefetchPatterns(ctx)
require.NoError(t, err)
require.Equal(t, "/", patterns)
flagSet := flag.NewFlagSet("test1", flag.PanicOnError)
flagSet.String("prefetch-dir", "/etc/passwd", "")
ctx = cli.NewContext(app, flagSet, nil)
patterns, err = getPrefetchPatterns(ctx)
require.NoError(t, err)
require.Equal(t, "/etc/passwd", patterns)
flagSet = flag.NewFlagSet("test2", flag.PanicOnError)
flagSet.String("prefetch-dir", "/etc/passwd", "")
flagSet.Bool("prefetch-patterns", true, "")
ctx = cli.NewContext(app, flagSet, nil)
patterns, err = getPrefetchPatterns(ctx)
require.Error(t, err)
require.Contains(t, err.Error(), "--prefetch-dir conflicts with --prefetch-patterns")
require.Empty(t, patterns)
flagSet = flag.NewFlagSet("test3", flag.PanicOnError)
flagSet.Bool("prefetch-patterns", true, "")
ctx = cli.NewContext(app, flagSet, nil)
patterns, err = getPrefetchPatterns(ctx)
require.NoError(t, err)
require.Equal(t, "/", patterns)
}
func TestGetGlobalFlags(t *testing.T) {
flags := getGlobalFlags()
require.Equal(t, 3, len(flags))
}
func TestSetupLogLevelWithLogFile(t *testing.T) {
logFilePath := "test_log_file.log"
defer os.Remove(logFilePath)
c := &cli.Context{}
patches := gomonkey.ApplyMethodSeq(c, "String", []gomonkey.OutputCell{
{Values: []interface{}{"info"}, Times: 1},
{Values: []interface{}{"test_log_file.log"}, Times: 2},
})
defer patches.Reset()
setupLogLevel(c)
file, err := os.Open(logFilePath)
assert.NoError(t, err)
assert.NotNil(t, file)
file.Close()
logrusOutput := logrus.StandardLogger().Out
assert.NotNil(t, logrusOutput)
logrus.Info("This is a test log message")
content, err := os.ReadFile(logFilePath)
assert.NoError(t, err)
assert.Contains(t, string(content), "This is a test log message")
}
func TestSetupLogLevelWithInvalidLogFile(t *testing.T) {
c := &cli.Context{}
patches := gomonkey.ApplyMethodSeq(c, "String", []gomonkey.OutputCell{
{Values: []interface{}{"info"}, Times: 1},
{Values: []interface{}{"test/test_log_file.log"}, Times: 2},
})
defer patches.Reset()
setupLogLevel(c)
logrusOutput := logrus.StandardLogger().Out
assert.NotNil(t, logrusOutput)
}

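The digested-reference case added to TestAddReferenceSuffix above encodes the rule that a tag suffix cannot be appended to a reference that already pins a digest. A minimal sketch of that check, assuming a hypothetical canAppendSuffix helper built on github.com/distribution/reference (the real logic lives in addReferenceSuffix):

    package main

    import (
        "fmt"

        "github.com/distribution/reference"
    )

    // canAppendSuffix rejects references that already carry a digest
    // (name@sha256:...), mirroring the test's expected errors.
    func canAppendSuffix(ref string) error {
        named, err := reference.ParseNormalizedNamed(ref)
        if err != nil {
            return fmt.Errorf("invalid source image reference: %w", err)
        }
        if _, ok := named.(reference.Digested); ok {
            return fmt.Errorf("unsupported digested image reference: %s", named.String())
        }
        return nil
    }

    func main() {
        fmt.Println(canAppendSuffix("localhost:5000/nginx:latest"))    // <nil>
        fmt.Println(canAppendSuffix("localhost:5000/nginx@sha256:757574c5a2102627de54971a0083d4ecd24eb48fdf06b234d063f19f7bbc22fb")) // error
    }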
View File

@ -3,7 +3,7 @@ package main
import (
    "context"

-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/converter"
+    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter"
)

func main() {

View File

@ -1,124 +1,131 @@
-module github.com/dragonflyoss/image-service/contrib/nydusify
+module github.com/dragonflyoss/nydus/contrib/nydusify

-go 1.20
+go 1.23.1
+
+toolchain go1.23.6

require (
-    github.com/aliyun/aliyun-oss-go-sdk v3.0.1+incompatible
+    github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
-    github.com/aws/aws-sdk-go-v2 v1.23.5
+    github.com/aws/aws-sdk-go-v2 v1.24.1
-    github.com/aws/aws-sdk-go-v2/config v1.25.11
+    github.com/aws/aws-sdk-go-v2/config v1.26.6
-    github.com/aws/aws-sdk-go-v2/credentials v1.16.9
+    github.com/aws/aws-sdk-go-v2/credentials v1.16.16
-    github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.4
+    github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.15
-    github.com/aws/aws-sdk-go-v2/service/s3 v1.47.2
+    github.com/aws/aws-sdk-go-v2/service/s3 v1.48.1
-    github.com/containerd/containerd v1.7.6
+    github.com/containerd/containerd v1.7.18
-    github.com/containerd/nydus-snapshotter v0.13.3
+    github.com/containerd/continuity v0.4.3
+    github.com/containerd/errdefs v0.1.0
+    github.com/containerd/nydus-snapshotter v0.13.11
    github.com/distribution/reference v0.5.0
-    github.com/docker/cli v24.0.7+incompatible
+    github.com/docker/cli v26.0.0+incompatible
    github.com/dustin/go-humanize v1.0.1
-    github.com/goharbor/acceleration-service v0.2.12
+    github.com/goharbor/acceleration-service v0.2.14
-    github.com/google/uuid v1.4.0
+    github.com/google/uuid v1.6.0
-    github.com/hashicorp/go-hclog v1.5.0
+    github.com/hashicorp/go-hclog v1.6.2
    github.com/hashicorp/go-plugin v1.6.0
+    github.com/moby/buildkit v0.13.0
    github.com/opencontainers/go-digest v1.0.0
-    github.com/opencontainers/image-spec v1.1.0-rc5
+    github.com/opencontainers/image-spec v1.1.0
    github.com/pkg/errors v0.9.1
    github.com/pkg/xattr v0.4.9
-    github.com/prometheus/client_golang v1.17.0
+    github.com/prometheus/client_golang v1.19.0
    github.com/sirupsen/logrus v1.9.3
-    github.com/stretchr/testify v1.8.4
+    github.com/stretchr/testify v1.9.0
-    github.com/urfave/cli/v2 v2.25.7
+    github.com/urfave/cli/v2 v2.27.1
-    golang.org/x/sync v0.5.0
+    golang.org/x/sync v0.6.0
-    golang.org/x/sys v0.15.0
+    golang.org/x/sys v0.18.0
    lukechampine.com/blake3 v1.2.1
)

require (
    github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
-    github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 // indirect
+    github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2 // indirect
-    github.com/Microsoft/go-winio v0.6.1 // indirect
-    github.com/Microsoft/hcsshim v0.11.4 // indirect
+    github.com/BraveY/snapshotter-converter v0.0.5 // indirect
+    github.com/CloudNativeAI/model-spec v0.0.2 // indirect
+    github.com/Microsoft/go-winio v0.6.2 // indirect
+    github.com/Microsoft/hcsshim v0.11.5 // indirect
+    github.com/agiledragon/gomonkey/v2 v2.13.0 // indirect
-    github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.3 // indirect
+    github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 // indirect
-    github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.9 // indirect
+    github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 // indirect
-    github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.8 // indirect
+    github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 // indirect
-    github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.8 // indirect
+    github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.10 // indirect
-    github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 // indirect
+    github.com/aws/aws-sdk-go-v2/internal/ini v1.7.3 // indirect
-    github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.8 // indirect
+    github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.10 // indirect
-    github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.3 // indirect
+    github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.4 // indirect
-    github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.8 // indirect
+    github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.10 // indirect
-    github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.8 // indirect
+    github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.10 // indirect
-    github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.8 // indirect
+    github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.10 // indirect
-    github.com/aws/aws-sdk-go-v2/service/sso v1.18.2 // indirect
+    github.com/aws/aws-sdk-go-v2/service/sso v1.18.7 // indirect
-    github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.2 // indirect
+    github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.7 // indirect
-    github.com/aws/aws-sdk-go-v2/service/sts v1.26.2 // indirect
+    github.com/aws/aws-sdk-go-v2/service/sts v1.26.7 // indirect
-    github.com/aws/smithy-go v1.18.1 // indirect
+    github.com/aws/smithy-go v1.19.0 // indirect
    github.com/beorn7/perks v1.0.1 // indirect
    github.com/cespare/xxhash/v2 v2.2.0 // indirect
    github.com/containerd/cgroups v1.1.0 // indirect
-    github.com/containerd/continuity v0.4.2 // indirect
    github.com/containerd/fifo v1.1.0 // indirect
    github.com/containerd/log v0.1.0 // indirect
-    github.com/containerd/stargz-snapshotter v0.14.3 // indirect
+    github.com/containerd/stargz-snapshotter v0.15.1 // indirect
-    github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
+    github.com/containerd/stargz-snapshotter/estargz v0.15.1 // indirect
-    github.com/containerd/ttrpc v1.2.2 // indirect
+    github.com/containerd/ttrpc v1.2.4 // indirect
    github.com/containerd/typeurl/v2 v2.1.1 // indirect
-    github.com/containers/ocicrypt v1.1.7 // indirect
+    github.com/containers/ocicrypt v1.1.10 // indirect
-    github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
+    github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
    github.com/davecgh/go-spew v1.1.1 // indirect
-    github.com/docker/docker v23.0.3+incompatible // indirect
+    github.com/docker/docker v25.0.6+incompatible // indirect
-    github.com/docker/docker-credential-helpers v0.7.0 // indirect
+    github.com/docker/docker-credential-helpers v0.8.0 // indirect
    github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
-    github.com/fatih/color v1.14.1 // indirect
+    github.com/fatih/color v1.16.0 // indirect
-    github.com/go-logr/logr v1.2.4 // indirect
+    github.com/felixge/httpsnoop v1.0.4 // indirect
+    github.com/go-jose/go-jose/v3 v3.0.3 // indirect
+    github.com/go-logr/logr v1.4.1 // indirect
    github.com/go-logr/stdr v1.2.2 // indirect
    github.com/gogo/protobuf v1.3.2 // indirect
    github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
-    github.com/golang/protobuf v1.5.3 // indirect
+    github.com/golang/protobuf v1.5.4 // indirect
-    github.com/google/go-cmp v0.5.9 // indirect
+    github.com/google/go-cmp v0.6.0 // indirect
-    github.com/hashicorp/golang-lru/v2 v2.0.5 // indirect
+    github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
    github.com/hashicorp/yamux v0.1.1 // indirect
    github.com/jmespath/go-jmespath v0.4.0 // indirect
-    github.com/klauspost/compress v1.16.3 // indirect
+    github.com/klauspost/compress v1.17.4 // indirect
-    github.com/klauspost/cpuid/v2 v2.0.9 // indirect
+    github.com/klauspost/cpuid/v2 v2.2.6 // indirect
    github.com/mattn/go-colorable v0.1.13 // indirect
-    github.com/mattn/go-isatty v0.0.19 // indirect
+    github.com/mattn/go-isatty v0.0.20 // indirect
-    github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
    github.com/miekg/pkcs11 v1.1.1 // indirect
    github.com/mitchellh/go-testing-interface v1.14.1 // indirect
    github.com/moby/locker v1.0.1 // indirect
-    github.com/moby/sys/mountinfo v0.6.2 // indirect
+    github.com/moby/sys/mountinfo v0.7.1 // indirect
    github.com/moby/sys/sequential v0.5.0 // indirect
    github.com/moby/sys/signal v0.7.0 // indirect
+    github.com/moby/sys/user v0.1.0 // indirect
    github.com/oklog/run v1.1.0 // indirect
-    github.com/opencontainers/runc v1.1.5 // indirect
-    github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
+    github.com/opencontainers/runtime-spec v1.1.0 // indirect
    github.com/opencontainers/selinux v1.11.0 // indirect
    github.com/pmezard/go-difflib v1.0.0 // indirect
-    github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16 // indirect
+    github.com/prometheus/client_model v0.6.0 // indirect
-    github.com/prometheus/common v0.44.0 // indirect
+    github.com/prometheus/common v0.50.0 // indirect
-    github.com/prometheus/procfs v0.11.1 // indirect
+    github.com/prometheus/procfs v0.13.0 // indirect
+    github.com/rogpeppe/go-internal v1.12.0 // indirect
    github.com/russross/blackfriday/v2 v2.1.0 // indirect
-    github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
+    github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6 // indirect
-    github.com/stretchr/objx v0.5.0 // indirect
+    github.com/stretchr/objx v0.5.2 // indirect
-    github.com/vbatts/tar-split v0.11.2 // indirect
+    github.com/vbatts/tar-split v0.11.5 // indirect
-    github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
+    github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e // indirect
-    go.etcd.io/bbolt v1.3.7 // indirect
+    go.etcd.io/bbolt v1.3.10 // indirect
-    go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1 // indirect
+    go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 // indirect
    go.opencensus.io v0.24.0 // indirect
-    go.opentelemetry.io/otel v1.19.0 // indirect
-    go.opentelemetry.io/otel/metric v1.19.0 // indirect
-    go.opentelemetry.io/otel/trace v1.19.0 // indirect
-    golang.org/x/crypto v0.14.0 // indirect
-    golang.org/x/mod v0.11.0 // indirect
+    go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect
+    go.opentelemetry.io/otel v1.21.0 // indirect
+    go.opentelemetry.io/otel/metric v1.21.0 // indirect
+    go.opentelemetry.io/otel/trace v1.21.0 // indirect
+    golang.org/x/crypto v0.21.0 // indirect
golang.org/x/net v0.17.0 // indirect golang.org/x/net v0.23.0 // indirect
golang.org/x/term v0.13.0 // indirect golang.org/x/term v0.18.0 // indirect
golang.org/x/text v0.13.0 // indirect golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.3.0 // indirect golang.org/x/time v0.5.0 // indirect
golang.org/x/tools v0.10.0 // indirect google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto v0.0.0-20231012201019-e917dd12ba7a // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b // indirect google.golang.org/grpc v1.62.1 // indirect
google.golang.org/grpc v1.59.0 // indirect google.golang.org/protobuf v1.33.0 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/square/go-jose.v2 v2.5.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect
) )
replace github.com/containerd/containerd => github.com/nydusaccelerator/containerd v0.0.0-20231121100328-6c4d1f35ac28 replace github.com/containerd/containerd => github.com/nydusaccelerator/containerd v1.7.18-nydus.10
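The replace directive above pins github.com/containerd/containerd to the nydusaccelerator fork, now at a proper tagged version instead of a pseudo-version. As a minimal illustration (not part of the diff itself), an override like this is typically produced with go mod edit; the module path and version below are taken from the + line above:

    # Point the containerd dependency at the fork, then re-resolve the module graph.
    go mod edit -replace github.com/containerd/containerd=github.com/nydusaccelerator/containerd@v1.7.18-nydus.10
    go mod tidy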

go.sum
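Each module in go.sum can carry two kinds of entries: an h1: hash over the module's full file tree, and a /go.mod h1: hash covering only its go.mod file (with module graph pruning, one of the two may be absent for a given version). For example, the pair for one unchanged module in the hunks below reads:

    github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
    github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=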

@@ -1,103 +1,109 @@
 cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
 github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU=
 github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
-github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 h1:59MxjQVfjXsBpLy+dbd2/ELV5ofnUkUZBvWSC85sheA=
-github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0/go.mod h1:OahwfttHWG6eJ0clwcfBAHoDI6X/LV/15hx/wlMZSrU=
+github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2 h1:dIScnXFlF784X79oi7MzVT6GWqr/W1uUt0pB5CsDs9M=
+github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2/go.mod h1:gCLVsLfv1egrcZu+GoJATN5ts75F2s62ih/457eWzOw=
+github.com/BraveY/snapshotter-converter v0.0.5 h1:h3zAB31u16EOkshS2J9Nx40RiWSjH6zd5baOSmjLCOg=
+github.com/BraveY/snapshotter-converter v0.0.5/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
+github.com/BraveY/snapshotter-converter v0.0.6-0.20250409034316-66511579fa6d h1:00wAtig4otPLOMJN+CZHvG4MWm+g4NMY6j0K7eYEFNk=
+github.com/BraveY/snapshotter-converter v0.0.6-0.20250409034316-66511579fa6d/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
+github.com/BraveY/snapshotter-converter v0.0.6-0.20250409042404-e997e14906b7 h1:c9aFn0vSkXe1nrGe5mONSRs18/BXJKEiSiHvZyaXlBE=
+github.com/BraveY/snapshotter-converter v0.0.6-0.20250409042404-e997e14906b7/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
-github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
-github.com/Microsoft/hcsshim v0.11.4 h1:68vKo2VN8DE9AdN4tnkWnmdhqdbpUFM8OF3Airm7fz8=
-github.com/Microsoft/hcsshim v0.11.4/go.mod h1:smjE4dvqPX9Zldna+t5FG3rnoHhaB7QYxPRqGcpAD9w=
-github.com/aliyun/aliyun-oss-go-sdk v3.0.1+incompatible h1:so4m5rRA32Tc5GgKg/5gKUu0CRsYmVO3ThMP6T3CwLc=
-github.com/aliyun/aliyun-oss-go-sdk v3.0.1+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
-github.com/aws/aws-sdk-go-v2 v1.23.5 h1:xK6C4udTyDMd82RFvNkDQxtAd00xlzFUtX4fF2nMZyg=
-github.com/aws/aws-sdk-go-v2 v1.23.5/go.mod h1:t3szzKfP0NeRU27uBFczDivYJjsmSnqI8kIvKyWb9ds=
-github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.3 h1:Zx9+31KyB8wQna6SXFWOewlgoY5uGdDAu6PTOEU3OQI=
-github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.3/go.mod h1:zxbEJhRdKTH1nqS2qu6UJ7zGe25xaHxZXaC2CvuQFnA=
-github.com/aws/aws-sdk-go-v2/config v1.25.11 h1:RWzp7jhPRliIcACefGkKp03L0Yofmd2p8M25kbiyvno=
-github.com/aws/aws-sdk-go-v2/config v1.25.11/go.mod h1:BVUs0chMdygHsQtvaMyEOpW2GIW+ubrxJLgIz/JU29s=
-github.com/aws/aws-sdk-go-v2/credentials v1.16.9 h1:LQo3MUIOzod9JdUK+wxmSdgzLVYUbII3jXn3S/HJZU0=
-github.com/aws/aws-sdk-go-v2/credentials v1.16.9/go.mod h1:R7mDuIJoCjH6TxGUc/cylE7Lp/o0bhKVoxdBThsjqCM=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.9 h1:FZVFahMyZle6WcogZCOxo6D/lkDA2lqKIn4/ueUmVXw=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.9/go.mod h1:kjq7REMIkxdtcEC9/4BVXjOsNY5isz6jQbEgk6osRTU=
-github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.4 h1:TUCNKBd4/JEefsZDxo5deRmrRRPZHqGyBYiUAeBKOWU=
-github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.4/go.mod h1:egDkcl+zsgFqS6VO142bKboip5Pe1sNMwN55Xy38QsM=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.8 h1:8GVZIR0y6JRIUNSYI1xAMF4HDfV8H/bOsZ/8AD/uY5Q=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.8/go.mod h1:rwBfu0SoUkBUZndVgPZKAD9Y2JigaZtRP68unRiYToQ=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.8 h1:ZE2ds/qeBkhk3yqYvS3CDCFNvd9ir5hMjlVStLZWrvM=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.8/go.mod h1:/lAPPymDYL023+TS6DJmjuL42nxix2AvEvfjqOBRODk=
-github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 h1:uR9lXYjdPX0xY+NhvaJ4dD8rpSRz5VY81ccIIoNG+lw=
-github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1/go.mod h1:6fQQgfuGmw8Al/3M2IgIllycxV7ZW7WCdVSqfBeUiCY=
-github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.8 h1:abKT+RuM1sdCNZIGIfZpLkvxEX3Rpsto019XG/rkYG8=
-github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.8/go.mod h1:Owc4ysUE71JSruVTTa3h4f2pp3E4hlcAtmeNXxDmjj4=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.3 h1:e3PCNeEaev/ZF01cQyNZgmYE9oYYePIMJs2mWSKG514=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.3/go.mod h1:gIeeNyaL8tIEqZrzAnTeyhHcE0yysCtcaP+N9kxLZ+E=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.8 h1:xyfOAYV/ujzZOo01H9+OnyeiRKmTEp6EsITTsmq332Q=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.8/go.mod h1:coLeQEoKzW9ViTL2bn0YUlU7K0RYjivKudG74gtd+sI=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.8 h1:EamsKe+ZjkOQjDdHd86/JCEucjFKQ9T0atWKO4s2Lgs=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.8/go.mod h1:Q0vV3/csTpbkfKLI5Sb56cJQTCTtJ0ixdb7P+Wedqiw=
-github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.8 h1:ip5ia3JOXl4OAsqeTdrOOmqKgoWiu+t9XSOnRzBwmRs=
-github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.8/go.mod h1:kE+aERnK9VQIw1vrk7ElAvhCsgLNzGyCPNg2Qe4Eq4c=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.47.2 h1:DLSAG8zpJV2pYsU+UPkj1IEZghyBnnUsvIRs6UuXSDU=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.47.2/go.mod h1:thjZng67jGsvMyVZnSxlcqKyLwB0XTG8bHIRZPTJ+Bs=
-github.com/aws/aws-sdk-go-v2/service/sso v1.18.2 h1:xJPydhNm0Hiqct5TVKEuHG7weC0+sOs4MUnd7A5n5F4=
-github.com/aws/aws-sdk-go-v2/service/sso v1.18.2/go.mod h1:zxk6y1X2KXThESWMS5CrKRvISD8mbIMab6nZrCGxDG0=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.2 h1:8dU9zqA77C5egbU6yd4hFLaiIdPv3rU+6cp7sz5FjCU=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.2/go.mod h1:7Lt5mjQ8x5rVdKqg+sKKDeuwoszDJIIPmkd8BVsEdS0=
-github.com/aws/aws-sdk-go-v2/service/sts v1.26.2 h1:fFrLsy08wEbAisqW3KDl/cPHrF43GmV79zXB9EwJiZw=
-github.com/aws/aws-sdk-go-v2/service/sts v1.26.2/go.mod h1:7Ld9eTqocTvJqqJ5K/orbSDwmGcpRdlDiLjz2DO+SL8=
-github.com/aws/smithy-go v1.18.1 h1:pOdBTUfXNazOlxLrgeYalVnuTpKreACHtc62xLwIB3c=
-github.com/aws/smithy-go v1.18.1/go.mod h1:NukqUGpCZIILqqiV0NIjeFh24kd/FAa4beRb6nbIUPE=
+github.com/CloudNativeAI/model-spec v0.0.2 h1:uCO86kMk8wwadn8vKs0wT4petig5crByTIngdO3L2cQ=
+github.com/CloudNativeAI/model-spec v0.0.2/go.mod h1:3U/4zubBfbUkW59ATSg41HnkYyKrKUcKFH/cVdoPQnk=
+github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
+github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
+github.com/Microsoft/hcsshim v0.11.5 h1:haEcLNpj9Ka1gd3B3tAEs9CpE0c+1IhoL59w/exYU38=
+github.com/Microsoft/hcsshim v0.11.5/go.mod h1:MV8xMfmECjl5HdO7U/3/hFVnkmSBjAjmA09d4bExKcU=
+github.com/agiledragon/gomonkey/v2 v2.13.0 h1:B24Jg6wBI1iB8EFR1c+/aoTg7QN/Cum7YffG8KMIyYo=
+github.com/agiledragon/gomonkey/v2 v2.13.0/go.mod h1:ap1AmDzcVOAz1YpeJ3TCzIgstoaWLA6jbbgxfB4w2iY=
+github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible h1:8psS8a+wKfiLt1iVDX79F7Y6wUM49Lcha2FMXt4UM8g=
+github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
+github.com/aws/aws-sdk-go-v2 v1.24.1 h1:xAojnj+ktS95YZlDf0zxWBkbFtymPeDP+rvUQIH3uAU=
+github.com/aws/aws-sdk-go-v2 v1.24.1/go.mod h1:LNh45Br1YAkEKaAqvmE1m8FUx6a5b/V0oAKV7of29b4=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 h1:OCs21ST2LrepDfD3lwlQiOqIGp6JiEUqG84GzTDoyJs=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4/go.mod h1:usURWEKSNNAcAZuzRn/9ZYPT8aZQkR7xcCtunK/LkJo=
+github.com/aws/aws-sdk-go-v2/config v1.26.6 h1:Z/7w9bUqlRI0FFQpetVuFYEsjzE3h7fpU6HuGmfPL/o=
+github.com/aws/aws-sdk-go-v2/config v1.26.6/go.mod h1:uKU6cnDmYCvJ+pxO9S4cWDb2yWWIH5hra+32hVh1MI4=
+github.com/aws/aws-sdk-go-v2/credentials v1.16.16 h1:8q6Rliyv0aUFAVtzaldUEcS+T5gbadPbWdV1WcAddK8=
+github.com/aws/aws-sdk-go-v2/credentials v1.16.16/go.mod h1:UHVZrdUsv63hPXFo1H7c5fEneoVo9UXiz36QG1GEPi0=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 h1:c5I5iH+DZcH3xOIMlz3/tCKJDaHFwYEmxvlh2fAcFo8=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11/go.mod h1:cRrYDYAMUohBJUtUnOhydaMHtiK/1NZ0Otc9lIb6O0Y=
+github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.15 h1:2MUXyGW6dVaQz6aqycpbdLIH1NMcUI6kW6vQ0RabGYg=
+github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.15/go.mod h1:aHbhbR6WEQgHAiRj41EQ2W47yOYwNtIkWTXmcAtYqj8=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 h1:vF+Zgd9s+H4vOXd5BMaPWykta2a6Ih0AKLq/X6NYKn4=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10/go.mod h1:6BkRjejp/GR4411UGqkX8+wFMbFbqsUIimfK4XjOKR4=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.10 h1:nYPe006ktcqUji8S2mqXf9c/7NdiKriOwMvWQHgYztw=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.10/go.mod h1:6UV4SZkVvmODfXKql4LCbaZUpF7HO2BX38FgBf9ZOLw=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.7.3 h1:n3GDfwqF2tzEkXlv5cuy4iy7LpKDtqDMcNLfZDu9rls=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.7.3/go.mod h1:6fQQgfuGmw8Al/3M2IgIllycxV7ZW7WCdVSqfBeUiCY=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.10 h1:5oE2WzJE56/mVveuDZPJESKlg/00AaS2pY2QZcnxg4M=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.10/go.mod h1:FHbKWQtRBYUz4vO5WBWjzMD2by126ny5y/1EoaWoLfI=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.4 h1:/b31bi3YVNlkzkBrm9LfpaKoaYZUxIAj4sHfOTmLfqw=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.4/go.mod h1:2aGXHFmbInwgP9ZfpmdIfOELL79zhdNYNmReK8qDfdQ=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.10 h1:L0ai8WICYHozIKK+OtPzVJBugL7culcuM4E4JOpIEm8=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.10/go.mod h1:byqfyxJBshFk0fF9YmK0M0ugIO8OWjzH2T3bPG4eGuA=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.10 h1:DBYTXwIGQSGs9w4jKm60F5dmCQ3EEruxdc0MFh+3EY4=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.10/go.mod h1:wohMUQiFdzo0NtxbBg0mSRGZ4vL3n0dKjLTINdcIino=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.10 h1:KOxnQeWy5sXyS37fdKEvAsGHOr9fa/qvwxfJurR/BzE=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.10/go.mod h1:jMx5INQFYFYB3lQD9W0D8Ohgq6Wnl7NYOJ2TQndbulI=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.48.1 h1:5XNlsBsEvBZBMO6p82y+sqpWg8j5aBCe+5C2GBFgqBQ=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.48.1/go.mod h1:4qXHrG1Ne3VGIMZPCB8OjH/pLFO94sKABIusjh0KWPU=
+github.com/aws/aws-sdk-go-v2/service/sso v1.18.7 h1:eajuO3nykDPdYicLlP3AGgOyVN3MOlFmZv7WGTuJPow=
+github.com/aws/aws-sdk-go-v2/service/sso v1.18.7/go.mod h1:+mJNDdF+qiUlNKNC3fxn74WWNN+sOiGOEImje+3ScPM=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.7 h1:QPMJf+Jw8E1l7zqhZmMlFw6w1NmfkfiSK8mS4zOx3BA=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.21.7/go.mod h1:ykf3COxYI0UJmxcfcxcVuz7b6uADi1FkiUz6Eb7AgM8=
+github.com/aws/aws-sdk-go-v2/service/sts v1.26.7 h1:NzO4Vrau795RkUdSHKEwiR01FaGzGOH1EETJ+5QHnm0=
+github.com/aws/aws-sdk-go-v2/service/sts v1.26.7/go.mod h1:6h2YuIoxaMSCFf5fi1EgZAwdfkGMgDY+DVfa61uLe4U=
+github.com/aws/smithy-go v1.19.0 h1:KWFKQV80DpP3vJrrA9sVAHQ5gc2z8i4EzrLhLlWXcBM=
+github.com/aws/smithy-go v1.19.0/go.mod h1:NukqUGpCZIILqqiV0NIjeFh24kd/FAa4beRb6nbIUPE=
 github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
 github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
 github.com/bufbuild/protocompile v0.4.0 h1:LbFKd2XowZvQ/kajzguUp2DC9UEIQhIq77fZZlaQsNA=
+github.com/bufbuild/protocompile v0.4.0/go.mod h1:3v93+mbWn/v3xzN+31nwkJfrEpAUwp+BagBSZWx+TP8=
 github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
 github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
 github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
-github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
-github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
 github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
 github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
 github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM=
 github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
-github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
-github.com/containerd/continuity v0.4.2 h1:v3y/4Yz5jwnvqPKJJ+7Wf93fyWoCB3F5EclWG023MDM=
-github.com/containerd/continuity v0.4.2/go.mod h1:F6PTNCKepoxEaXLQp3wDAjygEnImnZ/7o4JzpodfroQ=
+github.com/containerd/continuity v0.4.3 h1:6HVkalIp+2u1ZLH1J/pYX2oBVXlJZvh1X1A7bEZ9Su8=
+github.com/containerd/continuity v0.4.3/go.mod h1:F6PTNCKepoxEaXLQp3wDAjygEnImnZ/7o4JzpodfroQ=
+github.com/containerd/errdefs v0.1.0 h1:m0wCRBiu1WJT/Fr+iOoQHMQS/eP5myQ8lCv4Dz5ZURM=
+github.com/containerd/errdefs v0.1.0/go.mod h1:YgWiiHtLmSeBrvpw+UfPijzbLaB77mEG1WwJTDETIV0=
 github.com/containerd/fifo v1.1.0 h1:4I2mbh5stb1u6ycIABlBw9zgtlK8viPI9QkQNRQEEmY=
 github.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=
 github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
 github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
-github.com/containerd/nydus-snapshotter v0.13.3 h1:aa4tz0l2Z+vLDuCCx5Ic6f1AQd9ZH/R0B5C/QXGY774=
-github.com/containerd/nydus-snapshotter v0.13.3/go.mod h1:XWAz9ytsjBuKPVXDKP3xoMlcSKNsGnjXlEup6DuzUIo=
+github.com/containerd/nydus-snapshotter v0.13.11 h1:0euz1viJ0/4sZ5P0GP28wKrd+m0YqKRQcM6GZjuSKZk=
+github.com/containerd/nydus-snapshotter v0.13.11/go.mod h1:VPVKQ3jmHFIcUIV2yiQ1kImZuBFS3GXDohKs9mRABVE=
-github.com/containerd/stargz-snapshotter v0.14.3 h1:OTUVZoPSPs8mGgmQUE1dqw3WX/3nrsmsurW7UPLWl1U=
-github.com/containerd/stargz-snapshotter v0.14.3/go.mod h1:j2Ya4JeA5gMZJr8BchSkPjlcCEh++auAxp4nidPI6N0=
-github.com/containerd/stargz-snapshotter/estargz v0.14.3 h1:OqlDCK3ZVUO6C3B/5FSkDwbkEETK84kQgEeFwDC+62k=
-github.com/containerd/stargz-snapshotter/estargz v0.14.3/go.mod h1:KY//uOCIkSuNAHhJogcZtrNHdKrA99/FCCRjE3HD36o=
+github.com/containerd/stargz-snapshotter v0.15.1 h1:fpsP4kf/Z4n2EYnU0WT8ZCE3eiKDwikDhL6VwxIlgeA=
+github.com/containerd/stargz-snapshotter v0.15.1/go.mod h1:74D+J1m1RMXytLmWxegXWhtOSRHPWZKpKc2NdK3S+us=
+github.com/containerd/stargz-snapshotter/estargz v0.15.1 h1:eXJjw9RbkLFgioVaTG+G/ZW/0kEe2oEKCdS/ZxIyoCU=
+github.com/containerd/stargz-snapshotter/estargz v0.15.1/go.mod h1:gr2RNwukQ/S9Nv33Lt6UC7xEx58C+LHRdoqbEKjz1Kk=
-github.com/containerd/ttrpc v1.2.2 h1:9vqZr0pxwOF5koz6N0N3kJ0zDHokrcPxIR/ZR2YFtOs=
-github.com/containerd/ttrpc v1.2.2/go.mod h1:sIT6l32Ph/H9cvnJsfXM5drIVzTr5A2flTf1G5tYZak=
+github.com/containerd/ttrpc v1.2.4 h1:eQCQK4h9dxDmpOb9QOOMh2NHTfzroH1IkmHiKZi05Oo=
+github.com/containerd/ttrpc v1.2.4/go.mod h1:ojvb8SJBSch0XkqNO0L0YX/5NxR3UnVk2LzFKBK0upc=
 github.com/containerd/typeurl/v2 v2.1.1 h1:3Q4Pt7i8nYwy2KmQWIw2+1hTvwTE/6w9FqcttATPO/4=
 github.com/containerd/typeurl/v2 v2.1.1/go.mod h1:IDp2JFvbwZ31H8dQbEIY7sDl2L3o3HZj1hsSQlywkQ0=
-github.com/containers/ocicrypt v1.1.7 h1:thhNr4fu2ltyGz8aMx8u48Ae0Pnbip3ePP9/mzkZ/3U=
-github.com/containers/ocicrypt v1.1.7/go.mod h1:7CAhjcj2H8AYp5YvEie7oVSK2AhBY8NscCYRawuDNtw=
+github.com/containers/ocicrypt v1.1.10 h1:r7UR6o8+lyhkEywetubUUgcKFjOWOaWz8cEBrCPX0ic=
+github.com/containers/ocicrypt v1.1.10/go.mod h1:YfzSSr06PTHQwSTUKqDSjish9BeW1E4HUmreluQcMd8=
-github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
-github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
-github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
-github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
+github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
+github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
-github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/distribution/reference v0.5.0 h1:/FUIFXtfc/x2gpa5/VGfiGLuOIdYa1t65IKK2OFGvA0=
 github.com/distribution/reference v0.5.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
-github.com/docker/cli v24.0.7+incompatible h1:wa/nIwYFW7BVTGa7SWPVyyXU9lgORqUb1xfI36MSkFg=
-github.com/docker/cli v24.0.7+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
+github.com/docker/cli v26.0.0+incompatible h1:90BKrx1a1HKYpSnnBFR6AgDq/FqkHxwlUyzJVPxD30I=
+github.com/docker/cli v26.0.0+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
-github.com/docker/docker v23.0.3+incompatible h1:9GhVsShNWz1hO//9BNg/dpMnZW25KydO4wtVxWAIbho=
-github.com/docker/docker v23.0.3+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v25.0.6+incompatible h1:5cPwbwriIcsua2REJe8HqQV+6WlWc1byg2QSXzBxBGg=
+github.com/docker/docker v25.0.6+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
-github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A=
-github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0=
+github.com/docker/docker-credential-helpers v0.8.0 h1:YQFtbBQb4VrpoPxhFuzEBPQ9E16qz5SpHLS+uswaCp8=
+github.com/docker/docker-credential-helpers v0.8.0/go.mod h1:UGFXcuoQ5TxPiB54nHOZ32AWRqQdECoh/Mg0AlEYb40=
 github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
 github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
-github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
 github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
 github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
 github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
@@ -105,20 +111,21 @@ github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.m
 github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
 github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
 github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
-github.com/fatih/color v1.14.1 h1:qfhVLaG5s+nCROl1zJsZRxFeYrHLqWroPOQ8BWiNb4w=
-github.com/fatih/color v1.14.1/go.mod h1:2oHN61fhTpgcxD3TSWCgKDiH1+x4OiDVVGH8WlgGZGg=
+github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
+github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
-github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
+github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
+github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k=
+github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
 github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
-github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
-github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
+github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
 github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
 github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
-github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
-github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
 github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
 github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
-github.com/goharbor/acceleration-service v0.2.12 h1:CImqV61sO0Qne6lqHuBdL5pGhWICqCQkBu7ixItSDVo=
-github.com/goharbor/acceleration-service v0.2.12/go.mod h1:TnuIHC8yUi5u0gUl9+TaW4WT8tOsd4JSjEDg0IJtIzM=
+github.com/goharbor/acceleration-service v0.2.14 h1:VfhahIoWRRWACfMb+520+9MNXIGBUk4QRJHokEUAj8M=
+github.com/goharbor/acceleration-service v0.2.14/go.mod h1:IaoZkVBLwnGpaJ46je7ZD294TBeWaQwFroX/ein2PiE=
 github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
 github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
@@ -126,7 +133,6 @@ github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4er
 github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
 github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
 github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
 github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
 github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
 github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
@@ -135,48 +141,51 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
 github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
 github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
 github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
-github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
+github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
 github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
 github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
 github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
 github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
-github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
+github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
-github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
+github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
 github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
+github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
-github.com/hashicorp/go-hclog v1.5.0 h1:bI2ocEMgcVlz55Oj1xZNBsVi900c7II+fWDyV9o+13c=
-github.com/hashicorp/go-hclog v1.5.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
+github.com/hashicorp/go-hclog v1.6.2 h1:NOtoftovWkDheyUM/8JW3QMiXyxJK3uHRK7wV04nD2I=
+github.com/hashicorp/go-hclog v1.6.2/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
 github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
+github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
 github.com/hashicorp/go-plugin v1.6.0 h1:wgd4KxHJTVGGqWBq4QPB1i5BZNEx9BR8+OFmHDmTk8A=
 github.com/hashicorp/go-plugin v1.6.0/go.mod h1:lBS5MtSSBZk0SHc66KACcjjlU6WzEVP/8pwz68aMkCI=
-github.com/hashicorp/golang-lru/v2 v2.0.5 h1:wW7h1TG88eUIJ2i69gaE3uNVtEPIagzhGvHgwfx2Vm4=
-github.com/hashicorp/golang-lru/v2 v2.0.5/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
+github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
+github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
 github.com/hashicorp/yamux v0.1.1 h1:yrQxtgseBDrq9Y652vSRDvsKCJKOUD+GzTS4Y0Y8pvE=
 github.com/hashicorp/yamux v0.1.1/go.mod h1:CtWFDAQgb7dxtzFs4tWbplKIe2jSi3+5vKbgIO0SLnQ=
 github.com/jhump/protoreflect v1.15.1 h1:HUMERORf3I3ZdX05WaQ6MIpd/NJ434hTp5YiKgfCL6c=
+github.com/jhump/protoreflect v1.15.1/go.mod h1:jD/2GMKKE6OqX8qTjhADU1e6DShO+gavG9e0Q693nKo=
 github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
 github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
 github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
 github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
+github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
 github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
-github.com/klauspost/compress v1.16.3 h1:XuJt9zzcnaz6a16/OU53ZjWp/v7/42WcR5t2a0PcNQY=
-github.com/klauspost/compress v1.16.3/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
+github.com/klauspost/compress v1.17.4 h1:Ej5ixsIri7BrIjBkRZLTo6ghwrEtHFk7ijlczPW4fZ4=
+github.com/klauspost/compress v1.17.4/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
-github.com/klauspost/cpuid/v2 v2.0.9 h1:lgaqFMSdTdQYdZ04uHyN2d/eKdOMyi2YLSvlQIBFYa4=
-github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
+github.com/klauspost/cpuid/v2 v2.2.6 h1:ndNyv040zDGIDh8thGkXYjnFtiN02M1PVVF+JE/48xc=
+github.com/klauspost/cpuid/v2 v2.2.6/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
-github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
 github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
+github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
-github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
+github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
 github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
 github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
 github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
@@ -184,39 +193,34 @@ github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovk
 github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
 github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
 github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
-github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
-github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
+github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
-github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
-github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
 github.com/miekg/pkcs11 v1.1.1 h1:Ugu9pdy6vAYku5DEpVWVFPYnzV+bxB+iRdbuFSu7TvU=
 github.com/miekg/pkcs11 v1.1.1/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
 github.com/mitchellh/go-testing-interface v1.14.1 h1:jrgshOhYAUVNMAJiKbEu7EqAwgJJ2JqpQmpLJOu07cU=
 github.com/mitchellh/go-testing-interface v1.14.1/go.mod h1:gfgS7OtZj6MA4U1UrDRp04twqAjfvlZyCfX3sDjEym8=
+github.com/moby/buildkit v0.13.0 h1:reVR1Y+rbNIUQ9jf0Q1YZVH5a/nhOixZsl+HJ9qQEGI=
+github.com/moby/buildkit v0.13.0/go.mod h1:aNmNQKLBFYAOFuzQjR3VA27/FijlvtBD1pjNwTSN37k=
 github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
 github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
-github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
-github.com/moby/sys/mountinfo v0.6.2 h1:BzJjoreD5BMFNmD9Rus6gdd1pLuecOFPt8wC+Vygl78=
-github.com/moby/sys/mountinfo v0.6.2/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
+github.com/moby/sys/mountinfo v0.7.1 h1:/tTvQaSJRr2FshkhXiIpux6fQ2Zvc4j7tAhMTStAG2g=
+github.com/moby/sys/mountinfo v0.7.1/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
 github.com/moby/sys/sequential v0.5.0 h1:OPvI35Lzn9K04PBbCLW0g4LcFAJgHsvXsRyewg5lXtc=
 github.com/moby/sys/sequential v0.5.0/go.mod h1:tH2cOOs5V9MlPiXcQzRC+eEyab644PWKGRYaaV5ZZlo=
 github.com/moby/sys/signal v0.7.0 h1:25RW3d5TnQEoKvRbEKUGay6DCQ46IxAVTT9CUMgmsSI=
 github.com/moby/sys/signal v0.7.0/go.mod h1:GQ6ObYZfqacOwTtlXvcmh9A26dVRul/hbOZn88Kg8Tg=
+github.com/moby/sys/user v0.1.0 h1:WmZ93f5Ux6het5iituh9x2zAG7NFY9Aqi49jjE1PaQg=
+github.com/moby/sys/user v0.1.0/go.mod h1:fKJhFOnsCN6xZ5gSfbM6zaHGgDJMrqt9/reuj4T7MmU=
-github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
-github.com/nydusaccelerator/containerd v0.0.0-20231121100328-6c4d1f35ac28 h1:MY+OS7zE05AKucljunXAxnDiWesU3/RDdGm5z9oOolw=
-github.com/nydusaccelerator/containerd v0.0.0-20231121100328-6c4d1f35ac28/go.mod h1:0/W44LWEYfSHoxBtsHIiNU/duEkgpMokemafHVCpq9Y=
+github.com/nydusaccelerator/containerd v1.7.18-nydus.10 h1:ir28uQOPtYtFP+gry7sbiwaOHUISC1viPeogTDTff+Q=
+github.com/nydusaccelerator/containerd v1.7.18-nydus.10/go.mod h1:IYEk9/IO6wAPUz2bCMVUbsfXjzw5UNP5fLz4PsUygQ4=
 github.com/oklog/run v1.1.0 h1:GEenZ1cK0+q0+wsJew9qUg/DyD8k3JzYsZAi5gYi2mA=
 github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU=
 github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
 github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
-github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
-github.com/opencontainers/image-spec v1.1.0-rc5 h1:Ygwkfw9bpDvs+c9E34SdgGOj41dX/cbdlwvlWt0pnFI=
-github.com/opencontainers/image-spec v1.1.0-rc5/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8=
+github.com/opencontainers/image-spec v1.1.0 h1:8SG7/vwALn54lVB/0yZ/MMwhFrPYtpEHQb2IpWsCzug=
+github.com/opencontainers/image-spec v1.1.0/go.mod h1:W4s4sFTMaBeK1BQLXbG4AdM2szdn85PY75RI83NrTrM=
-github.com/opencontainers/runc v1.1.5 h1:L44KXEpKmfWDcS02aeGm8QNTFXTo2D+8MYGDIJ/GDEs=
-github.com/opencontainers/runc v1.1.5/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg=
-github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
-github.com/opencontainers/runtime-spec v1.1.0-rc.1 h1:wHa9jroFfKGQqFHj0I1fMRKLl0pfj+ynAqBxo3v6u9w=
-github.com/opencontainers/runtime-spec v1.1.0-rc.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.1.0 h1:HHUyrt9mwHUjtasSbXSMvs4cyFxh+Bll4AjJ9odEGpg=
+github.com/opencontainers/runtime-spec v1.1.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
-github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
 github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU=
 github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
@@ -225,80 +229,75 @@ github.com/pkg/xattr v0.4.9 h1:5883YPCtkSd8LFbs13nXplj9g9tlrwoJRjgpgMu1/fE=
 github.com/pkg/xattr v0.4.9/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q=
-github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY=
+github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU=
+github.com/prometheus/client_golang v1.19.0/go.mod h1:ZRM9uEAypZakd+q/x7+gmsvXdURP+DABIEIjnmDdp+k=
 github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
-github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16 h1:v7DLqVdK4VrYkVD5diGdl4sxJurKJEMnODWRJlxV9oM=
-github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
+github.com/prometheus/client_model v0.6.0 h1:k1v3CzpSRUTrKMppY35TLwPvxHqBu0bYgxZzqGIgaos=
+github.com/prometheus/client_model v0.6.0/go.mod h1:NTQHnmxFpouOD0DpvP4XujX3CdOAGQPoaGhyTchlyt8=
-github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
-github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
+github.com/prometheus/common v0.50.0 h1:YSZE6aa9+luNa2da6/Tik0q0A5AbR+U003TItK57CPQ=
+github.com/prometheus/common v0.50.0/go.mod h1:wHFBCEVWVmHMUpg7pYcOm2QUR/ocQdYSJVQJKnHc3xQ=
-github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
-github.com/prometheus/procfs v0.11.1 h1:xRC8Iq1yyca5ypa9n1EZnWZkt7dwcoRPQwX/5gwaUuI=
-github.com/prometheus/procfs v0.11.1/go.mod h1:eesXgaPo1q7lBpVMoMy0ZOFTth9hBn4W/y0/p/ScXhY=
+github.com/prometheus/procfs v0.13.0 h1:GqzLlQyfsPbaEHaQkO7tbDlriv/4o5Hudv6OXHGKX7o=
+github.com/prometheus/procfs v0.13.0/go.mod h1:cd4PFCR54QLnGKPaKGA6l+cfuNXtht43ZKY6tow0Y1g=
-github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
+github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
+github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
-github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
 github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
-github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
-github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
-github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
 github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
 github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
+github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
-github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 h1:lIOOHPEbXzO3vnmx2gok1Tfs31Q8GQqKLc8vVqyQq/I=
-github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
+github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6 h1:pnnLyeX7o/5aX8qUQ69P/mLojDqwda8hFOCBTmP/6hw=
+github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6/go.mod h1:39R/xuhNgVhi+K0/zst4TLrJrVmbm6LVgl4A0+ZFS5M=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
-github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
 github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
+github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
+github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
-github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
-github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
 github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
 github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
-github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
-github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
+github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
+github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
-github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
-github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
-github.com/urfave/cli v1.22.4/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
-github.com/urfave/cli/v2 v2.25.7 h1:VAzn5oq403l5pHjc4OhD54+XGO9cdKVL/7lDjF+iKUs=
-github.com/urfave/cli/v2 v2.25.7/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
+github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=
+github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
-github.com/vbatts/tar-split v0.11.2 h1:Via6XqJr0hceW4wff3QRzD5gAk/tatMw/4ZA7cTlIME=
-github.com/vbatts/tar-split v0.11.2/go.mod h1:vV3ZuO2yWSVsz+pfFzDG/upWH1JhjOiEaWq6kXyQ3VI=
+github.com/vbatts/tar-split v0.11.5 h1:3bHCTIheBm1qFTcgh9oPu+nNBtX+XJIupG/vacinCts=
+github.com/vbatts/tar-split v0.11.5/go.mod h1:yZbwRsSeGjusneWgA781EKej9HF8vme8okylkAeNKLk=
-github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
-github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
-github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 h1:bAn7/zixMGCfxrRTfdpNzjtPYqr8smhKouy9mxVdGPU=
-github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673/go.mod h1:N3UwUGtsrSj3ccvlPHLoLsHnpR27oXr4ZE984MbSER8=
+github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e h1:+SOyEddqYF09QP7vr7CgJ1eti3pY9Fn3LHO1M1r/0sI=
+github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e/go.mod h1:N3UwUGtsrSj3ccvlPHLoLsHnpR27oXr4ZE984MbSER8=
 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
-go.etcd.io/bbolt v1.3.7 h1:j+zJOnnEjF/kyHlDDgGnVL/AIqIJPq8UoB2GSNfkUfQ=
-go.etcd.io/bbolt v1.3.7/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
+go.etcd.io/bbolt v1.3.10 h1:+BqfJTcCzTItrop8mq/lbzL8wSGtj94UO/3U31shqG0=
+go.etcd.io/bbolt v1.3.10/go.mod h1:bK3UQLPJZly7IlNmV7uVHJDxfe5aK9Ll93e/74Y9oEQ=
-go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1 h1:A/5uWzF44DlIgdm/PQFwfMkW0JX+cIcQi/SwLAmZP5M=
-go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
+go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 h1:CCriYyAfq1Br1aIYettdHZTy8mBTIPo7We18TuO/bak=
+go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
 go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
 go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
-go.opentelemetry.io/otel v1.19.0 h1:MuS/TNf4/j4IXsZuJegVzI1cwut7Qc00344rgH7p8bs=
-go.opentelemetry.io/otel v1.19.0/go.mod h1:i0QyjOq3UPoTzff0PJB2N66fb4S0+rSbSB15/oyH9fY=
-go.opentelemetry.io/otel/metric v1.19.0 h1:aTzpGtV0ar9wlV4Sna9sdJyII5jTVJEvKETPiOKwvpE=
-go.opentelemetry.io/otel/metric v1.19.0/go.mod h1:L5rUsV9kM1IxCj1MmSdS+JQAcVm319EUrDVLrt7jqt8=
-go.opentelemetry.io/otel/trace v1.19.0 h1:DFVQmlVbfVeOuBRrwdtaehRrWiL1JoVs9CPIQ1Dzxpg=
-go.opentelemetry.io/otel/trace v1.19.0/go.mod h1:mfaSyvGyEJEI0nyV2I4qhNQnbBOUUmYZpYojqMnX2vo=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 h1:aFJWCqJMNjENlcleuuOkGAPH82y0yULBScfXcIEdS24=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1/go.mod h1:sEGXWArGqc3tVa+ekntsN65DmVbVeW+7lTKTjZF3/Fo=
+go.opentelemetry.io/otel v1.21.0 h1:hzLeKBZEL7Okw2mGzZ0cc4k/A7Fta0uoPgaJCr8fsFc=
+go.opentelemetry.io/otel v1.21.0/go.mod h1:QZzNPQPm1zLX4gZK4cMi+71eaorMSGT3A4znnUvNNEo=
+go.opentelemetry.io/otel/metric v1.21.0 h1:tlYWfeo+Bocx5kLEloTjbcDwBuELRrIFxwdQ36PlJu4=
+go.opentelemetry.io/otel/metric v1.21.0/go.mod h1:o1p3CA8nNHW8j5yuQLdc1eeqEaPfzug24uvsyIEJRWM=
+go.opentelemetry.io/otel/trace v1.21.0 h1:WD9i5gzvoUPuXIXH24ZNBudiarZDKuekPqi/E8fpfLc=
+go.opentelemetry.io/otel/trace v1.21.0/go.mod h1:LGbsEB0f9LGjN+OZaQQ26sohbOmiMR+BaslueVtS/qQ=
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
-golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
-golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
+golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
+golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
+golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
 golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
 golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
 golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.11.0 h1:bUO06HqtnRcc/7l71XBe4WcqTZ+3AH1J59zWDDwLKgU= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.11.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@ -308,64 +307,72 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.5.0 h1:60k92dhOjHxJkrqnwsfl8KuaHbn/5dl0lUPUklKo3qE= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.5.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc= golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.10.0 h1:tvDr/iQoUqNdohiYm0LmmKcBk+q86lb9EprIUFhHHGg= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.10.0/go.mod h1:UJwyiVBsOA2uwvK/e5OY3GTpDUJriEd+/YlqAwLPmyM= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@ -374,20 +381,18 @@ google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9Ywl
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20231012201019-e917dd12ba7a h1:fwgW9j3vHirt4ObdHoYNwuO24BEZjSzbh+zPaNWoiY8= google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 h1:KAeGQVN3M9nD0/bQXnr/ClcEMJ968gUXJQ9pwfSynuQ=
google.golang.org/genproto v0.0.0-20231012201019-e917dd12ba7a/go.mod h1:EMfReVxb80Dq1hhioy0sOsY9jCE46YDgHlJ7fWVUWRE= google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80/go.mod h1:cc8bqMqtv9gMOr0zHg2Vzff5ULhhL2IXP4sbcn32Dro=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b h1:ZlWIi1wSK56/8hn4QcBp/j9M7Gt3U/3hZw3mC7vDICo= google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 h1:AjyfHzEPEFp/NpvfN5g+KDla3EMojjhRVZc1i7cj+oM=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231016165738-49dd2c1f3d0b/go.mod h1:swOH3j0KzcDDgGUWr+SNpyTen5YrXjS3eyPzFYKc6lc= google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80/go.mod h1:PAREbraiVEVGVdTZsVWjSbbTtSyGbAgIIvni8a8CD5s=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc= google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.59.0 h1:Z5Iec2pjwb+LEOqzpB2MR12/eKFhDPhuqW91O+4bwUk= google.golang.org/grpc v1.62.1 h1:B4n+nfKzOICUXMgyrNd19h/I9oH0L1pizfk1d4zSgTk=
google.golang.org/grpc v1.59.0/go.mod h1:aUPDwccQo6OTjy7Hct4AfBPD1GptF4fyUjIkQ9YtF98= google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@ -398,23 +403,21 @@ google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8= google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/square/go-jose.v2 v2.5.1 h1:7odma5RETjNHWJnR32wx8t+Io4djHE1PqxCFx3iiZ2w= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo= gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o= gotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o=
gotest.tools/v3 v3.4.0/go.mod h1:CtbdzLSsqVhDgMtKsx03ird5YTGB3ar27v0u/yKBW5g=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
lukechampine.com/blake3 v1.2.1 h1:YuqqRuaqsGV71BV/nm9xlI0MKUv4QC54jQnBChWbGnI= lukechampine.com/blake3 v1.2.1 h1:YuqqRuaqsGV71BV/nm9xlI0MKUv4QC54jQnBChWbGnI=

View File

@@ -9,8 +9,9 @@ import (
 	"fmt"
 	"io"
 
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/remote"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
+	"github.com/containerd/containerd/remotes"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
 	"github.com/opencontainers/go-digest"
 	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
 )
@@ -18,7 +19,7 @@ import (
 // Backend transfers artifacts generated during image conversion to a backend storage such as:
 // 1. registry: complying to OCI distribution specification, push blob file
 // to registry and use the registry as a storage.
-// 2. oss: A object storage backend, which uses its SDK to transer blob file.
+// 2. oss: A object storage backend, which uses its SDK to transfer blob file.
 type Backend interface {
 	// TODO: Hopefully, we can pass `Layer` struct in, thus to be able to cook both
 	// file handle and file path.
@@ -27,6 +28,7 @@ type Backend interface {
 	Check(blobID string) (bool, error)
 	Type() Type
 	Reader(blobID string) (io.ReadCloser, error)
+	RangeReader(blobID string) (remotes.RangeReadCloser, error)
 	Size(blobID string) (int64, error)
 }
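The `RangeReader` addition above is the only change to the `Backend` contract: implementations now hand back a lazy handle whose `Reader(offset, size)` method opens an `io.ReadCloser` over a byte window of a blob, which is what lets callers fetch blob slices without downloading whole objects. A minimal in-memory implementation of that method shape (purely illustrative; `memRangeReader` is not part of this change):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// memRangeReader is a toy stand-in for the range readers added in this
// change: it serves byte windows [offset, offset+size) of an in-memory blob.
type memRangeReader struct {
	blob []byte
}

// Reader mirrors the method shape implemented by the OSS and S3 backends
// below: open a reader over `size` bytes starting at `offset`.
func (m *memRangeReader) Reader(offset, size int64) (io.ReadCloser, error) {
	if offset < 0 || size < 0 || offset+size > int64(len(m.blob)) {
		return nil, fmt.Errorf("range [%d, %d) out of bounds", offset, offset+size)
	}
	return io.NopCloser(bytes.NewReader(m.blob[offset : offset+size])), nil
}

func main() {
	rr := &memRangeReader{blob: []byte("0123456789")}
	rc, err := rr.Reader(2, 4)
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	window, _ := io.ReadAll(rc)
	fmt.Printf("%s\n", window) // Output: 2345
}
```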

View File

@@ -0,0 +1,66 @@
// Copyright 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package backend
import (
"encoding/json"
"testing"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
"github.com/stretchr/testify/require"
)
func TestBlobDesc(t *testing.T) {
desc := blobDesc(123456, "205eed24cbec29ad9cb4593a73168ef1803402370a82f7d51ce25646fc2f943a")
require.Equal(t, int64(123456), desc.Size)
require.Equal(t, "sha256:205eed24cbec29ad9cb4593a73168ef1803402370a82f7d51ce25646fc2f943a", desc.Digest.String())
require.Equal(t, utils.MediaTypeNydusBlob, desc.MediaType)
require.Equal(t, map[string]string{
utils.LayerAnnotationUncompressed: "sha256:205eed24cbec29ad9cb4593a73168ef1803402370a82f7d51ce25646fc2f943a",
utils.LayerAnnotationNydusBlob: "true",
}, desc.Annotations)
}
func TestNewBackend(t *testing.T) {
ossConfigJSON := `
{
"bucket_name": "test",
"endpoint": "region.oss.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob"
}`
require.True(t, json.Valid([]byte(ossConfigJSON)))
backend, err := NewBackend("oss", []byte(ossConfigJSON), nil)
require.NoError(t, err)
require.Equal(t, OssBackend, backend.Type())
s3ConfigJSON := `
{
"bucket_name": "test",
"endpoint": "s3.amazonaws.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob",
"scheme": "https",
"region": "region1"
}`
require.True(t, json.Valid([]byte(s3ConfigJSON)))
backend, err = NewBackend("s3", []byte(s3ConfigJSON), nil)
require.NoError(t, err)
require.Equal(t, S3backend, backend.Type())
testRegistryRemote, err := provider.DefaultRemote("test", false)
require.NoError(t, err)
backend, err = NewBackend("registry", nil, testRegistryRemote)
require.NoError(t, err)
require.Equal(t, RegistryBackend, backend.Type())
backend, err = NewBackend("errBackend", nil, testRegistryRemote)
require.Error(t, err)
require.Contains(t, err.Error(), "unsupported backend type")
require.Nil(t, backend)
}

View File

@@ -17,6 +17,7 @@ import (
 	"time"
 
 	"github.com/aliyun/aliyun-oss-go-sdk/oss"
+	"github.com/containerd/containerd/remotes"
 	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
@@ -109,7 +110,7 @@ func calcCrc64ECMA(path string) (uint64, error) {
 // Upload blob as image layer to oss backend and verify
 // integrity by calculate CRC64.
-func (b *OSSBackend) Upload(ctx context.Context, blobID, blobPath string, size int64, forcePush bool) (*ocispec.Descriptor, error) {
+func (b *OSSBackend) Upload(_ context.Context, blobID, blobPath string, size int64, forcePush bool) (*ocispec.Descriptor, error) {
 	blobObjectKey := b.objectPrefix + blobID
 	desc := blobDesc(size, blobID)
@@ -259,6 +260,20 @@ func (b *OSSBackend) Type() Type {
 	return OssBackend
 }
 
+type RangeReader struct {
+	b      *OSSBackend
+	blobID string
+}
+
+func (rr *RangeReader) Reader(offset int64, size int64) (io.ReadCloser, error) {
+	return rr.b.bucket.GetObject(rr.blobID, oss.Range(offset, offset+size-1))
+}
+
+func (b *OSSBackend) RangeReader(blobID string) (remotes.RangeReadCloser, error) {
+	blobID = b.objectPrefix + blobID
+	return &RangeReader{b: b, blobID: blobID}, nil
+}
+
 func (b *OSSBackend) Reader(blobID string) (io.ReadCloser, error) {
 	blobID = b.objectPrefix + blobID
 	rc, err := b.bucket.GetObject(blobID)
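One detail worth pausing on in the OSS implementation: `oss.Range(offset, offset+size-1)` converts the half-open window `[offset, offset+size)` into the inclusive byte range that HTTP `Range` headers use, so requesting 4096 bytes at offset 0 yields `bytes=0-4095`. A standalone sketch of the same arithmetic (the helper name is hypothetical):

```go
package main

import "fmt"

// rangeHeader maps a half-open window [offset, offset+size) onto the
// inclusive byte range used by HTTP Range headers; the S3 backend further
// down builds exactly this string by hand.
func rangeHeader(offset, size int64) string {
	return fmt.Sprintf("bytes=%d-%d", offset, offset+size-1)
}

func main() {
	fmt.Println(rangeHeader(0, 4096)) // bytes=0-4095
	fmt.Println(rangeHeader(4096, 1)) // bytes=4096-4096
}
```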

View File

@@ -0,0 +1,137 @@
// Copyright 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package backend
import (
"encoding/json"
"hash/crc64"
"os"
"testing"
"github.com/stretchr/testify/require"
)
func tempOSSBackend() *OSSBackend {
ossConfigJSON := `
{
"bucket_name": "test",
"endpoint": "region.oss.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob"
}`
backend, _ := newOSSBackend([]byte(ossConfigJSON))
return backend
}
func TestCalcCrc64ECMA(t *testing.T) {
blobCrc64, err := calcCrc64ECMA("nil")
require.Error(t, err)
require.Contains(t, err.Error(), "calc md5sum")
require.Zero(t, blobCrc64)
file, err := os.CreateTemp("", "temp")
require.NoError(t, err)
defer os.RemoveAll(file.Name())
_, err = file.WriteString("123")
require.NoError(t, err)
file.Sync()
blobCrc64, err = calcCrc64ECMA(file.Name())
require.NoError(t, err)
require.Equal(t, crc64.Checksum([]byte("123"), crc64.MakeTable(crc64.ECMA)), blobCrc64)
}
func TestOSSRemoteID(t *testing.T) {
ossBackend := tempOSSBackend()
id := ossBackend.remoteID("111")
require.Equal(t, "oss://test/blob111", id)
}
func TestNewOSSBackend(t *testing.T) {
ossConfigJSON1 := `
{
"bucket_name": "test",
"endpoint": "region.oss.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob"
}`
require.True(t, json.Valid([]byte(ossConfigJSON1)))
backend, err := newOSSBackend([]byte(ossConfigJSON1))
require.NoError(t, err)
require.Equal(t, "test", backend.bucket.BucketName)
require.Equal(t, "blob", backend.objectPrefix)
ossConfigJSON2 := `
{
"bucket_name": "test",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob"
}`
require.True(t, json.Valid([]byte(ossConfigJSON2)))
backend, err = newOSSBackend([]byte(ossConfigJSON2))
require.Error(t, err)
require.Contains(t, err.Error(), "invalid OSS configuration: missing 'endpoint' or 'bucket'")
require.Nil(t, backend)
ossConfigJSON3 := `
{
"bucket_name": "test",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob"
}`
require.True(t, json.Valid([]byte(ossConfigJSON3)))
backend, err = newOSSBackend([]byte(ossConfigJSON3))
require.Error(t, err)
require.Contains(t, err.Error(), "invalid OSS configuration: missing 'endpoint' or 'bucket'")
require.Nil(t, backend)
ossConfigJSON4 := `
{
"bucket_name": "t",
"endpoint": "region.oss.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob"
}`
require.True(t, json.Valid([]byte(ossConfigJSON4)))
backend, err = newOSSBackend([]byte(ossConfigJSON4))
require.Error(t, err)
require.Contains(t, err.Error(), "Create bucket")
require.Contains(t, err.Error(), "len is between [3-63],now is")
require.Nil(t, backend)
ossConfigJSON5 := `
{
"bucket_name": "AAA",
"endpoint": "region.oss.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob"
}`
require.True(t, json.Valid([]byte(ossConfigJSON5)))
backend, err = newOSSBackend([]byte(ossConfigJSON5))
require.Error(t, err)
require.Contains(t, err.Error(), "Create bucket")
require.Contains(t, err.Error(), "can only include lowercase letters, numbers, and -")
require.Nil(t, backend)
ossConfigJSON6 := `
{
"bucket_name": "AAA",
"endpoint": "region.oss.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob",
}`
backend, err = newOSSBackend([]byte(ossConfigJSON6))
require.Error(t, err)
require.Contains(t, err.Error(), "Parse OSS storage backend configuration")
require.Nil(t, backend)
}
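`calcCrc64ECMA` itself is not shown in this diff, but the test above pins down its contract: the checksum of a file must equal `crc64.Checksum` with the ECMA table over the same bytes (the `"calc md5sum"` string in the error assertion looks like a leftover from an earlier MD5-based message). A minimal streaming implementation satisfying the test, using only the standard library, might be:

```go
package main

import (
	"fmt"
	"hash/crc64"
	"io"
	"os"
)

// fileCrc64ECMA streams a file through a CRC64 hash built from the ECMA
// polynomial; for any file this matches crc64.Checksum over its contents.
func fileCrc64ECMA(path string) (uint64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	h := crc64.New(crc64.MakeTable(crc64.ECMA))
	if _, err := io.Copy(h, f); err != nil {
		return 0, err
	}
	return h.Sum64(), nil
}

func main() {
	sum, err := fileCrc64ECMA("go.sum")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("crc64(ECMA) = %016x\n", sum)
}
```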

View File

@@ -5,7 +5,8 @@ import (
 	"io"
 	"os"
 
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/remote"
+	"github.com/containerd/containerd/remotes"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
 	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
 )
@@ -15,7 +16,7 @@ type Registry struct {
 }
 
 func (r *Registry) Upload(
-	ctx context.Context, blobID, blobPath string, size int64, forcePush bool,
+	ctx context.Context, blobID, blobPath string, size int64, _ bool,
 ) (*ocispec.Descriptor, error) {
 	// The `forcePush` option is useless for registry backend, because
 	// the blob existed in registry can't be pushed again.
@@ -35,11 +36,11 @@ func (r *Registry) Upload(
 	return &desc, nil
 }
 
-func (r *Registry) Finalize(cancel bool) error {
+func (r *Registry) Finalize(_ bool) error {
 	return nil
 }
 
-func (r *Registry) Check(blobID string) (bool, error) {
+func (r *Registry) Check(_ string) (bool, error) {
 	return true, nil
 }
@@ -47,14 +48,18 @@ func (r *Registry) Type() Type {
 	return RegistryBackend
 }
 
-func (r *Registry) Reader(blobID string) (io.ReadCloser, error) {
+func (r *Registry) RangeReader(_ string) (remotes.RangeReadCloser, error) {
 	panic("not implemented")
 }
 
-func (r *Registry) Size(blobID string) (int64, error) {
+func (r *Registry) Reader(_ string) (io.ReadCloser, error) {
 	panic("not implemented")
 }
 
-func newRegistryBackend(rawConfig []byte, remote *remote.Remote) (Backend, error) {
+func (r *Registry) Size(_ string) (int64, error) {
+	panic("not implemented")
+}
+
+func newRegistryBackend(_ []byte, remote *remote.Remote) (Backend, error) {
 	return &Registry{remote: remote}, nil
 }

View File

@@ -22,6 +22,7 @@ import (
 	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
 	"github.com/aws/aws-sdk-go-v2/service/s3"
 	"github.com/aws/aws-sdk-go-v2/service/s3/types"
+	"github.com/containerd/containerd/remotes"
 	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
@@ -160,6 +161,25 @@ func (b *S3Backend) blobObjectKey(blobID string) string {
 	return b.objectPrefix + blobID
 }
 
+type rangeReader struct {
+	b         *S3Backend
+	objectKey string
+}
+
+func (rr *rangeReader) Reader(offset int64, size int64) (io.ReadCloser, error) {
+	output, err := rr.b.client.GetObject(context.TODO(), &s3.GetObjectInput{
+		Bucket: &rr.b.bucketName,
+		Key:    &rr.objectKey,
+		Range:  aws.String(fmt.Sprintf("bytes=%d-%d", offset, offset+size-1)),
+	})
+	return output.Body, err
+}
+
+func (b *S3Backend) RangeReader(blobID string) (remotes.RangeReadCloser, error) {
+	objectKey := b.blobObjectKey(blobID)
+	return &rangeReader{b: b, objectKey: objectKey}, nil
+}
+
 func (b *S3Backend) Reader(blobID string) (io.ReadCloser, error) {
 	objectKey := b.blobObjectKey(blobID)
 	output, err := b.client.GetObject(context.TODO(), &s3.GetObjectInput{
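Tying the pieces together, a caller reads an arbitrary blob slice by first asking the backend for a range handle and only then opening windows; nothing is fetched until `Reader` is called. A hedged in-package usage sketch (`readBlobRange` is a hypothetical helper, not part of this change; it assumes the package's existing `io` and `github.com/pkg/errors` imports):

```go
// readBlobRange fetches bytes [offset, offset+size) of a blob through the
// Backend interface extended in this PR. Illustrative helper only.
func readBlobRange(b Backend, blobID string, offset, size int64) ([]byte, error) {
	rr, err := b.RangeReader(blobID)
	if err != nil {
		return nil, errors.Wrap(err, "open range reader")
	}
	rc, err := rr.Reader(offset, size)
	if err != nil {
		return nil, errors.Wrap(err, "request byte range")
	}
	defer rc.Close()
	return io.ReadAll(rc)
}
```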

View File

@@ -0,0 +1,119 @@
// Copyright 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package backend
import (
"context"
"encoding/json"
"testing"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/stretchr/testify/require"
)
func tempS3Backend() *S3Backend {
s3ConfigJSON := `
{
"bucket_name": "test",
"endpoint": "s3.amazonaws.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob",
"scheme": "https",
"region": "region1"
}`
backend, _ := newS3Backend([]byte(s3ConfigJSON))
return backend
}
func TestS3RemoteID(t *testing.T) {
s3Backend := tempS3Backend()
id := s3Backend.remoteID("111")
require.Equal(t, "https://s3.amazonaws.com/test/111", id)
}
func TestBlobObjectKey(t *testing.T) {
s3Backend := tempS3Backend()
blobObjectKey := s3Backend.blobObjectKey("111")
require.Equal(t, "blob111", blobObjectKey)
}
func TestNewS3Backend(t *testing.T) {
s3ConfigJSON1 := `
{
"bucket_name": "test",
"endpoint": "s3.amazonaws.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob",
"scheme": "https",
"region": "region1"
}`
require.True(t, json.Valid([]byte(s3ConfigJSON1)))
backend, err := newS3Backend([]byte(s3ConfigJSON1))
require.NoError(t, err)
require.Equal(t, "blob", backend.objectPrefix)
require.Equal(t, "test", backend.bucketName)
require.Equal(t, "https://s3.amazonaws.com", backend.endpointWithScheme)
require.Equal(t, "https://s3.amazonaws.com", *backend.client.Options().BaseEndpoint)
testCredentials, err := backend.client.Options().Credentials.Retrieve(context.Background())
require.NoError(t, err)
realCredentials, err := credentials.NewStaticCredentialsProvider("testAK", "testSK", "").Retrieve(context.Background())
require.NoError(t, err)
require.Equal(t, testCredentials, realCredentials)
s3ConfigJSON2 := `
{
"bucket_name": "test",
"endpoint": "s3.amazonaws.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob",
"scheme": "https",
"region": "region1",
}`
backend, err = newS3Backend([]byte(s3ConfigJSON2))
require.Error(t, err)
require.Contains(t, err.Error(), "parse S3 storage backend configuration")
require.Nil(t, backend)
s3ConfigJSON3 := `
{
"bucket_name": "test",
"endpoint": "",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob",
"scheme": "",
"region": "region1"
}`
require.True(t, json.Valid([]byte(s3ConfigJSON3)))
backend, err = newS3Backend([]byte(s3ConfigJSON3))
require.NoError(t, err)
require.Equal(t, "blob", backend.objectPrefix)
require.Equal(t, "test", backend.bucketName)
require.Equal(t, "https://s3.amazonaws.com", backend.endpointWithScheme)
testCredentials, err = backend.client.Options().Credentials.Retrieve(context.Background())
require.NoError(t, err)
realCredentials, err = credentials.NewStaticCredentialsProvider("testAK", "testSK", "").Retrieve(context.Background())
require.NoError(t, err)
require.Equal(t, testCredentials, realCredentials)
s3ConfigJSON4 := `
{
"bucket_name": "",
"endpoint": "s3.amazonaws.com",
"access_key_id": "testAK",
"access_key_secret": "testSK",
"object_prefix": "blob",
"scheme": "https",
"region": ""
}`
require.True(t, json.Valid([]byte(s3ConfigJSON4)))
backend, err = newS3Backend([]byte(s3ConfigJSON4))
require.Error(t, err)
require.Contains(t, err.Error(), "invalid S3 configuration: missing 'bucket_name' or 'region'")
require.Nil(t, backend)
}

View File

@@ -38,11 +38,19 @@ type CompactOption struct {
 	BackendType       string
 	BackendConfigPath string
 	OutputJSONPath    string
-	CompactConfigPath string
+
+	MinUsedRatio    string
+	CompactBlobSize string
+	MaxCompactSize  string
+	LayersToCompact string
+	BlobsDir        string
 }
 
-type SaveOption struct {
-	BootstrapPath string
+type GenerateOption struct {
+	BootstrapPaths         []string
+	DatabasePath           string
+	ChunkdictBootstrapPath string
+	OutputPath             string
 }
 
 type Builder struct {
@@ -79,7 +87,11 @@ func (builder *Builder) Compact(option CompactOption) error {
 	args := []string{
 		"compact",
 		"--bootstrap", option.BootstrapPath,
-		"--config", option.CompactConfigPath,
+		"--blob-dir", option.BlobsDir,
+		"--min-used-ratio", option.MinUsedRatio,
+		"--compact-blob-size", option.CompactBlobSize,
+		"--max-compact-size", option.MaxCompactSize,
+		"--layers-to-compact", option.LayersToCompact,
 		"--backend-type", option.BackendType,
 		"--backend-config-file", option.BackendConfigPath,
 		"--log-level", "info",
@@ -148,15 +160,22 @@ func (builder *Builder) Run(option BuilderOption) error {
 	return builder.run(args, option.PrefetchPatterns)
 }
 
-// Save calls `nydus-image chunkdict save` to parse Nydus bootstrap
-func (builder *Builder) Save(option SaveOption) error {
+// Generate calls `nydus-image chunkdict generate` to get chunkdict
+func (builder *Builder) Generate(option GenerateOption) error {
+	logrus.Infof("Invoking 'nydus-image chunkdict generate' command")
+
 	args := []string{
 		"chunkdict",
-		"save",
+		"generate",
 		"--log-level",
 		"warn",
 		"--bootstrap",
-		option.BootstrapPath,
+		option.ChunkdictBootstrapPath,
+		"--database",
+		option.DatabasePath,
+		"--output-json",
+		option.OutputPath,
 	}
+	args = append(args, option.BootstrapPaths...)
 
 	return builder.run(args, "")
 }
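Callers of the build package see the new surface directly: compact tuning knobs move from an external config file onto `CompactOption`, and the old single-bootstrap `Save` becomes a many-bootstraps-in, one-dictionary-out `Generate`. A sketch of driving both (the `builder` value and all paths are placeholders):

```go
// Compact a bootstrap with inline tuning values instead of --config.
if err := builder.Compact(CompactOption{
	BootstrapPath:     "/tmp/bootstrap",
	BlobsDir:          "/tmp/blobs",
	MinUsedRatio:      "5",
	CompactBlobSize:   "10485760",
	MaxCompactSize:    "104857600",
	LayersToCompact:   "32",
	BackendType:       "oss",
	BackendConfigPath: "/tmp/oss.json",
	OutputJSONPath:    "/tmp/compact-output.json",
}); err != nil {
	return err
}

// Generate one chunk dictionary from several images' bootstraps.
if err := builder.Generate(GenerateOption{
	BootstrapPaths:         []string{"/tmp/img1/bootstrap", "/tmp/img2/bootstrap"},
	ChunkdictBootstrapPath: "/tmp/chunkdict/bootstrap",
	DatabasePath:           "/tmp/chunkdict/database.db",
	OutputPath:             "/tmp/chunkdict/output.json",
}); err != nil {
	return err
}
```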

View File

@@ -150,7 +150,7 @@ func (workflow *Workflow) Build(
 	// Rename the newly generated blob to its sha256 digest.
 	// Because the flow will use the basename as the blob object to be pushed to registry.
-	// When `digestedBlobPath` is void, this layer's bootsrap can be pushed meanwhile not for blob
+	// When `digestedBlobPath` is void, this layer's bootstrap can be pushed meanwhile not for blob
 	if digestedBlobPath != "" {
 		err = os.Rename(blobPath, digestedBlobPath)
 		// It's possible that two blobs that are built with the same digest.

View File

@@ -12,13 +12,13 @@ import (
 	"io"
 	"strconv"
 
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/backend"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/remote"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/backend"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
 	"github.com/sirupsen/logrus"
 
 	"github.com/containerd/containerd/images"
-	digest "github.com/opencontainers/go-digest"
+	"github.com/opencontainers/go-digest"
 	"github.com/opencontainers/image-spec/specs-go"
 	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
@@ -295,23 +295,23 @@ func (cache *Cache) layerToRecord(layer *ocispec.Descriptor) *Record {
 	return nil
 }
 
-func mergeRecord(old, new *Record) *Record {
-	if old == nil {
-		old = &Record{
-			SourceChainID: new.SourceChainID,
+func mergeRecord(oldRec, newRec *Record) *Record {
+	if oldRec == nil {
+		oldRec = &Record{
+			SourceChainID: newRec.SourceChainID,
 		}
 	}
-	if new.NydusBootstrapDesc != nil {
-		old.NydusBootstrapDesc = new.NydusBootstrapDesc
-		old.NydusBootstrapDiffID = new.NydusBootstrapDiffID
+	if newRec.NydusBootstrapDesc != nil {
+		oldRec.NydusBootstrapDesc = newRec.NydusBootstrapDesc
+		oldRec.NydusBootstrapDiffID = newRec.NydusBootstrapDiffID
 	}
-	if new.NydusBlobDesc != nil {
-		old.NydusBlobDesc = new.NydusBlobDesc
+	if newRec.NydusBlobDesc != nil {
+		oldRec.NydusBlobDesc = newRec.NydusBlobDesc
 	}
-	return old
+	return oldRec
 }
 
 func (cache *Cache) importRecordsFromLayers(layers []ocispec.Descriptor) {

View File

@@ -13,8 +13,8 @@ import (
 	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/stretchr/testify/assert"
 
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/backend"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/backend"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
 )
 
 func makeRecord(id int64, hashBlob bool) *Record {

View File

@@ -12,44 +12,45 @@ import (
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
 
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker/rule"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker/tool"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/provider"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/remote"
-	"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/rule"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
+	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
 )
 
 // Opt defines Checker options.
-// Note: target is the Nydus image reference.
+// Note: target is the nydus image reference.
 type Opt struct {
 	WorkDir string
+
 	Source         string
 	Target         string
 	SourceInsecure bool
 	TargetInsecure bool
+
+	SourceBackendType   string
+	SourceBackendConfig string
+	TargetBackendType   string
+	TargetBackendConfig string
+
 	MultiPlatform  bool
 	NydusImagePath string
 	NydusdPath     string
-	BackendType    string
-	BackendConfig  string
 	ExpectedArch   string
 }
 
-// Checker validates Nydus image manifest, bootstrap and mounts filesystem
-// by Nydusd to compare file metadata and data with OCI image.
+// Checker validates nydus image manifest, bootstrap and mounts filesystem
+// by nydusd to compare file metadata and data between OCI / nydus image.
 type Checker struct {
 	Opt
 	sourceParser *parser.Parser
 	targetParser *parser.Parser
 }
 
-// New creates Checker instance, target is the Nydus image reference.
+// New creates Checker instance, target is the nydus image reference.
 func New(opt Opt) (*Checker, error) {
-	// TODO: support source and target resolver
 	targetRemote, err := provider.DefaultRemote(opt.Target, opt.TargetInsecure)
 	if err != nil {
-		return nil, errors.Wrap(err, "Init target image parser")
+		return nil, errors.Wrap(err, "init target image parser")
 	}
 	targetParser, err := parser.New(targetRemote, opt.ExpectedArch)
 	if err != nil {
@@ -63,7 +64,7 @@ func New(opt Opt) (*Checker, error) {
 			return nil, errors.Wrap(err, "Init source image parser")
 		}
 		sourceParser, err = parser.New(sourceRemote, opt.ExpectedArch)
-		if sourceParser == nil {
+		if err != nil {
 			return nil, errors.Wrap(err, "failed to create parser")
 		}
 	}
@@ -77,7 +78,7 @@ func New(opt Opt) (*Checker, error) {
 	return checker, nil
 }
 
-// Check checks Nydus image, and outputs image information to work
+// Check checks nydus image, and outputs image information to work
 // directory, the check workflow is composed of various rules.
 func (checker *Checker) Check(ctx context.Context) error {
 	if err := checker.check(ctx); err != nil {
@@ -93,12 +94,13 @@ func (checker *Checker) Check(ctx context.Context) error {
 	return nil
 }
 
-// Check checks Nydus image, and outputs image information to work
+// Check checks nydus image, and outputs image information to work
 // directory, the check workflow is composed of various rules.
 func (checker *Checker) check(ctx context.Context) error {
+	logrus.WithField("image", checker.targetParser.Remote.Ref).Infof("parsing image")
 	targetParsed, err := checker.targetParser.Parse(ctx)
 	if err != nil {
-		return errors.Wrap(err, "parse Nydus image")
+		return errors.Wrap(err, "parse nydus image")
 	}
 
 	var sourceParsed *parser.Parsed
@@ -107,88 +109,66 @@ func (checker *Checker) check(ctx context.Context) error {
 		if err != nil {
 			return errors.Wrap(err, "parse source image")
 		}
-	} else {
-		sourceParsed = targetParsed
 	}
 
 	if err := os.RemoveAll(checker.WorkDir); err != nil {
 		return errors.Wrap(err, "clean up work directory")
 	}
 
-	if err := os.MkdirAll(filepath.Join(checker.WorkDir, "fs"), 0755); err != nil {
-		return errors.Wrap(err, "create work directory")
-	}
-
-	if err := checker.Output(ctx, sourceParsed, targetParsed, checker.WorkDir); err != nil {
-		return errors.Wrap(err, "output image information")
-	}
-
-	mode := "direct"
-	digestValidate := false
-	if targetParsed.NydusImage != nil {
-		nydusManifest := parser.FindNydusBootstrapDesc(&targetParsed.NydusImage.Manifest)
-		if nydusManifest != nil {
-			v := utils.GetNydusFsVersionOrDefault(nydusManifest.Annotations, utils.V5)
-			if v == utils.V5 {
-				// Digest validate is not currently supported for v6,
-				// but v5 supports it. In order to make the check more sufficient,
-				// this validate needs to be turned on for v5.
-				digestValidate = true
-			}
-		}
-	}
-
-	var sourceRemote *remote.Remote
-	if checker.sourceParser != nil {
-		sourceRemote = checker.sourceParser.Remote
-	}
+	if sourceParsed != nil {
+		if err := checker.Output(ctx, sourceParsed, filepath.Join(checker.WorkDir, "source")); err != nil {
+			return errors.Wrapf(err, "output image information: %s", sourceParsed.Remote.Ref)
+		}
+	}
+
+	if targetParsed != nil {
+		if err := checker.Output(ctx, targetParsed, filepath.Join(checker.WorkDir, "target")); err != nil {
+			return errors.Wrapf(err, "output image information: %s", targetParsed.Remote.Ref)
+		}
+	}
 
 	rules := []rule.Rule{
 		&rule.ManifestRule{
 			SourceParsed: sourceParsed,
 			TargetParsed: targetParsed,
-			MultiPlatform: checker.MultiPlatform,
-			BackendType:   checker.BackendType,
-			ExpectedArch:  checker.ExpectedArch,
 		},
 		&rule.BootstrapRule{
-			Parsed:          targetParsed,
+			WorkDir:        checker.WorkDir,
 			NydusImagePath: checker.NydusImagePath,
-			BackendType:     checker.BackendType,
-			BootstrapPath:   filepath.Join(checker.WorkDir, "nydus_bootstrap"),
-			DebugOutputPath: filepath.Join(checker.WorkDir, "nydus_bootstrap_debug.json"),
+
+			SourceParsed:        sourceParsed,
+			TargetParsed:        targetParsed,
+			SourceBackendType:   checker.SourceBackendType,
+			SourceBackendConfig: checker.SourceBackendConfig,
+			TargetBackendType:   checker.TargetBackendType,
+			TargetBackendConfig: checker.TargetBackendConfig,
 		},
 		&rule.FilesystemRule{
-			Source:          checker.Source,
-			SourceMountPath: filepath.Join(checker.WorkDir, "fs/source_mounted"),
-			SourceParsed:    sourceParsed,
-			SourcePath:      filepath.Join(checker.WorkDir, "fs/source"),
-			SourceRemote:    sourceRemote,
-			Target:          checker.Target,
-			TargetInsecure:  checker.TargetInsecure,
-			PlainHTTP:       checker.targetParser.Remote.IsWithHTTP(),
-			NydusdConfig: tool.NydusdConfig{
-				NydusdPath:     checker.NydusdPath,
-				BackendType:    checker.BackendType,
-				BackendConfig:  checker.BackendConfig,
-				BootstrapPath:  filepath.Join(checker.WorkDir, "nydus_bootstrap"),
-				ConfigPath:     filepath.Join(checker.WorkDir, "fs/nydusd_config.json"),
-				BlobCacheDir:   filepath.Join(checker.WorkDir, "fs/nydus_blobs"),
-				MountPath:      filepath.Join(checker.WorkDir, "fs/nydus_mounted"),
-				APISockPath:    filepath.Join(checker.WorkDir, "fs/nydus_api.sock"),
-				Mode:           mode,
-				DigestValidate: digestValidate,
-			},
+			WorkDir:    checker.WorkDir,
+			NydusdPath: checker.NydusdPath,
+
+			SourceImage: &rule.Image{
+				Parsed:   sourceParsed,
+				Insecure: checker.SourceInsecure,
+			},
+			TargetImage: &rule.Image{
+				Parsed:   targetParsed,
+				Insecure: checker.TargetInsecure,
+			},
+			SourceBackendType:   checker.SourceBackendType,
+			SourceBackendConfig: checker.SourceBackendConfig,
+			TargetBackendType:   checker.TargetBackendType,
+			TargetBackendConfig: checker.TargetBackendConfig,
 		},
 	}
 
 	for _, rule := range rules {
 		if err := rule.Validate(); err != nil {
-			return errors.Wrapf(err, "validate rule %s", rule.Name())
+			return errors.Wrapf(err, "validate %s failed", rule.Name())
		}
 	}
 
-	logrus.Infof("Verified Nydus image %s", checker.targetParser.Remote.Ref)
+	logrus.Info("verified image")
 
 	return nil
 }
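The upshot for checker users: the shared `BackendType`/`BackendConfig` pair is gone, and source and target each carry their own backend settings, with `Output` now writing per-image `source/` and `target/` subdirectories under the work directory. A hedged construction sketch (all field values are placeholders, and the calling context is assumed):

```go
// Illustrative wiring of the reworked Opt; source and target now carry
// independent backend settings instead of one shared pair.
c, err := checker.New(checker.Opt{
	WorkDir:        "./tmp",
	Source:         "localhost:5000/ubuntu:latest",
	Target:         "localhost:5000/ubuntu:latest-nydus",
	SourceInsecure: true,
	TargetInsecure: true,

	TargetBackendType:   "oss",
	TargetBackendConfig: `{"bucket_name":"test","endpoint":"region.oss.com"}`,

	NydusImagePath: "/path/to/nydus-image",
	NydusdPath:     "/path/to/nydusd",
	ExpectedArch:   "amd64",
})
if err != nil {
	return err
}
if err := c.Check(context.Background()); err != nil {
	return err
}
```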

View File

@@ -7,14 +7,20 @@ package checker

import (
    "context"
    "encoding/json"
    "io"
    "os"
    "path/filepath"

    "github.com/containerd/containerd/archive/compression"
    "github.com/opencontainers/go-digest"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"

-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
    modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
    ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func prettyDump(obj interface{}, name string) error {
@@ -25,70 +31,107 @@ func prettyDump(obj interface{}, name string) error {
    return os.WriteFile(name, bytes, 0644)
}

-// Output outputs OCI and Nydus image manifest, index, config to JSON file.
// Output outputs OCI and nydus image manifest, index, config to JSON file.
// Prefer to use source image to output OCI image information.
func (checker *Checker) Output(
-    ctx context.Context, sourceParsed, targetParsed *parser.Parsed, outputPath string,
    ctx context.Context, parsed *parser.Parsed, dir string,
) error {
-    logrus.Infof("Dumping OCI and Nydus manifests to %s", outputPath)
    logrus.WithField("type", tool.CheckImageType(parsed)).WithField("image", parsed.Remote.Ref).Info("dumping manifest")

    if err := os.MkdirAll(dir, 0755); err != nil {
        return errors.Wrap(err, "create output directory")
    }

-    if sourceParsed.Index != nil {
    if parsed.Index != nil && parsed.OCIImage != nil {
        if err := prettyDump(
-            sourceParsed.Index,
-            filepath.Join(outputPath, "oci_index.json"),
            parsed.Index,
            filepath.Join(dir, "oci_index.json"),
        ); err != nil {
            return errors.Wrap(err, "output oci index file")
        }
    }

-    if targetParsed.Index != nil {
    if parsed.Index != nil && parsed.NydusImage != nil {
        if err := prettyDump(
-            targetParsed.Index,
-            filepath.Join(outputPath, "nydus_index.json"),
            parsed.Index,
            filepath.Join(dir, "nydus_index.json"),
        ); err != nil {
            return errors.Wrap(err, "output nydus index file")
        }
    }

-    if sourceParsed.OCIImage != nil {
    if parsed.OCIImage != nil {
        if err := prettyDump(
-            sourceParsed.OCIImage.Manifest,
-            filepath.Join(outputPath, "oci_manifest.json"),
            parsed.OCIImage.Manifest,
            filepath.Join(dir, "oci_manifest.json"),
        ); err != nil {
            return errors.Wrap(err, "output OCI manifest file")
        }
        if err := prettyDump(
-            sourceParsed.OCIImage.Config,
-            filepath.Join(outputPath, "oci_config.json"),
            parsed.OCIImage.Config,
            filepath.Join(dir, "oci_config.json"),
        ); err != nil {
            return errors.Wrap(err, "output OCI config file")
        }
    }

-    if targetParsed.NydusImage != nil {
    if parsed.NydusImage != nil {
        if err := prettyDump(
-            targetParsed.NydusImage.Manifest,
-            filepath.Join(outputPath, "nydus_manifest.json"),
            parsed.NydusImage.Manifest,
            filepath.Join(dir, "nydus_manifest.json"),
        ); err != nil {
-            return errors.Wrap(err, "output Nydus manifest file")
            return errors.Wrap(err, "output nydus manifest file")
        }
        if err := prettyDump(
-            targetParsed.NydusImage.Config,
-            filepath.Join(outputPath, "nydus_config.json"),
            parsed.NydusImage.Config,
            filepath.Join(dir, "nydus_config.json"),
        ); err != nil {
-            return errors.Wrap(err, "output Nydus config file")
            return errors.Wrap(err, "output nydus config file")
        }

-        target := filepath.Join(outputPath, "nydus_bootstrap")
-        logrus.Infof("Pulling Nydus bootstrap to %s", target)
-        bootstrapReader, err := checker.targetParser.PullNydusBootstrap(ctx, targetParsed.NydusImage)
        bootstrapDir := filepath.Join(dir, "nydus_bootstrap")
        logrus.WithField("type", tool.CheckImageType(parsed)).WithField("image", parsed.Remote.Ref).Info("pulling bootstrap")
        var parser *parser.Parser
        if dir == "source" {
            parser = checker.sourceParser
        } else {
            parser = checker.targetParser
        }
        bootstrapReader, err := parser.PullNydusBootstrap(ctx, parsed.NydusImage)
        if err != nil {
-            return errors.Wrap(err, "pull Nydus bootstrap layer")
            return errors.Wrap(err, "pull nydus bootstrap layer")
        }
        defer bootstrapReader.Close()

-        if err := utils.UnpackFile(bootstrapReader, utils.BootstrapFileNameInLayer, target); err != nil {
-            return errors.Wrap(err, "unpack Nydus bootstrap layer")
-        }
        tarRc, err := compression.DecompressStream(bootstrapReader)
        if err != nil {
            return err
        }
        defer tarRc.Close()
        diffID := digest.SHA256.Digester()
        if err := utils.UnpackFromTar(io.TeeReader(tarRc, diffID.Hash()), bootstrapDir); err != nil {
            return errors.Wrap(err, "unpack nydus bootstrap layer")
        }

        diffIDs := parsed.NydusImage.Config.RootFS.DiffIDs
        manifest := parsed.NydusImage.Manifest
        if manifest.ArtifactType != modelspec.ArtifactTypeModelManifest && diffIDs[len(diffIDs)-1] != diffID.Digest() {
            return errors.Errorf(
                "invalid bootstrap layer diff id: %s (calculated) != %s (in image config)",
                diffID.Digest().String(),
                diffIDs[len(diffIDs)-1].String(),
            )
        }

        if manifest.ArtifactType == modelspec.ArtifactTypeModelManifest {
            if manifest.Subject == nil {
                return errors.New("missing subject in manifest")
            }
            if manifest.Subject.MediaType != ocispec.MediaTypeImageManifest {
                return errors.Errorf("invalid subject media type: %s", manifest.Subject.MediaType)
            }
        }
    }
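The bootstrap handling above decompresses the layer with DecompressStream and tees every uncompressed byte into a SHA-256 digester while unpacking, so the layer's diff ID can be verified against the image config without reading the stream twice. A self-contained sketch of that hashing pattern; the file name is illustrative and io.Discard stands in for the unpack step:

    package main

    import (
        "fmt"
        "io"
        "os"

        "github.com/opencontainers/go-digest"
    )

    func main() {
        tar, err := os.Open("layer.tar") // an already-decompressed layer stream
        if err != nil {
            panic(err)
        }
        defer tar.Close()

        digester := digest.SHA256.Digester()
        // Every byte read through the TeeReader is also written into the digester.
        tee := io.TeeReader(tar, digester.Hash())
        if _, err := io.Copy(io.Discard, tee); err != nil {
            panic(err)
        }
        fmt.Println("diff id:", digester.Digest())
    }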

View File

@@ -8,79 +8,90 @@ import (
    "encoding/json"
    "fmt"
    "os"
    "path/filepath"

    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"

-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker/tool"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
    "github.com/containerd/nydus-snapshotter/pkg/label"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
)

-// BootstrapRule validates bootstrap in Nydus image
// BootstrapRule validates bootstrap in nydus image
type BootstrapRule struct {
-    Parsed          *parser.Parsed
-    BootstrapPath   string
-    DebugOutputPath string
-    BackendType     string
    WorkDir        string
    NydusImagePath string

    SourceParsed        *parser.Parsed
    TargetParsed        *parser.Parsed
    SourceBackendType   string
    SourceBackendConfig string
    TargetBackendType   string
    TargetBackendConfig string
}

-type bootstrapDebug struct {
type output struct {
    Blobs []string `json:"blobs"`
}

func (rule *BootstrapRule) Name() string {
-    return "Bootstrap"
    return "bootstrap"
}

-func (rule *BootstrapRule) Validate() error {
-    logrus.Infof("Checking Nydus bootstrap")
func (rule *BootstrapRule) validate(parsed *parser.Parsed, dir string) error {
    if parsed == nil || parsed.NydusImage == nil {
        return nil
    }

    logrus.WithField("type", tool.CheckImageType(parsed)).WithField("image", parsed.Remote.Ref).Info("checking bootstrap")

    bootstrapDir := filepath.Join(rule.WorkDir, dir, "nydus_bootstrap")
    outputPath := filepath.Join(rule.WorkDir, dir, "nydus_output.json")

    // Get blob list in the blob table of bootstrap by calling
    // `nydus-image check` command
    builder := tool.NewBuilder(rule.NydusImagePath)
    if err := builder.Check(tool.BuilderOption{
-        BootstrapPath:   rule.BootstrapPath,
-        DebugOutputPath: rule.DebugOutputPath,
        BootstrapPath:   filepath.Join(bootstrapDir, utils.BootstrapFileNameInLayer),
        DebugOutputPath: outputPath,
    }); err != nil {
        return errors.Wrap(err, "invalid nydus bootstrap format")
    }

-    // For registry garbage collection, nydus puts the blobs to
-    // the layers in manifest, so here only need to check blob
-    // list consistency for registry backend.
-    if rule.BackendType != "registry" {
-        return nil
-    }
-
-    // Parse blob list from blob layers in Nydus manifest
    // Parse blob list from blob layers in nydus manifest
    blobListInLayer := map[string]bool{}
-    layers := rule.Parsed.NydusImage.Manifest.Layers
    layers := parsed.NydusImage.Manifest.Layers
    for i, layer := range layers {
        if layer.Annotations != nil && layers[i].Annotations[label.NydusRefLayer] != "" {
            // Ignore OCI reference layer check
            continue
        }
        if i != len(layers)-1 {
            blobListInLayer[layer.Digest.Hex()] = true
        }
    }

    // Parse blob list from blob table of bootstrap
-    var bootstrap bootstrapDebug
-    bootstrapBytes, err := os.ReadFile(rule.DebugOutputPath)
    var out output
    outputBytes, err := os.ReadFile(outputPath)
    if err != nil {
        return errors.Wrap(err, "read bootstrap debug json")
    }
-    if err := json.Unmarshal(bootstrapBytes, &bootstrap); err != nil {
    if err := json.Unmarshal(outputBytes, &out); err != nil {
        return errors.Wrap(err, "unmarshal bootstrap output JSON")
    }
    blobListInBootstrap := map[string]bool{}
    lostInLayer := false
-    for _, blobID := range bootstrap.Blobs {
    for _, blobID := range out.Blobs {
        blobListInBootstrap[blobID] = true
        if !blobListInLayer[blobID] {
            lostInLayer = true
        }
    }

-    if !lostInLayer {
    if len(blobListInLayer) == 0 || !lostInLayer {
        return nil
    }

@@ -94,3 +105,15 @@ func (rule *BootstrapRule) Validate() error {
        blobListInLayer,
    )
}

func (rule *BootstrapRule) Validate() error {
    if err := rule.validate(rule.SourceParsed, "source"); err != nil {
        return errors.Wrap(err, "source image: invalid nydus bootstrap")
    }

    if err := rule.validate(rule.TargetParsed, "target"); err != nil {
        return errors.Wrap(err, "target image: invalid nydus bootstrap")
    }

    return nil
}
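The consistency check above boils down to a set comparison: every blob recorded in the bootstrap's blob table must also appear as a blob-layer digest in the manifest (reference layers excepted). A reduced sketch with plain string slices standing in for descriptors:

    package main

    import "fmt"

    // blobsConsistent reports whether every blob ID from the bootstrap's
    // blob table is covered by a blob layer digest in the manifest.
    func blobsConsistent(layerDigests, bootstrapBlobs []string) bool {
        inLayer := make(map[string]bool, len(layerDigests))
        for _, d := range layerDigests {
            inLayer[d] = true
        }
        for _, id := range bootstrapBlobs {
            if !inLayer[id] {
                return false
            }
        }
        return true
    }

    func main() {
        fmt.Println(blobsConsistent([]string{"aaa", "bbb"}, []string{"aaa"})) // true
        fmt.Println(blobsConsistent([]string{"aaa"}, []string{"aaa", "ccc"})) // false
    }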

View File

@@ -6,7 +6,6 @@ package rule

import (
    "context"
-    "encoding/base64"
    "encoding/hex"
    "encoding/json"
    "fmt"
@@ -15,13 +14,12 @@ import (
    "reflect"
    "syscall"

    modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
    "github.com/distribution/reference"
-    dockerconfig "github.com/docker/cli/cli/config"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker/tool"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/remote"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
    "github.com/pkg/errors"
    "github.com/pkg/xattr"
    "github.com/sirupsen/logrus"
@@ -31,18 +29,23 @@ import (

var WorkerCount uint = 8

// FilesystemRule compares file metadata and data in the two mountpoints:
-// Mounted by Nydusd for Nydus image,
// Mounted by nydusd for nydus image,
// Mounted by Overlayfs for OCI image.
type FilesystemRule struct {
-    NydusdConfig    tool.NydusdConfig
-    Source          string
-    SourceMountPath string
-    SourceParsed    *parser.Parsed
-    SourcePath      string
-    SourceRemote    *remote.Remote
-    Target          string
-    TargetInsecure  bool
-    PlainHTTP       bool
    WorkDir    string
    NydusdPath string

    SourceImage *Image
    TargetImage *Image

    SourceBackendType   string
    SourceBackendConfig string
    TargetBackendType   string
    TargetBackendConfig string
}

type Image struct {
    Parsed   *parser.Parsed
    Insecure bool
}

// Node records file metadata and file data hash.
@@ -68,14 +71,14 @@ type RegistryBackendConfig struct {

func (node *Node) String() string {
    return fmt.Sprintf(
-        "Path: %s, Size: %d, Mode: %d, Rdev: %d, Symink: %s, UID: %d, GID: %d, "+
-            "Xattrs: %v, Hash: %s", node.Path, node.Size, node.Mode, node.Rdev, node.Symlink,
        "path: %s, size: %d, mode: %d, rdev: %d, symlink: %s, uid: %d, gid: %d, "+
            "xattrs: %v, hash: %s", node.Path, node.Size, node.Mode, node.Rdev, node.Symlink,
        node.UID, node.GID, node.Xattrs, hex.EncodeToString(node.Hash),
    )
}

func (rule *FilesystemRule) Name() string {
-    return "Filesystem"
    return "filesystem"
}

func getXattrs(path string) (map[string][]byte, error) {
@@ -134,13 +137,13 @@ func (rule *FilesystemRule) walk(rootfs string) (map[string]Node, error) {

    xattrs, err := getXattrs(path)
    if err != nil {
-        logrus.Warnf("Failed to get xattr: %s", err)
        logrus.Warnf("failed to get xattr: %s", err)
    }

    // Calculating file data hashes makes nydusd read data from the
    // backend, which adds network load
    var hash []byte
-    if rule.NydusdConfig.BackendType != "" && info.Mode().IsRegular() {
    if info.Mode().IsRegular() {
        hash, err = utils.HashFile(path)
        if err != nil {
            return err
@@ -168,20 +171,146 @@ func (rule *FilesystemRule) walk(rootfs string) (map[string]Node, error) {

    return nodes, nil
}

-func (rule *FilesystemRule) pullSourceImage() (*tool.Image, error) {
-    layers := rule.SourceParsed.OCIImage.Manifest.Layers
func (rule *FilesystemRule) mountNydusImage(image *Image, dir string) (func() error, error) {
    logrus.WithField("type", tool.CheckImageType(image.Parsed)).WithField("image", image.Parsed.Remote.Ref).Info("mounting image")
digestValidate := false
if image.Parsed.NydusImage != nil {
nydusManifest := parser.FindNydusBootstrapDesc(&image.Parsed.NydusImage.Manifest)
if nydusManifest != nil {
v := utils.GetNydusFsVersionOrDefault(nydusManifest.Annotations, utils.V5)
if v == utils.V5 {
// Digest validate is not currently supported for v6,
// but v5 supports it. In order to make the check more sufficient,
// this validate needs to be turned on for v5.
digestValidate = true
}
}
}
backendType := rule.SourceBackendType
backendConfig := rule.SourceBackendConfig
if dir == "target" {
backendType = rule.TargetBackendType
backendConfig = rule.TargetBackendConfig
}
mountDir := filepath.Join(rule.WorkDir, dir, "mnt")
nydusdDir := filepath.Join(rule.WorkDir, dir, "nydusd")
if err := os.MkdirAll(nydusdDir, 0755); err != nil {
return nil, errors.Wrap(err, "create nydusd directory")
}
nydusdConfig := tool.NydusdConfig{
EnablePrefetch: true,
NydusdPath: rule.NydusdPath,
BackendType: backendType,
BackendConfig: backendConfig,
BootstrapPath: filepath.Join(rule.WorkDir, dir, "nydus_bootstrap/image/image.boot"),
ExternalBackendConfigPath: filepath.Join(rule.WorkDir, dir, "nydus_bootstrap/image/backend.json"),
ConfigPath: filepath.Join(nydusdDir, "config.json"),
BlobCacheDir: filepath.Join(nydusdDir, "cache"),
APISockPath: filepath.Join(nydusdDir, "api.sock"),
MountPath: mountDir,
Mode: "direct",
DigestValidate: digestValidate,
}
if err := os.MkdirAll(nydusdConfig.BlobCacheDir, 0755); err != nil {
return nil, errors.Wrap(err, "create blob cache directory for nydusd")
}
if err := os.MkdirAll(nydusdConfig.MountPath, 0755); err != nil {
return nil, errors.Wrap(err, "create mountpoint directory of nydus image")
}
ref, err := reference.ParseNormalizedNamed(image.Parsed.Remote.Ref)
if err != nil {
return nil, err
}
if nydusdConfig.BackendType == "" {
nydusdConfig.BackendType = "registry"
if nydusdConfig.BackendConfig == "" {
backendConfig, err := utils.NewRegistryBackendConfig(ref, image.Insecure)
if err != nil {
return nil, errors.Wrap(err, "failed to parse backend configuration")
}
if image.Insecure {
backendConfig.SkipVerify = true
}
if image.Parsed.Remote.IsWithHTTP() {
backendConfig.Scheme = "http"
}
bytes, err := json.Marshal(backendConfig)
if err != nil {
return nil, errors.Wrap(err, "parse registry backend config")
}
nydusdConfig.BackendConfig = string(bytes)
}
}
if image.Parsed.NydusImage.Manifest.ArtifactType == modelspec.ArtifactTypeModelManifest {
if err := utils.BuildRuntimeExternalBackendConfig(nydusdConfig.BackendConfig, nydusdConfig.ExternalBackendConfigPath); err != nil {
return nil, errors.Wrap(err, "failed to build external backend config file")
}
}
nydusd, err := tool.NewNydusd(nydusdConfig)
if err != nil {
return nil, errors.Wrap(err, "create nydusd daemon")
}
if err := nydusd.Mount(); err != nil {
return nil, errors.Wrap(err, "mount nydus image")
}
umount := func() error {
if err := nydusd.Umount(false); err != nil {
return errors.Wrap(err, "umount nydus image")
}
if err := os.RemoveAll(mountDir); err != nil {
logrus.WithError(err).Warnf("cleanup mount directory: %s", mountDir)
}
if err := os.RemoveAll(nydusdDir); err != nil {
logrus.WithError(err).Warnf("cleanup nydusd directory: %s", nydusdDir)
}
return nil
}
return umount, nil
}
func (rule *FilesystemRule) mountOCIImage(image *Image, dir string) (func() error, error) {
logrus.WithField("type", tool.CheckImageType(image.Parsed)).WithField("image", image.Parsed.Remote.Ref).Infof("mounting image")
mountPath := filepath.Join(rule.WorkDir, dir, "mnt")
if err := os.MkdirAll(mountPath, 0755); err != nil {
return nil, errors.Wrap(err, "create mountpoint directory")
}
layerBasePath := filepath.Join(rule.WorkDir, dir, "layers")
if err := os.MkdirAll(layerBasePath, 0755); err != nil {
return nil, errors.Wrap(err, "create layer base directory")
}
layers := image.Parsed.OCIImage.Manifest.Layers
    worker := utils.NewWorkerPool(WorkerCount, uint(len(layers)))
    for idx := range layers {
        worker.Put(func(idx int) func() error {
            return func() error {
                layer := layers[idx]
-                reader, err := rule.SourceRemote.Pull(context.Background(), layer, true)
                reader, err := image.Parsed.Remote.Pull(context.Background(), layer, true)
                if err != nil {
                    return errors.Wrap(err, "pull source image layers from the remote registry")
                }
-                if err = utils.UnpackTargz(context.Background(), filepath.Join(rule.SourcePath, fmt.Sprintf("layer-%d", idx)), reader, true); err != nil {
                layerDir := filepath.Join(layerBasePath, fmt.Sprintf("layer-%d", idx))
                if err = utils.UnpackTargz(context.Background(), layerDir, reader, true); err != nil {
                    return errors.Wrap(err, "unpack source image layers")
                }

@@ -194,124 +323,59 @@ func (rule *FilesystemRule) pullSourceImage() (*tool.Image, error) {
        return nil, errors.Wrap(err, "pull source image layers in wait")
    }

-    return &tool.Image{
-        Layers:     layers,
-        Source:     rule.Source,
-        SourcePath: rule.SourcePath,
-        Rootfs:     rule.SourceMountPath,
-    }, nil
-}
-
-func (rule *FilesystemRule) mountSourceImage() (*tool.Image, error) {
-    logrus.Infof("Mounting source image to %s", rule.SourceMountPath)
-
-    image, err := rule.pullSourceImage()
-    if err != nil {
-        return nil, errors.Wrap(err, "pull source image")
-    }
-
-    if err := image.Umount(); err != nil {
-        return nil, errors.Wrap(err, "umount previous rootfs")
-    }
-
-    if err := image.Mount(); err != nil {
-        return nil, errors.Wrap(err, "mount source image")
-    }
-
-    return image, nil
-}
-
-func NewRegistryBackendConfig(parsed reference.Named) (RegistryBackendConfig, error) {
-    backendConfig := RegistryBackendConfig{
-        Scheme: "https",
-        Host:   reference.Domain(parsed),
-        Repo:   reference.Path(parsed),
-    }
-
-    config := dockerconfig.LoadDefaultConfigFile(os.Stderr)
-    authConfig, err := config.GetAuthConfig(backendConfig.Host)
-    if err != nil {
-        return backendConfig, errors.Wrap(err, "get docker registry auth config")
-    }
-    var auth string
-    if authConfig.Username != "" && authConfig.Password != "" {
-        auth = base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%s:%s", authConfig.Username, authConfig.Password)))
-    }
-    backendConfig.Auth = auth
-
-    return backendConfig, nil
-}
-
-func (rule *FilesystemRule) mountNydusImage() (*tool.Nydusd, error) {
-    logrus.Infof("Mounting Nydus image to %s", rule.NydusdConfig.MountPath)
-
-    if err := os.MkdirAll(rule.NydusdConfig.BlobCacheDir, 0755); err != nil {
-        return nil, errors.Wrap(err, "create blob cache directory for Nydusd")
-    }
-
-    if err := os.MkdirAll(rule.NydusdConfig.MountPath, 0755); err != nil {
-        return nil, errors.Wrap(err, "create mountpoint directory of Nydus image")
-    }
-
-    parsed, err := reference.ParseNormalizedNamed(rule.Target)
-    if err != nil {
-        return nil, err
-    }
-
-    if rule.NydusdConfig.BackendType == "" {
-        rule.NydusdConfig.BackendType = "registry"
-        if rule.NydusdConfig.BackendConfig == "" {
-            backendConfig, err := NewRegistryBackendConfig(parsed)
-            if err != nil {
-                return nil, errors.Wrap(err, "failed to parse backend configuration")
-            }
-            if rule.TargetInsecure {
-                backendConfig.SkipVerify = true
-            }
-            if rule.PlainHTTP {
-                backendConfig.Scheme = "http"
-            }
-            bytes, err := json.Marshal(backendConfig)
-            if err != nil {
-                return nil, errors.Wrap(err, "parse registry backend config")
-            }
-            rule.NydusdConfig.BackendConfig = string(bytes)
-        }
-    }
-
-    nydusd, err := tool.NewNydusd(rule.NydusdConfig)
-    if err != nil {
-        return nil, errors.Wrap(err, "create Nydusd daemon")
-    }
-
-    if err := nydusd.Mount(); err != nil {
-        return nil, errors.Wrap(err, "mount Nydus image")
-    }
-
-    return nydusd, nil
-}
    mounter := &tool.Image{
        Layers:       layers,
        LayerBaseDir: layerBasePath,
        Rootfs:       mountPath,
    }

    if err := mounter.Umount(); err != nil {
        return nil, errors.Wrap(err, "umount previous rootfs")
    }

    if err := mounter.Mount(); err != nil {
        return nil, errors.Wrap(err, "mount source image")
    }

    umount := func() error {
        if err := mounter.Umount(); err != nil {
            logrus.WithError(err).Warnf("umount rootfs")
        }
        if err := os.RemoveAll(layerBasePath); err != nil {
            logrus.WithError(err).Warnf("cleanup layers directory %s", layerBasePath)
        }
        return nil
    }

    return umount, nil
}
-func (rule *FilesystemRule) verify() error {
-    logrus.Infof("Verifying filesystem for source and Nydus image")
func (rule *FilesystemRule) mountImage(image *Image, dir string) (func() error, error) {
    if image.Parsed.OCIImage != nil {
        return rule.mountOCIImage(image, dir)
    } else if image.Parsed.NydusImage != nil {
        return rule.mountNydusImage(image, dir)
    }
    return nil, fmt.Errorf("invalid image for mounting")
}

func (rule *FilesystemRule) verify(sourceRootfs, targetRootfs string) error {
    logrus.Infof("comparing filesystem")

    sourceNodes := map[string]Node{}

    // Concurrently walk the rootfs directory of source and nydus image
    walkErr := make(chan error)
    go func() {
        var err error
-        sourceNodes, err = rule.walk(rule.SourceMountPath)
        sourceNodes, err = rule.walk(sourceRootfs)
        walkErr <- err
    }()

-    nydusNodes, err := rule.walk(rule.NydusdConfig.MountPath)
    targetNodes, err := rule.walk(targetRootfs)
    if err != nil {
-        return errors.Wrap(err, "walk rootfs of Nydus image")
        return errors.Wrap(err, "walk rootfs of target image")
    }

    if err := <-walkErr; err != nil {
@@ -319,58 +383,44 @@ func (rule *FilesystemRule) verify() error {
    }

    for path, sourceNode := range sourceNodes {
-        nydusNode, exist := nydusNodes[path]
        targetNode, exist := targetNodes[path]
        if !exist {
-            return fmt.Errorf("File not found in Nydus image: %s", path)
            return fmt.Errorf("file not found in target image: %s", path)
        }
-        delete(nydusNodes, path)
        delete(targetNodes, path)

-        if path != "/" && !reflect.DeepEqual(sourceNode, nydusNode) {
-            return fmt.Errorf("File not match in Nydus image: %s <=> %s", sourceNode.String(), nydusNode.String())
        if path != "/" && !reflect.DeepEqual(sourceNode, targetNode) {
            return fmt.Errorf("file not match in target image:\n\t[source] %s\n\t[target] %s", sourceNode.String(), targetNode.String())
        }
    }

-    for path := range nydusNodes {
-        return fmt.Errorf("File not found in source image: %s", path)
    for path := range targetNodes {
        return fmt.Errorf("file not found in source image: %s", path)
    }

    return nil
}

func (rule *FilesystemRule) Validate() error {
-    // Skip filesystem validation if no source image is specified
-    if rule.Source == "" {
    // Skip filesystem validation if no source or target image is specified
    if rule.SourceImage.Parsed == nil || rule.TargetImage.Parsed == nil {
        return nil
    }

-    // Cleanup temporary directories
-    defer func() {
-        if err := os.RemoveAll(rule.SourcePath); err != nil {
-            logrus.WithError(err).Warnf("cleanup source image directory %s", rule.SourcePath)
-        }
-        if err := os.RemoveAll(rule.NydusdConfig.MountPath); err != nil {
-            logrus.WithError(err).Warnf("cleanup nydus image directory %s", rule.NydusdConfig.MountPath)
-        }
-        if err := os.RemoveAll(rule.NydusdConfig.BlobCacheDir); err != nil {
-            logrus.WithError(err).Warnf("cleanup nydus blob cache directory %s", rule.NydusdConfig.BlobCacheDir)
-        }
-    }()
-
-    image, err := rule.mountSourceImage()
    umountSource, err := rule.mountImage(rule.SourceImage, "source")
    if err != nil {
        return err
    }
-    defer image.Umount()
    defer umountSource()

-    nydusd, err := rule.mountNydusImage()
    umountTarget, err := rule.mountImage(rule.TargetImage, "target")
    if err != nil {
        return err
    }
-    defer nydusd.Umount(false)
    defer umountTarget()

-    if err := rule.verify(); err != nil {
-        return err
-    }
-
-    return nil
    return rule.verify(
        filepath.Join(rule.WorkDir, "source/mnt"),
        filepath.Join(rule.WorkDir, "target/mnt"),
    )
}
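verify above overlaps the two directory walks: the source mountpoint is walked in a goroutine while the target is walked inline, and the backgrounded walk reports its error over a channel. A trimmed sketch of the same pattern, comparing only file sizes and using illustrative mountpoints:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // walkSizes records the size of every regular entry under root,
    // keyed by path relative to root.
    func walkSizes(root string) (map[string]int64, error) {
        nodes := map[string]int64{}
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            info, err := d.Info()
            if err != nil {
                return err
            }
            rel, _ := filepath.Rel(root, path)
            nodes[rel] = info.Size()
            return nil
        })
        return nodes, err
    }

    func main() {
        var src map[string]int64
        errCh := make(chan error)
        go func() {
            var err error
            src, err = walkSizes("/tmp/source-mnt") // illustrative mountpoint
            errCh <- err
        }()
        dst, err := walkSizes("/tmp/target-mnt") // illustrative mountpoint
        if err != nil {
            panic(err)
        }
        if err := <-errCh; err != nil {
            panic(err)
        }
        for p, sz := range src {
            if dst[p] != sz {
                fmt.Println("mismatch:", p)
            }
        }
    }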

View File

@@ -6,104 +6,132 @@ package rule

import (
    "encoding/json"
    "fmt"
    "reflect"

    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"

-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
    modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
)

-// ManifestRule validates manifest format of Nydus image
// ManifestRule validates manifest format of nydus image
type ManifestRule struct {
    SourceParsed *parser.Parsed
    TargetParsed *parser.Parsed
-    MultiPlatform bool
-    BackendType   string
-    ExpectedArch  string
}

func (rule *ManifestRule) Name() string {
-    return "Manifest"
    return "manifest"
}

-func (rule *ManifestRule) Validate() error {
-    logrus.Infof("Checking Nydus manifest")
-
-    // Ensure the target image represents a manifest list,
-    // and it should consist of OCI and Nydus manifest
-    if rule.MultiPlatform {
-        if rule.TargetParsed.Index == nil {
-            return errors.New("not found image manifest list")
-        }
-        foundNydusDesc := false
-        foundOCIDesc := false
-        for _, desc := range rule.TargetParsed.Index.Manifests {
-            if desc.Platform == nil {
-                continue
-            }
-            if desc.Platform.Architecture == rule.ExpectedArch && desc.Platform.OS == "linux" {
-                if utils.IsNydusPlatform(desc.Platform) {
-                    foundNydusDesc = true
-                } else {
-                    foundOCIDesc = true
-                }
-            }
-        }
-        if !foundNydusDesc {
-            return errors.Errorf("not found nydus image of specified platform linux/%s", rule.ExpectedArch)
-        }
-        if !foundOCIDesc {
-            return errors.Errorf("not found OCI image of specified platform linux/%s", rule.ExpectedArch)
-        }
-    }
-
-    // Check manifest of Nydus
-    if rule.TargetParsed.NydusImage == nil {
-        return errors.New("invalid nydus image manifest")
-    }
-    layers := rule.TargetParsed.NydusImage.Manifest.Layers
-    for i, layer := range layers {
-        if i == len(layers)-1 {
-            if layer.Annotations[utils.LayerAnnotationNydusBootstrap] != "true" {
-                return errors.New("invalid bootstrap layer in nydus image manifest")
-            }
-        } else {
-            if layer.MediaType != utils.MediaTypeNydusBlob ||
-                layer.Annotations[utils.LayerAnnotationNydusBlob] != "true" {
-                return errors.New("invalid blob layer in nydus image manifest")
-            }
-        }
-    }
-
-    // Check Nydus image config with OCI image
-    if rule.SourceParsed.OCIImage != nil {
-        //nolint:staticcheck
-        // ignore static check SA1019 here. We have to assign deprecated field.
-        //
-        // Addition: [ArgsEscaped in spec](https://github.com/opencontainers/image-spec/pull/892)
-        rule.TargetParsed.NydusImage.Config.Config.ArgsEscaped = rule.SourceParsed.OCIImage.Config.Config.ArgsEscaped
-        ociConfig, err := json.Marshal(rule.SourceParsed.OCIImage.Config.Config)
-        if err != nil {
-            return errors.New("marshal oci image config")
-        }
-        nydusConfig, err := json.Marshal(rule.TargetParsed.NydusImage.Config.Config)
-        if err != nil {
-            return errors.New("marshal nydus image config")
-        }
-        if !reflect.DeepEqual(ociConfig, nydusConfig) {
-            return errors.New("nydus image config should be equal with oci image config")
-        }
-    }
func (rule *ManifestRule) validateConfig(sourceImage, targetImage *parser.Image) error {
    //nolint:staticcheck
    // ignore static check SA1019 here. We have to assign deprecated field.
    //
    // Skip ArgsEscaped's Check
    //
    // This field is present only for legacy compatibility with Docker and
    // should not be used by new image builders. Nydusify (1.6 and above)
    // ignores it, which is an expected behavior.
    // Also ignore it in check.
    //
    // Addition: [ArgsEscaped in spec](https://github.com/opencontainers/image-spec/pull/892)
    sourceImage.Config.Config.ArgsEscaped = targetImage.Config.Config.ArgsEscaped

    sourceConfig, err := json.Marshal(sourceImage.Config.Config)
    if err != nil {
        return errors.New("marshal source image config")
    }
    targetConfig, err := json.Marshal(targetImage.Config.Config)
    if err != nil {
        return errors.New("marshal target image config")
    }
    if !reflect.DeepEqual(sourceConfig, targetConfig) {
        return errors.New("source image config should be equal with target image config")
    }

    return nil
}

func (rule *ManifestRule) validateOCI(image *parser.Image) error {
    // Check config diff IDs
    layers := image.Manifest.Layers
    artifact := image.Manifest.ArtifactType
    if artifact != modelspec.ArtifactTypeModelManifest && len(image.Config.RootFS.DiffIDs) != len(layers) {
        return fmt.Errorf("invalid diff ids in image config: %d (diff ids) != %d (layers)", len(image.Config.RootFS.DiffIDs), len(layers))
    }
    return nil
}

func (rule *ManifestRule) validateNydus(image *parser.Image) error {
    // Check bootstrap and blob layers
    layers := image.Manifest.Layers
    manifestArtifact := image.Manifest.ArtifactType
    for i, layer := range layers {
        if i == len(layers)-1 {
            if layer.Annotations[utils.LayerAnnotationNydusBootstrap] != "true" {
                return errors.New("invalid bootstrap layer in nydus image manifest")
            }
            if manifestArtifact == modelspec.ArtifactTypeModelManifest && layer.Annotations[utils.LayerAnnotationNydusArtifactType] != manifestArtifact {
                return errors.New("invalid manifest artifact type in nydus image manifest")
            }
        } else {
            if manifestArtifact != modelspec.ArtifactTypeModelManifest &&
                (layer.MediaType != utils.MediaTypeNydusBlob ||
                    layer.Annotations[utils.LayerAnnotationNydusBlob] != "true") {
                return errors.New("invalid blob layer in nydus image manifest")
            }
        }
    }

    // Check config diff IDs
    if manifestArtifact != modelspec.ArtifactTypeModelManifest && len(image.Config.RootFS.DiffIDs) != len(layers) {
        return fmt.Errorf("invalid diff ids in image config: %d (diff ids) != %d (layers)", len(image.Config.RootFS.DiffIDs), len(layers))
    }

    return nil
}

func (rule *ManifestRule) validate(parsed *parser.Parsed) error {
    if parsed == nil {
        return nil
    }

    logrus.WithField("type", tool.CheckImageType(parsed)).WithField("image", parsed.Remote.Ref).Infof("checking manifest")

    if parsed.OCIImage != nil {
        return errors.Wrap(rule.validateOCI(parsed.OCIImage), "invalid OCI image manifest")
    } else if parsed.NydusImage != nil {
        return errors.Wrap(rule.validateNydus(parsed.NydusImage), "invalid nydus image manifest")
    }

    return errors.New("not found valid image")
}

func (rule *ManifestRule) Validate() error {
    if err := rule.validate(rule.SourceParsed); err != nil {
        return errors.Wrap(err, "source image: invalid manifest")
    }
    if err := rule.validate(rule.TargetParsed); err != nil {
        return errors.Wrap(err, "target image: invalid manifest")
    }
    if rule.SourceParsed != nil && rule.TargetParsed != nil {
        sourceImage := rule.SourceParsed.OCIImage
        if sourceImage == nil {
            sourceImage = rule.SourceParsed.NydusImage
        }
        targetImage := rule.TargetParsed.OCIImage
        if targetImage == nil {
            targetImage = rule.TargetParsed.NydusImage
        }
        if err := rule.validateConfig(sourceImage, targetImage); err != nil {
            return fmt.Errorf("validate image config: %v", err)
        }
    }
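validateConfig neutralizes the deprecated ArgsEscaped field before comparing the serialized configs; since reflect.DeepEqual on two []byte values is equivalent to a byte comparison, the check can be sketched like this (a reduced stand-in, not the rule itself):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"

        ocispec "github.com/opencontainers/image-spec/specs-go/v1"
    )

    // configsEqual compares two image configs while ignoring the deprecated
    // ArgsEscaped field, mirroring the normalization done by validateConfig.
    func configsEqual(source, target ocispec.Image) (bool, error) {
        source.Config.ArgsEscaped = target.Config.ArgsEscaped //nolint:staticcheck
        a, err := json.Marshal(source.Config)
        if err != nil {
            return false, err
        }
        b, err := json.Marshal(target.Config)
        if err != nil {
            return false, err
        }
        return bytes.Equal(a, b), nil
    }

    func main() {
        src := ocispec.Image{Config: ocispec.ImageConfig{ArgsEscaped: true}} //nolint:staticcheck
        dst := ocispec.Image{}
        eq, _ := configsEqual(src, dst)
        fmt.Println(eq) // true: the deprecated field cannot cause a spurious diff
    }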

View File

@@ -1,28 +1,41 @@
// Copyright 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0

package rule

import (
    "testing"

-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
-    "github.com/stretchr/testify/assert"
-    v1 "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
    "github.com/stretchr/testify/require"

    "github.com/opencontainers/go-digest"
    ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestManifestName(t *testing.T) {
    rule := ManifestRule{}
    require.Equal(t, "manifest", rule.Name())
}

func TestManifestRuleValidate_IgnoreDeprecatedField(t *testing.T) {
    source := &parser.Parsed{
        Remote: &remote.Remote{},
-        NydusImage: &parser.Image{
-            Config: v1.Image{
-                Config: v1.ImageConfig{
        OCIImage: &parser.Image{
            Config: ocispec.Image{
                Config: ocispec.ImageConfig{
                    ArgsEscaped: true, // deprecated field
                },
            },
        },
    }
    target := &parser.Parsed{
        Remote: &remote.Remote{},
        NydusImage: &parser.Image{
-            Config: v1.Image{
-                Config: v1.ImageConfig{
            Config: ocispec.Image{
                Config: ocispec.ImageConfig{
                    ArgsEscaped: false,
                },
            },
@@ -34,5 +47,82 @@ func TestManifestRuleValidate_IgnoreDeprecatedField(t *testing.T) {
        TargetParsed: target,
    }

-    assert.Nil(t, rule.Validate())
    require.Nil(t, rule.Validate())
}
func TestManifestRuleValidate_TargetLayer(t *testing.T) {
rule := ManifestRule{}
rule.TargetParsed = &parser.Parsed{
Remote: &remote.Remote{},
NydusImage: &parser.Image{
Manifest: ocispec.Manifest{
MediaType: "application/vnd.docker.distribution.manifest.v2+json",
Config: ocispec.Descriptor{
MediaType: "application/vnd.oci.image.config.v1+json",
Digest: "sha256:563fad1f51cec2ee4c972af4bfd7275914061e2f73770585cfb04309cb5e0d6b",
Size: 523,
},
Layers: []ocispec.Descriptor{
{
MediaType: "application / vnd.oci.image.layer.v1.tar",
Digest: "sha256:09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059",
Size: 83528010,
Annotations: map[string]string{
"containerd.io/snapshot/nydus-blob": "true",
},
},
{
MediaType: "application/vnd.oci.image.layer.nydus.blob.v1",
Digest: "sha256:09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059",
Size: 83528010,
Annotations: map[string]string{
"containerd.io/snapshot/nydus-blob": "true",
},
},
},
},
},
}
require.Error(t, rule.Validate())
require.Contains(t, rule.Validate().Error(), "invalid blob layer in nydus image manifest")
rule.TargetParsed.NydusImage.Manifest.Layers = []ocispec.Descriptor{
{
MediaType: "application/vnd.oci.image.layer.nydus.blob.v1",
Digest: "sha256:09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059",
Size: 83528010,
Annotations: map[string]string{
"containerd.io/snapshot/nydus-blob": "true",
},
},
}
require.Error(t, rule.Validate())
require.Contains(t, rule.Validate().Error(), "invalid bootstrap layer in nydus image manifest")
rule.TargetParsed.NydusImage.Config.RootFS.DiffIDs = []digest.Digest{
"sha256:09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059",
"sha256:bec98c9e3dce739877b8f5fe1cddd339de1db2b36c20995d76f6265056dbdb08",
}
rule.TargetParsed.NydusImage.Manifest.Layers = []ocispec.Descriptor{
{
MediaType: "application/vnd.oci.image.layer.nydus.blob.v1",
Digest: "sha256:09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059",
Size: 83528010,
Annotations: map[string]string{
"containerd.io/snapshot/nydus-blob": "true",
},
},
{
MediaType: "application/vnd.oci.image.layer.v1.tar+gzip",
Digest: "sha256:aec98c9e3dce739877b8f5fe1cddd339de1db2b36c20995d76f6265056dbdb08",
Size: 273320,
Annotations: map[string]string{
"containerd.io/snapshot/nydus-bootstrap": "true",
"containerd.io/snapshot/nydus-reference-blob-ids": "[\"09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059\"]",
},
},
}
require.NoError(t, rule.Validate())
} }

View File

@@ -29,7 +29,7 @@ func NewBuilder(binaryPath string) *Builder {
    }
}

-// Check calls `nydus-image check` to parse Nydus bootstrap
// Check calls `nydus-image check` to parse nydus bootstrap
// and output debug information to specified JSON file.
func (builder *Builder) Check(option BuilderOption) error {
    args := []string{
@@ -46,9 +46,5 @@ func (builder *Builder) Check(option BuilderOption) error {
    cmd.Stdout = builder.stdout
    cmd.Stderr = builder.stderr

-    if err := cmd.Run(); err != nil {
-        return err
-    }
-
-    return nil
    return cmd.Run()
}
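A hypothetical invocation of this wrapper; the binary location and paths are illustrative, and only the two BuilderOption fields shown in this diff are assumed to exist:

    package main

    import (
        "log"

        "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
    )

    func main() {
        // Run `nydus-image check` against an unpacked bootstrap and have it
        // dump blob information as JSON for later comparison.
        builder := tool.NewBuilder("/usr/local/bin/nydus-image")
        if err := builder.Check(tool.BuilderOption{
            BootstrapPath:   "/tmp/check/nydus_bootstrap/image/image.boot",
            DebugOutputPath: "/tmp/check/nydus_output.json",
        }); err != nil {
            log.Fatalf("invalid nydus bootstrap format: %v", err)
        }
    }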

View File

@@ -11,6 +11,7 @@ import (
    "strings"

    "github.com/containerd/containerd/mount"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
    ocispec "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/pkg/errors"
)
@@ -45,11 +46,19 @@ func mkMounts(dirs []string) []mount.Mount {
    }
}

func CheckImageType(parsed *parser.Parsed) string {
    if parsed.NydusImage != nil {
        return "nydus"
    } else if parsed.OCIImage != nil {
        return "oci"
    }
    return "unknown"
}

type Image struct {
    Layers       []ocispec.Descriptor
-    Source       string
-    SourcePath   string
    LayerBaseDir string
    Rootfs       string
}

// Mount mounts rootfs of OCI image.
@@ -62,7 +71,7 @@ func (image *Image) Mount() error {
    count := len(image.Layers)
    for idx := range image.Layers {
        layerName := fmt.Sprintf("layer-%d", count-idx-1)
-        layerDir := filepath.Join(image.SourcePath, layerName)
        layerDir := filepath.Join(image.LayerBaseDir, layerName)
        dirs = append(dirs, strings.ReplaceAll(layerDir, ":", "\\:"))
    }
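Mount assembles an overlayfs lowerdir from the unpacked layer directories: the upper-most layer must come first, hence the reversed layer-%d indexing, and ':' is escaped because it separates lowerdir entries. A standalone sketch of that option-string construction:

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // lowerdir builds the overlayfs lowerdir option for layerCount layers
    // unpacked under layerBaseDir as layer-0 (bottom) .. layer-N (top).
    func lowerdir(layerBaseDir string, layerCount int) string {
        dirs := make([]string, 0, layerCount)
        for idx := 0; idx < layerCount; idx++ {
            layerName := fmt.Sprintf("layer-%d", layerCount-idx-1)
            layerDir := filepath.Join(layerBaseDir, layerName)
            dirs = append(dirs, strings.ReplaceAll(layerDir, ":", "\\:"))
        }
        return "lowerdir=" + strings.Join(dirs, ":")
    }

    func main() {
        fmt.Println(lowerdir("/tmp/layers", 3))
        // lowerdir=/tmp/layers/layer-2:/tmp/layers/layer-1:/tmp/layers/layer-0
    }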

View File

@@ -14,24 +14,28 @@ import (
    "net/http"
    "os"
    "os/exec"
    "strings"
    "text/template"
    "time"

    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
)

type NydusdConfig struct {
    EnablePrefetch               bool
    NydusdPath                   string
    BootstrapPath                string
    ConfigPath                   string
    BackendType                  string
    BackendConfig                string
    ExternalBackendConfigPath    string
    ExternalBackendProxyCacheDir string
    BlobCacheDir                 string
    APISockPath                  string
    MountPath                    string
    Mode                         string
    DigestValidate               bool
}

// Nydusd runs nydusd binary.
@@ -50,6 +54,9 @@ var configTpl = `
    "type": "{{.BackendType}}",
    "config": {{.BackendConfig}}
},
"external_backend": {
    "config_path": "{{.ExternalBackendConfigPath}}"
},
"cache": {
    "type": "blobcache",
    "config": {
@@ -76,12 +83,10 @@ func makeConfig(conf NydusdConfig) error {
    if conf.BackendType == "" {
        conf.BackendType = "localfs"
        conf.BackendConfig = `{"dir": "/fake"}`
-        conf.EnablePrefetch = false
    } else {
        if conf.BackendConfig == "" {
            return errors.Errorf("empty backend configuration string")
        }
-        conf.EnablePrefetch = true
    }
    if err := tpl.Execute(&ret, conf); err != nil {
        return errors.New("failed to prepare configuration file for Nydusd")
@@ -176,10 +181,11 @@ func (nydusd *Nydusd) Mount() error {
        "--apisock",
        nydusd.APISockPath,
        "--log-level",
-        "error",
        "warn",
    }

    cmd := exec.Command(nydusd.NydusdPath, args...)
    logrus.Debugf("Command: %s %s", nydusd.NydusdPath, strings.Join(args, " "))
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
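makeConfig renders the nydusd JSON configuration from the Go template above. A simplified, runnable sketch of the same mechanism, with the template truncated to a few fields of the real one:

    package main

    import (
        "bytes"
        "fmt"
        "text/template"
    )

    const tpl = `{
      "device": {
        "backend": {
          "type": "{{.BackendType}}",
          "config": {{.BackendConfig}}
        }
      },
      "mode": "{{.Mode}}",
      "digest_validate": {{.DigestValidate}}
    }`

    type conf struct {
        BackendType    string
        BackendConfig  string
        Mode           string
        DigestValidate bool
    }

    func main() {
        t := template.Must(template.New("nydusd").Parse(tpl))
        var out bytes.Buffer
        c := conf{BackendType: "localfs", BackendConfig: `{"dir": "/fake"}`, Mode: "direct"}
        if err := t.Execute(&out, c); err != nil {
            panic(err)
        }
        fmt.Println(out.String())
    }

Note that BackendConfig is spliced in as raw JSON rather than a quoted string, which is why makeConfig insists on a non-empty configuration string for real backends.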

View File

@@ -1,7 +1,10 @@
package generator

import (
    "compress/gzip"
    "context"
    "encoding/json"
    "io"
    "io/fs"
    "os"
    "path/filepath"
@@ -10,20 +13,47 @@ import (
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"

-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/build"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/provider"
-    "github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils"
    "github.com/containerd/containerd/namespaces"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/backend"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/build"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
    originprovider "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
    "github.com/goharbor/acceleration-service/pkg/remote"

    "github.com/BraveY/snapshotter-converter/converter"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
    "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
    "github.com/dustin/go-humanize"
    "github.com/goharbor/acceleration-service/pkg/platformutil"
    serverutils "github.com/goharbor/acceleration-service/pkg/utils"
    "github.com/opencontainers/go-digest"
    "golang.org/x/sync/errgroup"
    "golang.org/x/sync/semaphore"

    "github.com/containerd/containerd/content"
    containerdErrdefs "github.com/containerd/containerd/errdefs"
    "github.com/goharbor/acceleration-service/pkg/errdefs"
    ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// Opt defines Chunkdict generate options.
// Note: sources is one or more Nydus image references.
type Opt struct {
    WorkDir        string
    Sources        []string
    Target         string
    SourceInsecure bool
    TargetInsecure bool

    BackendType      string
    BackendConfig    string
    BackendForcePush bool

-    WorkDir        string
    NydusImagePath string
    ExpectedArch   string

    AllPlatforms bool
    Platforms    string
}

// Generator generates chunkdict by deduplicating multiple nydus images
@@ -33,12 +63,16 @@ type Generator struct {
    sourcesParser []*parser.Parser
}

type output struct {
    Blobs []string
}

// New creates Generator instance.
func New(opt Opt) (*Generator, error) {
    // TODO: support sources image resolver
    var sourcesParser []*parser.Parser
    for _, source := range opt.Sources {
-        sourcesRemote, err := provider.DefaultRemote(source, opt.SourceInsecure)
        sourcesRemote, err := originprovider.DefaultRemote(source, opt.SourceInsecure)
        if err != nil {
            return nil, errors.Wrap(err, "Init source image parser")
        }
@@ -59,48 +93,435 @@ func New(opt Opt) (*Generator, error) {

// Generate saves multiple Nydus bootstraps into the database one by one.
func (generator *Generator) Generate(ctx context.Context) error {
-    for index := range generator.Sources {
-        if err := generator.save(ctx, index); err != nil {
-            if utils.RetryWithHTTP(err) {
-                generator.sourcesParser[index].Remote.MaybeWithHTTP(err)
-                if err := generator.save(ctx, index); err != nil {
-                    return err
-                }
-            }
-        }
-    }
-    return nil
-}
    var bootstrapPaths []string
    bootstrapPaths, err := generator.pull(ctx)

    if err != nil {
        if utils.RetryWithHTTP(err) {
            for index := range generator.Sources {
                generator.sourcesParser[index].Remote.MaybeWithHTTP(err)
            }
        }
        bootstrapPaths, err = generator.pull(ctx)
        if err != nil {
            return err
        }
    }
chunkdictBootstrapPath, outputPath, err := generator.generate(ctx, bootstrapPaths)
if err != nil {
return err
}
if err := generator.push(ctx, chunkdictBootstrapPath, outputPath); err != nil {
return err
}
// return os.RemoveAll(generator.WorkDir)
return nil
}
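Generate, and the pull/push paths below, all share the same fallback shape: attempt the registry operation, and only when the error suggests a plain-HTTP endpoint, downgrade and retry once. A generic sketch of that shape, with a hypothetical predicate standing in for utils.RetryWithHTTP / errdefs.NeedsRetryWithHTTP:

    package main

    import (
        "errors"
        "fmt"
    )

    // withPlainHTTPRetry runs op over HTTPS first and retries once over
    // plain HTTP when the error is classified as retryable.
    func withPlainHTTPRetry(op func(plainHTTP bool) error, retryable func(error) bool) error {
        if err := op(false); err != nil {
            if retryable(err) {
                return op(true) // retry over plain HTTP
            }
            return err
        }
        return nil
    }

    func main() {
        attempts := 0
        err := withPlainHTTPRetry(
            func(plainHTTP bool) error {
                attempts++
                if !plainHTTP {
                    return errors.New("server gave HTTP response to HTTPS client")
                }
                return nil
            },
            func(err error) bool { return attempts == 1 }, // stand-in predicate
        )
        fmt.Println(err, attempts) // <nil> 2
    }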
// Pull the bootstrap of nydus image
func (generator *Generator) pull(ctx context.Context) ([]string, error) {
var bootstrapPaths []string
for index := range generator.Sources {
sourceParsed, err := generator.sourcesParser[index].Parse(ctx)
if err != nil {
return nil, errors.Wrap(err, "parse Nydus image")
}
// Create a directory to store the image bootstrap
nydusImageName := strings.Replace(generator.Sources[index], "/", ":", -1)
bootstrapDirPath := filepath.Join(generator.WorkDir, nydusImageName)
if err := os.MkdirAll(bootstrapDirPath, fs.ModePerm); err != nil {
return nil, errors.Wrap(err, "creat work directory")
}
if err := generator.Output(ctx, sourceParsed, bootstrapDirPath, index); err != nil {
return nil, errors.Wrap(err, "output image information")
}
bootstrapPath := filepath.Join(bootstrapDirPath, "nydus_bootstrap")
bootstrapPaths = append(bootstrapPaths, bootstrapPath)
}
return bootstrapPaths, nil
}
func (generator *Generator) generate(_ context.Context, bootstrapSlice []string) (string, string, error) {
// Invoke "nydus-image chunkdict generate" command
currentDir, _ := os.Getwd()
builder := build.NewBuilder(generator.NydusImagePath)
chunkdictBootstrapPath := filepath.Join(generator.WorkDir, "chunkdict_bootstrap")
databaseType := "sqlite"
var databasePath string
if strings.HasPrefix(generator.WorkDir, "/") {
databasePath = databaseType + "://" + filepath.Join(generator.WorkDir, "database.db")
} else {
databasePath = databaseType + "://" + filepath.Join(currentDir, generator.WorkDir, "database.db")
}
outputPath := filepath.Join(generator.WorkDir, "nydus_bootstrap_output.json")
if err := builder.Generate(build.GenerateOption{
BootstrapPaths: bootstrapSlice,
ChunkdictBootstrapPath: chunkdictBootstrapPath,
DatabasePath: databasePath,
OutputPath: outputPath,
}); err != nil {
return "", "", errors.Wrap(err, "invalid nydus bootstrap format")
}
logrus.Infof("Successfully generate image chunk dictionary")
return chunkdictBootstrapPath, outputPath, nil
}
func hosts(generator *Generator) remote.HostFunc {
maps := make(map[string]bool)
for _, source := range generator.Sources {
maps[source] = generator.SourceInsecure
}
maps[generator.Target] = generator.TargetInsecure
return func(ref string) (remote.CredentialFunc, bool, error) {
return remote.NewDockerConfigCredFunc(), maps[ref], nil
}
}
func (generator *Generator) push(ctx context.Context, chunkdictBootstrapPath string, outputPath string) error {
// Basic configuration
ctx = namespaces.WithNamespace(ctx, "nydusify")
platformMC, err := platformutil.ParsePlatforms(generator.AllPlatforms, generator.Platforms)
if err != nil {
return err
}
pvd, err := provider.New(generator.WorkDir, hosts(generator), 200, "v1", platformMC, 0)
if err != nil {
return err
}
var bkd backend.Backend
if generator.BackendType != "" {
bkd, err = backend.NewBackend(generator.BackendType, []byte(generator.BackendConfig), nil)
if err != nil {
return errors.Wrapf(err, "new backend")
}
}
// Pull source image
for index := range generator.Sources {
if err := pvd.Pull(ctx, generator.Sources[index]); err != nil {
if errdefs.NeedsRetryWithHTTP(err) {
pvd.UsePlainHTTP()
if err := pvd.Pull(ctx, generator.Sources[index]); err != nil {
return errors.Wrap(err, "try to pull image")
}
} else {
return errors.Wrap(err, "pull source image")
            }
        }
    }

-// "save" stores information of chunk and blob of a Nydus Image in the database
-func (generator *Generator) save(ctx context.Context, index int) error {
-    sourceParsed, err := generator.sourcesParser[index].Parse(ctx)
-    if err != nil {
-        return errors.Wrap(err, "parse Nydus image")
-    }
-
-    // Create a directory to store the image bootstrap
-    nydusImageName := strings.Replace(generator.Sources[index], "/", ":", -1)
-    folderPath := filepath.Join(generator.WorkDir, nydusImageName)
-    if err := os.MkdirAll(folderPath, fs.ModePerm); err != nil {
-        return errors.Wrap(err, "create work directory")
-    }
-    if err := generator.Output(ctx, sourceParsed, folderPath, index); err != nil {
-        return errors.Wrap(err, "output image information")
-    }
-
-    // Invoke "nydus-image save" command
-    builder := build.NewBuilder(generator.NydusImagePath)
-    if err := builder.Save(build.SaveOption{
-        BootstrapPath: filepath.Join(folderPath, "nydus_bootstrap"),
-    }); err != nil {
-        return errors.Wrap(err, "invalid nydus bootstrap format")
-    }
-
-    logrus.Infof("Save chunk information from image %s", generator.sourcesParser[index].Remote.Ref)
-    if err := os.RemoveAll(folderPath); err != nil {
-        return errors.Wrap(err, "remove work directory")
-    }
-    return nil
-}

    logrus.Infof("pulled source image %s", generator.Sources[0])
    sourceImage, err := pvd.Image(ctx, generator.Sources[0])
    if err != nil {
        return errors.Wrap(err, "find image from store")
    }
    sourceDescs, err := serverutils.GetManifests(ctx, pvd.ContentStore(), *sourceImage, platformMC)
    if err != nil {
        return errors.Wrap(err, "get image manifests")
    }
    targetDescs := make([]ocispec.Descriptor, len(sourceDescs))

    sem := semaphore.NewWeighted(1)
    eg := errgroup.Group{}
    for idx := range sourceDescs {
        func(idx int) {
            eg.Go(func() error {
                sem.Acquire(context.Background(), 1)
                defer sem.Release(1)
                sourceDesc := sourceDescs[idx]
                targetDesc := &sourceDesc
                // Get the blob from backend
                descs, _targetDesc, err := pushBlobFromBackend(ctx, pvd, bkd, sourceDesc, *generator, chunkdictBootstrapPath, outputPath)
                if err != nil {
                    return errors.Wrap(err, "get resolver")
                }
                if _targetDesc != nil {
                    targetDesc = _targetDesc
                    store := newStore(pvd.ContentStore(), descs)
                    pvd.SetContentStore(store)
                }
                targetDescs[idx] = *targetDesc
                if err := pvd.Push(ctx, *targetDesc, generator.Target); err != nil {
                    if errdefs.NeedsRetryWithHTTP(err) {
                        pvd.UsePlainHTTP()
                        if err := pvd.Push(ctx, *targetDesc, generator.Target); err != nil {
                            return errors.Wrap(err, "try to push image manifest")
                        }
                    } else {
                        return errors.Wrap(err, "push target image manifest")
                    }
                }
                return nil
            })
        }(idx)
    }
    if err := eg.Wait(); err != nil {
        return errors.Wrap(err, "push image manifests")
    }

    return nil
}
func pushBlobFromBackend(
ctx context.Context, pvd *provider.Provider, bkd backend.Backend, src ocispec.Descriptor, generator Generator, bootstrapPath string, outputPath string,
) ([]ocispec.Descriptor, *ocispec.Descriptor, error) {
manifest := ocispec.Manifest{}
if _, err := serverutils.ReadJSON(ctx, pvd.ContentStore(), &manifest, src); err != nil {
return nil, nil, errors.Wrap(err, "read manifest from store")
}
fsversion := src.Annotations["containerd.io/snapshot/nydus-fs-version"]
// Read the Nydusify output JSON to get the list of blobs
var out output
bytes, err := os.ReadFile(outputPath)
if err != nil {
return nil, nil, errors.Wrap(err, "read output file")
}
if err := json.Unmarshal(bytes, &out); err != nil {
return nil, nil, errors.Wrap(err, "unmarshal output json")
}
blobIDs := []string{}
blobIDMap := map[string]bool{}
for _, blobID := range out.Blobs {
if blobIDMap[blobID] {
continue
}
blobIDs = append(blobIDs, blobID)
blobIDMap[blobID] = true
}
blobDescs := make([]ocispec.Descriptor, len(blobIDs))
eg, ctx := errgroup.WithContext(ctx)
sem := semaphore.NewWeighted(int64(provider.LayerConcurrentLimit))
for idx := range blobIDs {
func(idx int) {
eg.Go(func() error {
sem.Acquire(context.Background(), 1)
defer sem.Release(1)
blobID := blobIDs[idx]
blobDigest := digest.Digest("sha256:" + blobID)
var blobSize int64
var rc io.ReadCloser
if bkd != nil {
rc, err = bkd.Reader(blobID)
if err != nil {
return errors.Wrap(err, "get blob reader")
}
blobSize, err = bkd.Size(blobID)
if err != nil {
return errors.Wrap(err, "get blob size")
}
} else {
imageDesc, err := generator.sourcesParser[0].Remote.Resolve(ctx)
if err != nil {
if strings.Contains(err.Error(), "x509: certificate signed by unknown authority") {
logrus.Warningln("try to enable \"--source-insecure\" / \"--target-insecure\" option")
}
return errors.Wrap(err, "resolve image")
}
rc, err = generator.sourcesParser[0].Remote.Pull(ctx, *imageDesc, true)
if err != nil {
return errors.Wrap(err, "get blob reader")
}
blobInfo, err := pvd.ContentStore().Info(ctx, blobDigest)
if err != nil {
return errors.Wrap(err, "get info from content store")
}
blobSize = blobInfo.Size
}
defer rc.Close()
blobSizeStr := humanize.Bytes(uint64(blobSize))
logrus.WithField("digest", blobDigest).WithField("size", blobSizeStr).Infof("pushing blob from backend")
blobDescs[idx] = ocispec.Descriptor{
Digest: blobDigest,
Size: blobSize,
MediaType: converter.MediaTypeNydusBlob,
Annotations: map[string]string{
converter.LayerAnnotationNydusBlob: "true",
},
}
writer, err := getPushWriter(ctx, pvd, blobDescs[idx], generator.Opt)
if err != nil {
if errdefs.NeedsRetryWithHTTP(err) {
pvd.UsePlainHTTP()
writer, err = getPushWriter(ctx, pvd, blobDescs[idx], generator.Opt)
}
if err != nil {
return errors.Wrap(err, "get push writer")
}
}
if writer != nil {
defer writer.Close()
return content.Copy(ctx, writer, rc, blobSize, blobDigest)
}
logrus.WithField("digest", blobDigest).WithField("size", blobSizeStr).Infof("pushed blob from backend")
return nil
})
}(idx)
}
if err := eg.Wait(); err != nil {
return nil, nil, errors.Wrap(err, "push blobs")
}
// Update manifest blob layers
manifest.Layers = nil
manifest.Layers = append(blobDescs, manifest.Layers...)
// Update bootstrap
cw, err := content.OpenWriter(ctx, pvd.ContentStore(), content.WithRef("merge-bootstrap"))
if err != nil {
return nil, nil, errors.Wrap(err, "open content store writer")
}
defer cw.Close()
bootstrapPathTar := "image/image.boot"
rc, err := utils.PackTargz(bootstrapPath, bootstrapPathTar, false)
if err != nil {
return nil, nil, errors.Wrap(err, "get bootstrap reader")
}
defer rc.Close()
gw := gzip.NewWriter(cw)
uncompressedDgst := digest.SHA256.Digester()
uncompressed := io.MultiWriter(gw, uncompressedDgst.Hash())
buffer := make([]byte, 32*1024)
if _, err := io.CopyBuffer(uncompressed, rc, buffer); err != nil {
return nil, nil, errors.Wrap(err, "copy bootstrap targz into content store")
}
if err := gw.Close(); err != nil {
return nil, nil, errors.Wrap(err, "close gzip writer")
}
compressedDgst := cw.Digest()
if err := cw.Commit(ctx, 0, compressedDgst, content.WithLabels(map[string]string{
"containerd.io/uncompressed": uncompressedDgst.Digest().String(),
})); err != nil {
if !containerdErrdefs.IsAlreadyExists(err) {
return nil, nil, errors.Wrap(err, "commit to content store")
}
}
if err := cw.Close(); err != nil {
return nil, nil, errors.Wrap(err, "close content store writer")
}
bootstrapInfo, err := pvd.ContentStore().Info(ctx, compressedDgst)
if err != nil {
return nil, nil, errors.Wrap(err, "get info from content store")
}
bootstrapSize := bootstrapInfo.Size
bootstrapDesc := ocispec.Descriptor{
Digest: compressedDgst,
Size: bootstrapSize,
MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
Annotations: map[string]string{
"containerd.io/snapshot/nydus-bootstrap": "true",
"containerd.io/snapshot/nydus-fs-version": fsversion,
},
}
manifest.Layers = append(manifest.Layers, bootstrapDesc)
// Update image config
blobDigests := []digest.Digest{}
for idx := range blobDescs {
blobDigests = append(blobDigests, blobDescs[idx].Digest)
}
config := ocispec.Image{}
if _, err := serverutils.ReadJSON(ctx, pvd.ContentStore(), &config, manifest.Config); err != nil {
return nil, nil, errors.Wrap(err, "read config json")
}
config.RootFS.DiffIDs = nil
config.RootFS.DiffIDs = append(blobDigests, config.RootFS.DiffIDs...)
config.RootFS.DiffIDs = append(config.RootFS.DiffIDs, digest.Digest(uncompressedDgst.Digest().String()))
configDesc, err := serverutils.WriteJSON(ctx, pvd.ContentStore(), config, manifest.Config, generator.Target, nil)
if err != nil {
return nil, nil, errors.Wrap(err, "write config json")
}
manifest.Config = *configDesc
target, err := serverutils.WriteJSON(ctx, pvd.ContentStore(), &manifest, src, generator.Target, nil)
if err != nil {
return nil, nil, errors.Wrap(err, "write manifest json")
}
return blobDescs, target, nil
}
func getPushWriter(ctx context.Context, pvd *provider.Provider, desc ocispec.Descriptor, opt Opt) (content.Writer, error) {
resolver, err := pvd.Resolver(opt.Target)
if err != nil {
return nil, errors.Wrap(err, "get resolver")
}
ref := opt.Target
if !strings.Contains(ref, "@") {
ref = ref + "@" + desc.Digest.String()
}
pusher, err := resolver.Pusher(ctx, ref)
if err != nil {
return nil, errors.Wrap(err, "create pusher")
}
writer, err := pusher.Push(ctx, desc)
if err != nil {
if containerdErrdefs.IsAlreadyExists(err) {
return nil, nil
}
return nil, err
}
return writer, nil
}
type store struct {
content.Store
remotes []ocispec.Descriptor
}
func newStore(base content.Store, remotes []ocispec.Descriptor) *store {
return &store{
Store: base,
remotes: remotes,
}
}
func (s *store) Info(ctx context.Context, dgst digest.Digest) (content.Info, error) {
info, err := s.Store.Info(ctx, dgst)
if err != nil {
if !containerdErrdefs.IsNotFound(err) {
return content.Info{}, err
}
for _, desc := range s.remotes {
if desc.Digest == dgst {
return content.Info{
Digest: desc.Digest,
Size: desc.Size,
}, nil
}
}
return content.Info{}, err
}
return info, nil
}
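// Usage sketch (illustrative, not part of the original code): wrapping the
// provider's content store lets a later manifest push resolve descriptors
// that only exist in the remote registry, since Info falls back to the
// remote descriptor list:
//
//	cs := newStore(pvd.ContentStore(), blobDescs)
//	info, err := cs.Info(ctx, blobDescs[0].Digest) // served from remotes if absent locally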


@ -10,8 +10,8 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/parser" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/utils" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
) )
func prettyDump(obj interface{}, name string) error { func prettyDump(obj interface{}, name string) error {


@ -0,0 +1,940 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package committer
import (
"bytes"
"compress/gzip"
"context"
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"sync"
"time"
"github.com/containerd/containerd/labels"
"github.com/BraveY/snapshotter-converter/converter"
"github.com/containerd/containerd"
"github.com/containerd/containerd/content/local"
"github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/reference/docker"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/committer/diff"
parserPkg "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
"github.com/dustin/go-humanize"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
// Opt defines the options for committing container changes
type Opt struct {
WorkDir string
ContainerdAddress string
NydusImagePath string
Namespace string
ContainerID string
SourceInsecure bool
TargetRef string
TargetInsecure bool
MaximumTimes int
FsVersion string
Compressor string
WithPaths []string
WithoutPaths []string
}
type Committer struct {
workDir string
builder string
manager *Manager
}
// NewCommitter creates a new Committer instance
func NewCommitter(opt Opt) (*Committer, error) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return nil, errors.Wrap(err, "prepare work dir")
}
workDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-committer-")
if err != nil {
return nil, errors.Wrap(err, "create temp dir")
}
cm, err := NewManager(opt.ContainerdAddress)
if err != nil {
return nil, errors.Wrap(err, "new container manager")
}
return &Committer{
workDir: workDir,
builder: opt.NydusImagePath,
manager: cm,
}, nil
}
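// Usage sketch (illustrative; all field values below are assumptions, not
// project defaults):
//
//	opt := Opt{
//		WorkDir:           "/tmp/nydusify",
//		ContainerdAddress: "/run/containerd/containerd.sock",
//		NydusImagePath:    "nydus-image",
//		Namespace:         "default",
//		ContainerID:       "abc123",
//		TargetRef:         "registry.example.com/app:committed",
//		MaximumTimes:      400,
//	}
//	c, err := NewCommitter(opt)
//	if err != nil {
//		// handle error
//	}
//	err = c.Commit(context.Background(), opt)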
func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
// Resolve container ID first
if err := cm.resolveContainerID(ctx, &opt); err != nil {
return errors.Wrap(err, "failed to resolve container ID")
}
ctx = namespaces.WithNamespace(ctx, opt.Namespace)
targetRef, err := ValidateRef(opt.TargetRef)
if err != nil {
return errors.Wrap(err, "parse target image name")
}
inspect, err := cm.manager.Inspect(ctx, opt.ContainerID)
if err != nil {
return errors.Wrap(err, "inspect container")
}
originalSourceRef := inspect.Image
logrus.Infof("pulling base bootstrap")
start := time.Now()
image, committedLayers, err := cm.pullBootstrap(ctx, originalSourceRef, "bootstrap-base", opt.SourceInsecure)
if err != nil {
return errors.Wrap(err, "pull base bootstrap")
}
logrus.Infof("pulled base bootstrap, elapsed: %s", time.Since(start))
if committedLayers >= opt.MaximumTimes {
return fmt.Errorf("reached maximum committed times %d", opt.MaximumTimes)
}
if opt.FsVersion, opt.Compressor, err = cm.obtainBootStrapInfo(ctx, "bootstrap-base"); err != nil {
return errors.Wrap(err, "obtain bootstrap FsVersion and Compressor")
}
// Push lower blobs
for idx, layer := range image.Manifest.Layers {
if layer.MediaType == utils.MediaTypeNydusBlob {
name := fmt.Sprintf("blob-mount-%d", idx)
if _, err := cm.pushBlob(ctx, name, layer.Digest, originalSourceRef, targetRef, opt.TargetInsecure, image); err != nil {
return errors.Wrap(err, "push lower blob")
}
}
}
mountList := NewMountList()
var upperBlob *Blob
mountBlobs := make([]Blob, len(opt.WithPaths))
commit := func() error {
eg := errgroup.Group{}
eg.Go(func() error {
var upperBlobDigest *digest.Digest
if err := withRetry(func() error {
var cerr error
upperBlobDigest, cerr = cm.commitUpperByDiff(ctx, mountList.Add, opt.WithPaths, opt.WithoutPaths, inspect.LowerDirs, inspect.UpperDir, "blob-upper", opt.FsVersion, opt.Compressor)
return cerr
}, 3); err != nil {
return errors.Wrap(err, "commit upper")
}
logrus.Infof("pushing blob for upper")
start := time.Now()
upperBlobDesc, err := cm.pushBlob(ctx, "blob-upper", *upperBlobDigest, originalSourceRef, targetRef, opt.TargetInsecure, image)
if err != nil {
return errors.Wrap(err, "push upper blob")
}
upperBlob = &Blob{
Name: "blob-upper",
Desc: *upperBlobDesc,
}
logrus.Infof("pushed blob for upper, elapsed: %s", time.Since(start))
return nil
})
if len(opt.WithPaths) > 0 {
for idx := range opt.WithPaths {
func(idx int) {
eg.Go(func() error {
withPath := opt.WithPaths[idx]
name := fmt.Sprintf("blob-mount-%d", idx)
var mountBlobDigest *digest.Digest
if err := withRetry(func() error {
var cerr error
mountBlobDigest, cerr = cm.commitMountByNSEnter(ctx, inspect.Pid, withPath, name, opt.FsVersion, opt.Compressor)
return cerr
}, 3); err != nil {
return errors.Wrap(err, "commit mount")
}
logrus.Infof("pushing blob for mount")
start := time.Now()
mountBlobDesc, err := cm.pushBlob(ctx, name, *mountBlobDigest, originalSourceRef, targetRef, opt.TargetInsecure, image)
if err != nil {
return errors.Wrap(err, "push mount blob")
}
mountBlobs[idx] = Blob{
Name: name,
Desc: *mountBlobDesc,
}
logrus.Infof("pushed blob for mount, elapsed: %s", time.Since(start))
return nil
})
}(idx)
}
}
if err := eg.Wait(); err != nil {
return err
}
appendedEg := errgroup.Group{}
appendedMutex := sync.Mutex{}
if len(mountList.paths) > 0 {
logrus.Infof("need commit appended mount path: %s", strings.Join(mountList.paths, ", "))
}
for idx := range mountList.paths {
func(idx int) {
appendedEg.Go(func() error {
mountPath := mountList.paths[idx]
name := fmt.Sprintf("blob-appended-mount-%d", idx)
var mountBlobDigest *digest.Digest
if err := withRetry(func() error {
var cerr error
mountBlobDigest, cerr = cm.commitMountByNSEnter(ctx, inspect.Pid, mountPath, name, opt.FsVersion, opt.Compressor)
return cerr
}, 3); err != nil {
return errors.Wrap(err, "commit appended mount")
}
logrus.Infof("pushing blob for appended mount")
start := time.Now()
mountBlobDesc, err := cm.pushBlob(ctx, name, *mountBlobDigest, originalSourceRef, targetRef, opt.TargetInsecure, image)
if err != nil {
return errors.Wrap(err, "push appended mount blob")
}
appendedMutex.Lock()
mountBlobs = append(mountBlobs, Blob{
Name: name,
Desc: *mountBlobDesc,
})
appendedMutex.Unlock()
logrus.Infof("pushed blob for appended mount, elapsed: %s", time.Since(start))
return nil
})
}(idx)
}
return appendedEg.Wait()
}
// Ensure filesystem changes are written to disk before committing
// This prevents issues where changes are still in memory buffers
// and not yet visible in the overlay filesystem's upper directory
logrus.Infof("syncing filesystem before commit")
if err := cm.syncFilesystem(ctx, opt.ContainerID); err != nil {
return errors.Wrap(err, "failed to sync filesystem")
}
if err := cm.pause(ctx, opt.ContainerID, commit); err != nil {
return errors.Wrap(err, "pause container to commit")
}
logrus.Infof("merging base and upper bootstraps")
_, bootstrapDiffID, err := cm.mergeBootstrap(ctx, *upperBlob, mountBlobs, "bootstrap-base", "bootstrap-merged.tar")
if err != nil {
return errors.Wrap(err, "merge bootstrap")
}
logrus.Infof("pushing committed image to %s", targetRef)
if err := cm.pushManifest(ctx, *image, *bootstrapDiffID, targetRef, "bootstrap-merged.tar", opt.FsVersion, upperBlob, mountBlobs, opt.TargetInsecure); err != nil {
return errors.Wrap(err, "push manifest")
}
return nil
}
func (cm *Committer) pullBootstrap(ctx context.Context, ref, bootstrapName string, insecure bool) (*parserPkg.Image, int, error) {
remoter, err := provider.DefaultRemote(ref, insecure)
if err != nil {
return nil, 0, errors.Wrap(err, "create remote")
}
parser, err := parserPkg.New(remoter, runtime.GOARCH)
if err != nil {
return nil, 0, errors.Wrap(err, "create parser")
}
var parsed *parserPkg.Parsed
parsed, err = parser.Parse(ctx)
if err != nil {
if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err)
parsed, err = parser.Parse(ctx)
if err != nil {
return nil, 0, errors.Wrap(err, "parse nydus image")
}
} else {
return nil, 0, errors.Wrap(err, "parse nydus image")
}
}
if parsed.NydusImage == nil {
return nil, 0, fmt.Errorf("not a nydus image: %s", ref)
}
bootstrapDesc := parserPkg.FindNydusBootstrapDesc(&parsed.NydusImage.Manifest)
if bootstrapDesc == nil {
return nil, 0, fmt.Errorf("nydus bootstrap layer not found")
}
committedLayers := 0
_commitBlobs := bootstrapDesc.Annotations[utils.LayerAnnotationNydusCommitBlobs]
if _commitBlobs != "" {
committedLayers = len(strings.Split(_commitBlobs, ","))
logrus.Infof("detected committed layers: %d", committedLayers)
}
target := filepath.Join(cm.workDir, bootstrapName)
reader, err := parser.PullNydusBootstrap(ctx, parsed.NydusImage)
if err != nil {
return nil, 0, errors.Wrap(err, "pull bootstrap layer")
}
var closeErr error
defer func() {
if err := reader.Close(); err != nil {
closeErr = errors.Wrap(err, "close bootstrap reader")
}
}()
if err := utils.UnpackFile(reader, utils.BootstrapFileNameInLayer, target); err != nil {
return nil, 0, errors.Wrap(err, "unpack bootstrap layer")
}
if closeErr != nil {
return nil, 0, closeErr
}
return parsed.NydusImage, committedLayers, nil
}
func (cm *Committer) commitUpperByDiff(ctx context.Context, appendMount func(path string), withPaths []string, withoutPaths []string, lowerDirs, upperDir, blobName, fsversion, compressor string) (*digest.Digest, error) {
logrus.Infof("committing upper")
start := time.Now()
blobPath := filepath.Join(cm.workDir, blobName)
blob, err := os.Create(blobPath)
if err != nil {
return nil, errors.Wrap(err, "create upper blob file")
}
defer blob.Close()
digester := digest.SHA256.Digester()
counter := Counter{}
tarWc, err := converter.Pack(ctx, io.MultiWriter(blob, digester.Hash(), &counter), converter.PackOption{
WorkDir: cm.workDir,
FsVersion: fsversion,
Compressor: compressor,
BuilderPath: cm.builder,
})
if err != nil {
return nil, errors.Wrap(err, "initialize pack to blob")
}
if err := diff.Diff(ctx, appendMount, withPaths, withoutPaths, tarWc, lowerDirs, upperDir); err != nil {
return nil, errors.Wrap(err, "make diff")
}
if err := tarWc.Close(); err != nil {
return nil, errors.Wrap(err, "pack to blob")
}
blobDigest := digester.Digest()
logrus.Infof("committed upper, size: %s, elapsed: %s", humanize.Bytes(uint64(counter.Size())), time.Since(start))
return &blobDigest, nil
}
// getDistributionSourceLabel returns the source label key and value for the image distribution
func getDistributionSourceLabel(sourceRef string) (string, string) {
named, err := docker.ParseDockerRef(sourceRef)
if err != nil {
return "", ""
}
host := docker.Domain(named)
labelValue := docker.Path(named)
labelKey := fmt.Sprintf("%s.%s", labels.LabelDistributionSource, host)
return labelKey, labelValue
}
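// For example (illustrative): with sourceRef "docker.io/library/ubuntu:latest"
// this returns the key "containerd.io/distribution.source.docker.io" and the
// value "library/ubuntu" (labels.LabelDistributionSource is containerd's
// "containerd.io/distribution.source" prefix).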
// pushBlob pushes a blob to the target registry
func (cm *Committer) pushBlob(ctx context.Context, blobName string, blobDigest digest.Digest, sourceRef string, targetRef string, insecure bool, image *parserPkg.Image) (*ocispec.Descriptor, error) {
logrus.Infof("pushing blob: %s, digest: %s", blobName, blobDigest)
targetRemoter, err := provider.DefaultRemote(targetRef, insecure)
if err != nil {
return nil, errors.Wrap(err, "create target remote")
}
// Check if this is a lower blob (starts with "blob-mount-" but not in workDir)
isLowerBlob := strings.HasPrefix(blobName, "blob-mount-")
blobPath := filepath.Join(cm.workDir, blobName)
var blobDesc ocispec.Descriptor
var reader io.Reader
var readerCloser io.Closer
var closeErr error
defer func() {
if readerCloser != nil {
if err := readerCloser.Close(); err != nil {
closeErr = errors.Wrap(err, "close blob reader")
}
}
}()
if isLowerBlob {
logrus.Debugf("handling lower blob: %s", blobName)
// For lower blobs, use remote access
blobDesc = ocispec.Descriptor{
Digest: blobDigest,
MediaType: utils.MediaTypeNydusBlob,
}
// Find corresponding layer in source manifest to get size
var sourceLayer *ocispec.Descriptor
for i := range image.Manifest.Layers {
layer := image.Manifest.Layers[i]
if layer.Digest == blobDigest {
// Take the address of the slice element, not of the loop variable
sourceLayer = &image.Manifest.Layers[i]
blobDesc.Size = layer.Size
break
}
}
if sourceLayer == nil {
return nil, fmt.Errorf("layer not found in source image: %s", blobDigest)
}
if blobDesc.Size <= 0 {
return nil, fmt.Errorf("invalid blob size: %d", blobDesc.Size)
}
logrus.Debugf("lower blob size: %d", blobDesc.Size)
// Use source image remoter to get blob data
sourceRemoter, err := provider.DefaultRemote(sourceRef, insecure)
if err != nil {
return nil, errors.Wrap(err, "create source remote")
}
// Get ReaderAt for remote blob
readerAt, err := sourceRemoter.ReaderAt(ctx, *sourceLayer, true)
if err != nil {
return nil, errors.Wrap(err, "create remote reader for lower blob")
}
if readerAt == nil {
return nil, fmt.Errorf("got nil reader for lower blob: %s", blobName)
}
reader = io.NewSectionReader(readerAt, 0, readerAt.Size())
if closer, ok := readerAt.(io.Closer); ok {
readerCloser = closer
}
// Add required annotations
blobDesc.Annotations = map[string]string{
utils.LayerAnnotationUncompressed: blobDigest.String(),
utils.LayerAnnotationNydusBlob: "true",
}
} else {
logrus.Debugf("handling local blob: %s", blobName)
// Handle local blob
blobRa, err := local.OpenReader(blobPath)
if err != nil {
return nil, errors.Wrap(err, "open reader for blob")
}
if blobRa == nil {
return nil, fmt.Errorf("got nil reader for local blob: %s", blobName)
}
size := blobRa.Size()
if size <= 0 {
blobRa.Close()
return nil, fmt.Errorf("invalid local blob size: %d", size)
}
logrus.Debugf("local blob size: %d", size)
reader = io.NewSectionReader(blobRa, 0, size)
readerCloser = blobRa
blobDesc = ocispec.Descriptor{
Digest: blobDigest,
Size: size,
MediaType: utils.MediaTypeNydusBlob,
Annotations: map[string]string{
utils.LayerAnnotationUncompressed: blobDigest.String(),
utils.LayerAnnotationNydusBlob: "true",
},
}
}
// Add distribution source label
distributionSourceLabel, distributionSourceLabelValue := getDistributionSourceLabel(sourceRef)
if distributionSourceLabel != "" {
if blobDesc.Annotations == nil {
blobDesc.Annotations = make(map[string]string)
}
blobDesc.Annotations[distributionSourceLabel] = distributionSourceLabelValue
}
logrus.Debugf("pushing blob: digest=%s, size=%d", blobDesc.Digest, blobDesc.Size)
if err := targetRemoter.Push(ctx, blobDesc, true, reader); err != nil {
if utils.RetryWithHTTP(err) {
targetRemoter.MaybeWithHTTP(err)
logrus.Debugf("retrying push with HTTP")
if err := targetRemoter.Push(ctx, blobDesc, true, reader); err != nil {
return nil, errors.Wrap(err, "push blob with HTTP")
}
} else {
return nil, errors.Wrap(err, "push blob")
}
}
if closeErr != nil {
return nil, closeErr
}
return &blobDesc, nil
}
func (cm *Committer) pause(ctx context.Context, containerID string, handle func() error) error {
logrus.Infof("pausing container: %s", containerID)
if err := cm.manager.Pause(ctx, containerID); err != nil {
return errors.Wrap(err, "pause container")
}
if err := handle(); err != nil {
logrus.Infof("unpausing container: %s", containerID)
if err := cm.manager.UnPause(ctx, containerID); err != nil {
logrus.Errorf("unpause container: %s", containerID)
}
return err
}
logrus.Infof("unpausing container: %s", containerID)
return cm.manager.UnPause(ctx, containerID)
}
// syncFilesystem forces filesystem sync to ensure all changes are written to disk.
// This is crucial for overlay filesystems where changes may still be in memory
// buffers and not yet visible in the upper directory when committing.
func (cm *Committer) syncFilesystem(ctx context.Context, containerID string) error {
inspect, err := cm.manager.Inspect(ctx, containerID)
if err != nil {
return errors.Wrap(err, "inspect container for sync")
}
// Use nsenter to execute sync command in the container's namespace
config := &Config{
Mount: true,
PID: true,
Target: inspect.Pid,
}
stderr, err := config.ExecuteContext(ctx, io.Discard, "sync")
if err != nil {
return errors.Wrapf(err, "execute sync in container namespace: %s", strings.TrimSpace(stderr))
}
// Also sync the host filesystem to ensure overlay changes are written
cmd := exec.CommandContext(ctx, "sync")
if err := cmd.Run(); err != nil {
return errors.Wrap(err, "execute host sync")
}
return nil
}
func (cm *Committer) pushManifest(
ctx context.Context, nydusImage parserPkg.Image, bootstrapDiffID digest.Digest, targetRef, bootstrapName, fsversion string, upperBlob *Blob, mountBlobs []Blob, insecure bool,
) error {
lowerBlobLayers := []ocispec.Descriptor{}
for idx := range nydusImage.Manifest.Layers {
layer := nydusImage.Manifest.Layers[idx]
if layer.MediaType == utils.MediaTypeNydusBlob {
lowerBlobLayers = append(lowerBlobLayers, layer)
}
}
// Push image config
config := nydusImage.Config
config.RootFS.DiffIDs = []digest.Digest{}
for idx := range lowerBlobLayers {
config.RootFS.DiffIDs = append(config.RootFS.DiffIDs, lowerBlobLayers[idx].Digest)
}
for idx := range mountBlobs {
mountBlob := mountBlobs[idx]
config.RootFS.DiffIDs = append(config.RootFS.DiffIDs, mountBlob.Desc.Digest)
}
config.RootFS.DiffIDs = append(config.RootFS.DiffIDs, upperBlob.Desc.Digest)
config.RootFS.DiffIDs = append(config.RootFS.DiffIDs, bootstrapDiffID)
configBytes, configDesc, err := cm.makeDesc(config, nydusImage.Manifest.Config)
if err != nil {
return errors.Wrap(err, "make config desc")
}
remoter, err := provider.DefaultRemote(targetRef, insecure)
if err != nil {
return errors.Wrap(err, "create remote")
}
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err)
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
return errors.Wrap(err, "push image config")
}
} else {
return errors.Wrap(err, "push image config")
}
}
// Push bootstrap layer
bootstrapTarPath := filepath.Join(cm.workDir, bootstrapName)
bootstrapTar, err := os.Open(bootstrapTarPath)
if err != nil {
return errors.Wrap(err, "open bootstrap tar file")
}
bootstrapTarGzPath := filepath.Join(cm.workDir, bootstrapName+".gz")
bootstrapTarGz, err := os.Create(bootstrapTarGzPath)
if err != nil {
return errors.Wrap(err, "create bootstrap tar.gz file")
}
defer bootstrapTarGz.Close()
digester := digest.SHA256.Digester()
gzWriter := gzip.NewWriter(io.MultiWriter(bootstrapTarGz, digester.Hash()))
if _, err := io.Copy(gzWriter, bootstrapTar); err != nil {
return errors.Wrap(err, "compress bootstrap tar to tar.gz")
}
if err := gzWriter.Close(); err != nil {
return errors.Wrap(err, "close gzip writer")
}
ra, err := local.OpenReader(bootstrapTarGzPath)
if err != nil {
return errors.Wrap(err, "open reader for upper blob")
}
defer ra.Close()
commitBlobs := []string{}
for idx := range mountBlobs {
mountBlob := mountBlobs[idx]
commitBlobs = append(commitBlobs, mountBlob.Desc.Digest.String())
}
commitBlobs = append(commitBlobs, upperBlob.Desc.Digest.String())
bootstrapDesc := ocispec.Descriptor{
Digest: digester.Digest(),
Size: ra.Size(),
MediaType: ocispec.MediaTypeImageLayerGzip,
Annotations: map[string]string{
converter.LayerAnnotationFSVersion: fsversion,
converter.LayerAnnotationNydusBootstrap: "true",
utils.LayerAnnotationNydusCommitBlobs: strings.Join(commitBlobs, ","),
},
}
bootstrapRc, err := os.Open(bootstrapTarGzPath)
if err != nil {
return errors.Wrapf(err, "open bootstrap %s", bootstrapTarGzPath)
}
defer bootstrapRc.Close()
if err := remoter.Push(ctx, bootstrapDesc, true, bootstrapRc); err != nil {
return errors.Wrap(err, "push bootstrap layer")
}
// Push image manifest
layers := lowerBlobLayers
for idx := range mountBlobs {
mountBlob := mountBlobs[idx]
layers = append(layers, mountBlob.Desc)
}
layers = append(layers, upperBlob.Desc)
layers = append(layers, bootstrapDesc)
nydusImage.Manifest.Config = *configDesc
nydusImage.Manifest.Layers = layers
manifestBytes, manifestDesc, err := cm.makeDesc(nydusImage.Manifest, nydusImage.Desc)
if err != nil {
return errors.Wrap(err, "make config desc")
}
if err := remoter.Push(ctx, *manifestDesc, false, bytes.NewReader(manifestBytes)); err != nil {
return errors.Wrap(err, "push image manifest")
}
return nil
}
func (cm *Committer) makeDesc(x interface{}, oldDesc ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
data, err := json.MarshalIndent(x, "", " ")
if err != nil {
return nil, nil, errors.Wrap(err, "json marshal")
}
dgst := digest.SHA256.FromBytes(data)
newDesc := oldDesc
newDesc.Size = int64(len(data))
newDesc.Digest = dgst
return data, &newDesc, nil
}
func (cm *Committer) commitMountByNSEnter(ctx context.Context, containerPid int, sourceDir, name, fsversion, compressor string) (*digest.Digest, error) {
logrus.Infof("committing mount: %s", sourceDir)
start := time.Now()
blobPath := filepath.Join(cm.workDir, name)
blob, err := os.Create(blobPath)
if err != nil {
return nil, errors.Wrap(err, "create mount blob file")
}
defer blob.Close()
digester := digest.SHA256.Digester()
counter := Counter{}
tarWc, err := converter.Pack(ctx, io.MultiWriter(blob, &counter, digester.Hash()), converter.PackOption{
WorkDir: cm.workDir,
FsVersion: fsversion,
Compressor: compressor,
BuilderPath: cm.builder,
})
if err != nil {
return nil, errors.Wrap(err, "initialize pack to blob")
}
if err := copyFromContainer(ctx, containerPid, sourceDir, tarWc); err != nil {
return nil, errors.Wrapf(err, "copy %s from pid %d", sourceDir, containerPid)
}
if err := tarWc.Close(); err != nil {
return nil, errors.Wrap(err, "pack to blob")
}
mountBlobDigest := digester.Digest()
logrus.Infof("committed mount: %s, size: %s, elapsed %s", sourceDir, humanize.Bytes(uint64(counter.Size())), time.Since(start))
return &mountBlobDigest, nil
}
func (cm *Committer) mergeBootstrap(
ctx context.Context, upperBlob Blob, mountBlobs []Blob, baseBootstrapName, mergedBootstrapName string,
) ([]digest.Digest, *digest.Digest, error) {
baseBootstrap := filepath.Join(cm.workDir, baseBootstrapName)
upperBlobRa, err := local.OpenReader(filepath.Join(cm.workDir, upperBlob.Name))
if err != nil {
return nil, nil, errors.Wrap(err, "open reader for upper blob")
}
mergedBootstrap := filepath.Join(cm.workDir, mergedBootstrapName)
bootstrap, err := os.Create(mergedBootstrap)
if err != nil {
return nil, nil, errors.Wrap(err, "create upper blob file")
}
defer bootstrap.Close()
digester := digest.SHA256.Digester()
writer := io.MultiWriter(bootstrap, digester.Hash())
layers := []converter.Layer{}
layers = append(layers, converter.Layer{
Digest: upperBlob.Desc.Digest,
ReaderAt: upperBlobRa,
})
for idx := range mountBlobs {
mountBlob := mountBlobs[idx]
mountBlobRa, err := local.OpenReader(filepath.Join(cm.workDir, mountBlob.Name))
if err != nil {
return nil, nil, errors.Wrap(err, "open reader for mount blob")
}
layers = append(layers, converter.Layer{
Digest: mountBlob.Desc.Digest,
ReaderAt: mountBlobRa,
})
}
blobDigests, err := converter.Merge(ctx, layers, writer, converter.MergeOption{
WorkDir: cm.workDir,
ParentBootstrapPath: baseBootstrap,
WithTar: true,
BuilderPath: cm.builder,
})
if err != nil {
return nil, nil, errors.Wrap(err, "merge bootstraps")
}
bootstrapDiffID := digester.Digest()
return blobDigests, &bootstrapDiffID, nil
}
func copyFromContainer(ctx context.Context, containerPid int, source string, target io.Writer) error {
config := &Config{
Mount: true,
Target: containerPid,
}
stderr, err := config.ExecuteContext(ctx, target, "tar", "--xattrs", "--ignore-failed-read", "--absolute-names", "-cf", "-", source)
if err != nil {
return errors.Wrapf(err, "execute tar: %s", strings.TrimSpace(stderr))
}
if stderr != "" {
logrus.Warnf("from container: %s", stderr)
}
return nil
}
type MountList struct {
mutex sync.Mutex
paths []string
}
func NewMountList() *MountList {
return &MountList{
paths: make([]string, 0),
}
}
func (ml *MountList) Add(path string) {
ml.mutex.Lock()
defer ml.mutex.Unlock()
ml.paths = append(ml.paths, path)
}
type Blob struct {
Name string
BootstrapName string
Desc ocispec.Descriptor
}
func withRetry(handle func() error, total int) error {
for {
total--
err := handle()
if err == nil {
return nil
}
if total > 0 {
logrus.WithError(err).Warnf("retry (remain %d times)", total)
continue
}
return err
}
}
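// Example (illustrative; doCommit is a hypothetical operation): attempt an
// idempotent operation up to 3 times in total:
//
//	if err := withRetry(func() error {
//		return doCommit()
//	}, 3); err != nil {
//		// all attempts failed
//	}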
// ValidateRef validates the target image reference.
func ValidateRef(ref string) (string, error) {
named, err := docker.ParseDockerRef(ref)
if err != nil {
return "", errors.Wrapf(err, "invalid image reference: %s", ref)
}
if _, ok := named.(docker.Digested); ok {
return "", fmt.Errorf("unsupported digested image reference: %s", ref)
}
named = docker.TagNameOnly(named)
return named.String(), nil
}
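// Examples (illustrative): ValidateRef("ubuntu") normalizes to
// "docker.io/library/ubuntu:latest", while a digested reference such as
// "ubuntu@sha256:..." is rejected as unsupported.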
type outputJSON struct {
FsVersion string `json:"fs_version"`
Compressor string `json:"compressor"`
}
func (cm *Committer) obtainBootStrapInfo(ctx context.Context, bootstrapName string) (string, string, error) {
targetBootstrapPath := filepath.Join(cm.workDir, bootstrapName)
outputJSONPath := filepath.Join(cm.workDir, "output.json")
defer os.Remove(outputJSONPath)
args := []string{
"check",
"--log-level",
"warn",
"--bootstrap",
targetBootstrapPath,
"--output-json",
outputJSONPath,
}
logrus.Debugf("\tCommand: %s", args)
cmd := exec.CommandContext(ctx, cm.builder, args...)
if err := cmd.Run(); err != nil {
return "", "", errors.Wrap(err, "run merge command")
}
outputBytes, err := os.ReadFile(outputJSONPath)
if err != nil {
return "", "", errors.Wrapf(err, "read file %s", outputJSONPath)
}
var output outputJSON
err = json.Unmarshal(outputBytes, &output)
if err != nil {
return "", "", errors.Wrapf(err, "unmarshal output json file %s", outputJSONPath)
}
return output.FsVersion, strings.ToLower(output.Compressor), nil
}
// resolveContainerID resolves the container ID to its full ID
func (cm *Committer) resolveContainerID(ctx context.Context, opt *Opt) error {
// If the ID is already a full 64-character ID, use it as-is
if len(opt.ContainerID) == 64 {
logrus.Debugf("container ID %s is already a full ID", opt.ContainerID)
return nil
}
logrus.Infof("resolving container ID prefix %s to full ID", opt.ContainerID)
var (
fullID string
matchCount int
)
// Create containerd client directly
client, err := containerd.New(cm.manager.address)
if err != nil {
return fmt.Errorf("failed to create containerd client: %w", err)
}
defer client.Close()
// Set namespace in context
ctx = namespaces.WithNamespace(ctx, opt.Namespace)
walker := NewContainerWalker(client, func(_ context.Context, found Found) error {
fullID = found.Container.ID()
matchCount = found.MatchCount
return nil
})
n, err := walker.Walk(ctx, opt.ContainerID)
if err != nil {
return fmt.Errorf("failed to walk containers: %w", err)
}
if n == 0 {
return fmt.Errorf("no container found with ID : %s", opt.ContainerID)
}
if matchCount > 1 {
return fmt.Errorf("ambiguous container ID '%s' matches multiple containers, please provide a more specific ID", opt.ContainerID)
}
opt.ContainerID = fullID
logrus.Infof("resolved container ID to full ID: %s", fullID)
return nil
}


@ -0,0 +1,70 @@
// Ported from nerdctl project, copyright The nerdctl Authors.
// https://github.com/containerd/nerdctl/blob/31b4e49db76382567eea223a7e8562e0213ef05f/pkg/idutil/containerwalker/containerwalker.go#L53
package committer
import (
"context"
"fmt"
"regexp"
"strings"
"github.com/containerd/containerd"
"github.com/sirupsen/logrus"
)
type Found struct {
Container containerd.Container
Req string // The raw request string. name, short ID, or long ID.
MatchIndex int // Begins with 0, up to MatchCount - 1.
MatchCount int // 1 on exact match. > 1 on ambiguous match. Never be <= 0.
}
type OnFound func(ctx context.Context, found Found) error
type ContainerWalker struct {
Client *containerd.Client
OnFound OnFound
}
func NewContainerWalker(client *containerd.Client, onFound OnFound) *ContainerWalker {
return &ContainerWalker{
Client: client,
OnFound: onFound,
}
}
// Walk walks containers and calls w.OnFound.
// Req is name, short ID, or long ID.
// Returns the number of found entries.
func (w *ContainerWalker) Walk(ctx context.Context, req string) (int, error) {
logrus.Debugf("walking containers with request: %s", req)
if strings.HasPrefix(req, "k8s://") {
return -1, fmt.Errorf("specifying \"k8s://...\" form is not supported (Hint: specify ID instead): %q", req)
}
filters := []string{
fmt.Sprintf("id~=^%s.*$", regexp.QuoteMeta(req)),
}
containers, err := w.Client.Containers(ctx, filters...)
if err != nil {
return -1, err
}
matchCount := len(containers)
for i, c := range containers {
logrus.Debugf("found match for container ID: %s", c.ID())
f := Found{
Container: c,
Req: req,
MatchIndex: i,
MatchCount: matchCount,
}
if e := w.OnFound(ctx, f); e != nil {
return -1, e
}
}
return matchCount, nil
}
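// Usage sketch (illustrative):
//
//	walker := NewContainerWalker(client, func(_ context.Context, found Found) error {
//		logrus.Infof("matched %s (%d of %d)", found.Container.ID(), found.MatchIndex+1, found.MatchCount)
//		return nil
//	})
//	n, err := walker.Walk(ctx, "abc1") // name, short ID, or long ID prefix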


@ -0,0 +1,317 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package archive
import (
"archive/tar"
"context"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"sync"
"time"
"github.com/containerd/continuity/fs"
)
var bufPool = &sync.Pool{
New: func() interface{} {
buffer := make([]byte, 32*1024)
return &buffer
},
}
const (
// whiteoutPrefix prefix means file is a whiteout. If this is followed by a
// filename this means that file has been removed from the base layer.
// See https://github.com/opencontainers/image-spec/blob/main/layer.md#whiteouts
whiteoutPrefix = ".wh."
paxSchilyXattr = "SCHILY.xattr."
)
// ChangeWriter provides tar stream from filesystem change information.
// The provided tar stream is styled as an OCI layer. Change information
// (add/modify/delete/unmodified) for each file needs to be passed to this
// writer through HandleChange method.
//
// This should be used combining with continuity's diff computing functionality
// (e.g. `fs.Change` of github.com/containerd/continuity/fs).
//
// See also https://github.com/opencontainers/image-spec/blob/main/layer.md for details
// about OCI layers
type ChangeWriter struct {
tw *tar.Writer
source string
modTimeUpperBound *time.Time
whiteoutT time.Time
inodeSrc map[uint64]string
inodeRefs map[uint64][]string
addedDirs map[string]struct{}
}
// ChangeWriterOpt can be specified in NewChangeWriter.
type ChangeWriterOpt func(cw *ChangeWriter)
// NewChangeWriter returns ChangeWriter that writes tar stream of the source directory
// to the provided writer. Change information (add/modify/delete/unmodified) for each
// file needs to be passed through HandleChange method.
func NewChangeWriter(w io.Writer, source string, opts ...ChangeWriterOpt) *ChangeWriter {
cw := &ChangeWriter{
tw: tar.NewWriter(w),
source: source,
whiteoutT: time.Now(), // can be overridden with WithWhiteoutTime(time.Time) ChangeWriterOpt .
inodeSrc: map[uint64]string{},
inodeRefs: map[uint64][]string{},
addedDirs: map[string]struct{}{},
}
for _, o := range opts {
o(cw)
}
return cw
}
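// Typical wiring (sketch): combine with continuity's diff computation and
// close the writer to flush the tar stream:
//
//	cw := NewChangeWriter(w, upperViewRoot)
//	if err := fs.Changes(ctx, baseRoot, upperViewRoot, cw.HandleChange); err != nil {
//		// handle error
//	}
//	if err := cw.Close(); err != nil {
//		// handle error
//	}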
// HandleChange receives filesystem change information and reflect that information to
// the result tar stream. This function implements `fs.ChangeFunc` of continuity
// (github.com/containerd/continuity/fs) and should be used with that package.
func (cw *ChangeWriter) HandleChange(k fs.ChangeKind, p string, f os.FileInfo, err error) error {
if err != nil {
return err
}
if k == fs.ChangeKindDelete {
whiteOutDir := filepath.Dir(p)
whiteOutBase := filepath.Base(p)
whiteOut := filepath.Join(whiteOutDir, whiteoutPrefix+whiteOutBase)
hdr := &tar.Header{
Typeflag: tar.TypeReg,
Name: whiteOut[1:],
Size: 0,
ModTime: cw.whiteoutT,
AccessTime: cw.whiteoutT,
ChangeTime: cw.whiteoutT,
}
if err := cw.includeParents(hdr); err != nil {
return err
}
if err := cw.tw.WriteHeader(hdr); err != nil {
return fmt.Errorf("failed to write whiteout header: %w", err)
}
} else {
var (
link string
err error
source = filepath.Join(cw.source, p)
)
switch {
case f.Mode()&os.ModeSocket != 0:
return nil // ignore sockets
case f.Mode()&os.ModeSymlink != 0:
if link, err = os.Readlink(source); err != nil {
return err
}
}
hdr, err := tar.FileInfoHeader(f, link)
if err != nil {
return err
}
hdr.Mode = int64(chmodTarEntry(os.FileMode(hdr.Mode)))
// Truncate timestamps for compatibility; without PAX the stdlib rounds timestamps instead
hdr.Format = tar.FormatPAX
if cw.modTimeUpperBound != nil && hdr.ModTime.After(*cw.modTimeUpperBound) {
hdr.ModTime = *cw.modTimeUpperBound
}
hdr.ModTime = hdr.ModTime.Truncate(time.Second)
hdr.AccessTime = time.Time{}
hdr.ChangeTime = time.Time{}
name := p
if strings.HasPrefix(name, string(filepath.Separator)) {
name, err = filepath.Rel(string(filepath.Separator), name)
if err != nil {
return fmt.Errorf("failed to make path relative: %w", err)
}
}
// Canonicalize to POSIX-style paths using forward slashes. Directory
// entries must end with a slash.
name = filepath.ToSlash(name)
if f.IsDir() && !strings.HasSuffix(name, "/") {
name += "/"
}
hdr.Name = name
if err := setHeaderForSpecialDevice(hdr, name, f); err != nil {
return fmt.Errorf("failed to set device headers: %w", err)
}
// additionalLinks stores file names which must be linked to
// this file when this file is added
var additionalLinks []string
inode, isHardlink := fs.GetLinkInfo(f)
if isHardlink {
// If the inode has a source, always link to it
if source, ok := cw.inodeSrc[inode]; ok {
hdr.Typeflag = tar.TypeLink
hdr.Linkname = source
hdr.Size = 0
} else {
if k == fs.ChangeKindUnmodified {
cw.inodeRefs[inode] = append(cw.inodeRefs[inode], name)
return nil
}
cw.inodeSrc[inode] = name
additionalLinks = cw.inodeRefs[inode]
delete(cw.inodeRefs, inode)
}
} else if k == fs.ChangeKindUnmodified {
// Nothing to write to diff
return nil
}
if capability, err := getxattr(source, "security.capability"); err != nil {
return fmt.Errorf("failed to get capabilities xattr: %w", err)
} else if len(capability) > 0 {
if hdr.PAXRecords == nil {
hdr.PAXRecords = map[string]string{}
}
hdr.PAXRecords[paxSchilyXattr+"security.capability"] = string(capability)
}
if err := cw.includeParents(hdr); err != nil {
return err
}
if err := cw.tw.WriteHeader(hdr); err != nil {
return fmt.Errorf("failed to write file header: %w", err)
}
if hdr.Typeflag == tar.TypeReg && hdr.Size > 0 {
file, err := open(source)
if err != nil {
return fmt.Errorf("failed to open path: %v: %w", source, err)
}
defer file.Close()
// HACK (imeoer): display file path in error message.
n, err := copyBuffered(context.TODO(), cw.tw, file)
if err != nil {
return fmt.Errorf("failed to copy file %s: %w", p, err)
}
if n != hdr.Size {
return fmt.Errorf("short write copying file: %s", p)
}
}
if additionalLinks != nil {
source = hdr.Name
for _, extra := range additionalLinks {
hdr.Name = extra
hdr.Typeflag = tar.TypeLink
hdr.Linkname = source
hdr.Size = 0
if err := cw.includeParents(hdr); err != nil {
return err
}
if err := cw.tw.WriteHeader(hdr); err != nil {
return fmt.Errorf("failed to write file header: %w", err)
}
}
}
}
return nil
}
// Close closes this writer.
func (cw *ChangeWriter) Close() error {
if err := cw.tw.Close(); err != nil {
return fmt.Errorf("failed to close tar writer: %w", err)
}
return nil
}
func (cw *ChangeWriter) includeParents(hdr *tar.Header) error {
if cw.addedDirs == nil {
return nil
}
name := strings.TrimRight(hdr.Name, "/")
fname := filepath.Join(cw.source, name)
parent := filepath.Dir(name)
pname := filepath.Join(cw.source, parent)
// Do not include root directory as parent
if fname != cw.source && pname != cw.source {
_, ok := cw.addedDirs[parent]
if !ok {
cw.addedDirs[parent] = struct{}{}
fi, err := os.Stat(pname)
if err != nil {
return err
}
if err := cw.HandleChange(fs.ChangeKindModify, parent, fi, nil); err != nil {
return err
}
}
}
if hdr.Typeflag == tar.TypeDir {
cw.addedDirs[name] = struct{}{}
}
return nil
}
func copyBuffered(ctx context.Context, dst io.Writer, src io.Reader) (written int64, err error) {
buf := bufPool.Get().(*[]byte)
defer bufPool.Put(buf)
for {
select {
case <-ctx.Done():
err = ctx.Err()
return
default:
}
nr, er := src.Read(*buf)
if nr > 0 {
nw, ew := dst.Write((*buf)[0:nr])
if nw > 0 {
written += int64(nw)
}
if ew != nil {
err = ew
break
}
if nr != nw {
err = io.ErrShortWrite
break
}
}
if er != nil {
if er != io.EOF {
err = er
}
break
}
}
return written, err
}


@ -0,0 +1,80 @@
//go:build !windows
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package archive
import (
"archive/tar"
"errors"
"os"
"runtime"
"syscall"
"github.com/containerd/continuity/sysx"
"golang.org/x/sys/unix"
)
func chmodTarEntry(perm os.FileMode) os.FileMode {
return perm
}
func setHeaderForSpecialDevice(hdr *tar.Header, _ string, fi os.FileInfo) error {
// Devmajor and Devminor are only needed for special devices.
// In FreeBSD, RDev for regular files is -1 (unless overridden by FS):
// https://cgit.freebsd.org/src/tree/sys/kern/vfs_default.c?h=stable/13#n1531
// (NODEV is -1: https://cgit.freebsd.org/src/tree/sys/sys/param.h?h=stable/13#n241).
// ZFS in particular does not override the default:
// https://cgit.freebsd.org/src/tree/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c?h=stable/13#n2027
// Since `Stat_t.Rdev` is uint64, the cast turns -1 into (2^64 - 1).
// Such large values cannot be encoded in a tar header.
if runtime.GOOS == "freebsd" && hdr.Typeflag != tar.TypeBlock && hdr.Typeflag != tar.TypeChar {
return nil
}
s, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
return errors.New("unsupported stat type")
}
rdev := uint64(s.Rdev) //nolint:nolintlint,unconvert // rdev is int32 on darwin/bsd, int64 on linux/solaris
// Currently go does not fill in the major/minors
if s.Mode&syscall.S_IFBLK != 0 ||
s.Mode&syscall.S_IFCHR != 0 {
hdr.Devmajor = int64(unix.Major(rdev))
hdr.Devminor = int64(unix.Minor(rdev))
}
return nil
}
func open(p string) (*os.File, error) {
return os.Open(p)
}
func getxattr(path, attr string) ([]byte, error) {
b, err := sysx.LGetxattr(path, attr)
if err == unix.ENOTSUP || err == sysx.ENODATA {
return nil, nil
}
return b, err
}


@ -0,0 +1,114 @@
// Ported from buildkit project, copyright The buildkit Authors.
// https://github.com/moby/buildkit
package diff
import (
"context"
"fmt"
"io"
"os"
"strings"
"github.com/containerd/containerd/mount"
"github.com/moby/buildkit/util/overlay"
"github.com/pkg/errors"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/committer/diff/archive"
)
func overlaySupportIndex() bool {
if _, err := os.Stat("/sys/module/overlay/parameters/index"); err == nil {
return true
}
return false
}
// Ported from github.com/moby/buildkit/util/overlay/overlay_linux.go
// Modified to adjust overlayfs temp mount handling.
//
// WriteUpperdir writes a layer tar archive into the specified writer, based on
// the diff information stored in the upperdir.
func writeUpperdir(ctx context.Context, appendMount func(path string), withPaths []string, withoutPaths []string, w io.Writer, upperdir string, lower []mount.Mount) error {
emptyLower, err := os.MkdirTemp("", "buildkit") // empty directory used for the lower of diff view
if err != nil {
return errors.Wrapf(err, "failed to create temp dir")
}
defer os.Remove(emptyLower)
options := []string{
fmt.Sprintf("lowerdir=%s", strings.Join([]string{upperdir, emptyLower}, ":")),
}
if overlaySupportIndex() {
options = append(options, "index=off")
}
upperView := []mount.Mount{
{
Type: "overlay",
Source: "overlay",
Options: options,
},
}
return mount.WithTempMount(ctx, lower, func(lowerRoot string) error {
return mount.WithTempMount(ctx, upperView, func(upperViewRoot string) error {
cw := archive.NewChangeWriter(&cancellableWriter{ctx, w}, upperViewRoot)
if err := Changes(ctx, appendMount, withPaths, withoutPaths, cw.HandleChange, upperdir, upperViewRoot, lowerRoot); err != nil {
if err2 := cw.Close(); err2 != nil {
return errors.Wrapf(err, "failed to record upperdir changes (close error: %v)", err2)
}
return errors.Wrapf(err, "failed to record upperdir changes")
}
return cw.Close()
})
})
}
func Diff(ctx context.Context, appendMount func(path string), withPaths []string, withoutPaths []string, writer io.Writer, lowerDirs, upperDir string) error {
emptyLower, err := os.MkdirTemp("", "nydus-cli-diff")
if err != nil {
return errors.Wrapf(err, "create temp dir")
}
defer os.Remove(emptyLower)
lowerDirs += fmt.Sprintf(":%s", emptyLower)
options := []string{
fmt.Sprintf("lowerdir=%s", lowerDirs),
}
if overlaySupportIndex() {
options = append(options, "index=off")
}
lower := []mount.Mount{
{
Type: "overlay",
Source: "overlay",
Options: options,
},
}
options = []string{
fmt.Sprintf("lowerdir=%s:%s", upperDir, lowerDirs),
}
if overlaySupportIndex() {
options = append(options, "index=off")
}
upper := []mount.Mount{
{
Type: "overlay",
Source: "overlay",
Options: options,
},
}
upperDir, err = overlay.GetUpperdir(lower, upper)
if err != nil {
return errors.Wrap(err, "get upper dir")
}
if err = writeUpperdir(ctx, appendMount, withPaths, withoutPaths, &cancellableWriter{ctx, writer}, upperDir, lower); err != nil {
return errors.Wrap(err, "write diff")
}
return nil
}
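// For example (illustrative): with lowerDirs "/l2:/l1" and upperDir "/u", the
// "lower" view mounts "lowerdir=/l2:/l1:<empty>" and the "upper" view mounts
// "lowerdir=/u:/l2:/l1:<empty>", so overlay.GetUpperdir recovers "/u" as the
// directory holding the diff to be archived.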


@ -0,0 +1,459 @@
// Ported from buildkit project, copyright The buildkit Authors.
// https://github.com/moby/buildkit
package diff
import (
"bytes"
"context"
"io"
"os"
"path/filepath"
"strings"
"sync"
"syscall"
"github.com/containerd/containerd/mount"
"github.com/containerd/continuity/devices"
"github.com/containerd/continuity/fs"
"github.com/containerd/continuity/sysx"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
)
// GetUpperdir parses the passed mounts and identifies the directory
// that contains diff between upper and lower.
func GetUpperdir(lower, upper []mount.Mount) (string, error) {
var upperdir string
if len(lower) == 0 && len(upper) == 1 { // upper is the bottommost snapshot
// Get layer directories of upper snapshot
upperM := upper[0]
if upperM.Type != "bind" {
return "", errors.Errorf("bottommost upper must be bind mount but %q", upperM.Type)
}
upperdir = upperM.Source
} else if len(lower) == 1 && len(upper) == 1 {
// Get layer directories of lower snapshot
var lowerlayers []string
lowerM := lower[0]
switch lowerM.Type {
case "bind":
// lower snapshot is a bind mount of one layer
lowerlayers = []string{lowerM.Source}
case "overlay":
// lower snapshot is an overlay mount of multiple layers
var err error
lowerlayers, err = GetOverlayLayers(lowerM)
if err != nil {
return "", err
}
default:
return "", errors.Errorf("cannot get layer information from mount option (type = %q)", lowerM.Type)
}
// Get layer directories of upper snapshot
upperM := upper[0]
if upperM.Type != "overlay" {
return "", errors.Errorf("upper snapshot isn't overlay mounted (type = %q)", upperM.Type)
}
upperlayers, err := GetOverlayLayers(upperM)
if err != nil {
return "", err
}
// Check if the diff directory can be determined
if len(upperlayers) != len(lowerlayers)+1 {
return "", errors.Errorf("cannot determine diff of more than one upper directories")
}
for i := 0; i < len(lowerlayers); i++ {
if upperlayers[i] != lowerlayers[i] {
return "", errors.Errorf("layer %d must be common between upper and lower snapshots", i)
}
}
upperdir = upperlayers[len(upperlayers)-1] // get the topmost layer that indicates diff
} else {
return "", errors.Errorf("multiple mount configurations are not supported")
}
if upperdir == "" {
return "", errors.Errorf("cannot determine upperdir from mount option")
}
return upperdir, nil
}
// GetOverlayLayers returns all layer directories of an overlayfs mount.
func GetOverlayLayers(m mount.Mount) ([]string, error) {
var u string
var uFound bool
var l []string // l[0] = bottommost
for _, o := range m.Options {
if strings.HasPrefix(o, "upperdir=") {
u, uFound = strings.TrimPrefix(o, "upperdir="), true
} else if strings.HasPrefix(o, "lowerdir=") {
l = strings.Split(strings.TrimPrefix(o, "lowerdir="), ":")
for i, j := 0, len(l)-1; i < j; i, j = i+1, j-1 {
l[i], l[j] = l[j], l[i] // make l[0] = bottommost
}
} else if strings.HasPrefix(o, "workdir=") || o == "index=off" || o == "userxattr" || strings.HasPrefix(o, "redirect_dir=") {
// these options may be specified by the snapshotter but do not indicate dir locations.
continue
} else {
// encountering an unknown option. return error and fallback to walking differ
// to avoid unexpected diff.
return nil, errors.Errorf("unknown option %q specified by snapshotter", o)
}
}
if uFound {
return append(l, u), nil
}
return l, nil
}
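// For example (illustrative): for an overlay mount with options
// "lowerdir=/l2:/l1", "upperdir=/u", "workdir=/w", this returns
// ["/l1", "/l2", "/u"] (bottommost first, upperdir last).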
type cancellableWriter struct {
ctx context.Context
w io.Writer
}
func (w *cancellableWriter) Write(p []byte) (int, error) {
if err := w.ctx.Err(); err != nil {
return 0, err
}
return w.w.Write(p)
}
// Changes is continuity's `fs.Changes`-like method but leverages overlayfs's
// "upperdir" for computing the diff. "upperdirView" is an overlayfs-mounted
// view of the upperdir that doesn't contain whiteouts. This is used for
// computing changes under opaque directories.
func Changes(ctx context.Context, appendMount func(path string), withPaths []string, withoutPaths []string, changeFn fs.ChangeFunc, upperdir, upperdirView, base string) error {
err := filepath.Walk(upperdir, func(path string, f os.FileInfo, err error) error {
if err != nil {
return err
}
if ctx.Err() != nil {
return ctx.Err()
}
// Rebase path
path, err = filepath.Rel(upperdir, path)
if err != nil {
return err
}
path = filepath.Join(string(os.PathSeparator), path)
// Skip root
if path == string(os.PathSeparator) {
return nil
}
// Skip filtered path
for _, filtered := range withoutPaths {
if path == filtered || strings.HasPrefix(path, filtered+"/") {
return nil
}
}
// Check redirect
if redirect, err := checkRedirect(upperdir, path, f); err != nil {
return err
} else if redirect {
// redirect_dir is enabled here but not supported by this overlayfs differ,
// which could produce a wrong diff; record the path so it is committed as a
// separate mount blob instead.
// TODO: support redirect_dir
logrus.Warnf(
"[need append] redirect_dir is used but it's not supported in overlayfs differ: %s",
filepath.Join(upperdir, path),
)
appendMount(path)
return nil
}
// Check if this is a deleted entry
isDelete, skip, err := checkDelete(upperdir, path, base, f)
if err != nil {
return err
} else if skip {
return nil
}
var kind fs.ChangeKind
var skipRecord bool
if isDelete {
// This is a deleted entry.
kind = fs.ChangeKindDelete
// Leave f set to the FileInfo for the whiteout device in case the caller wants it, e.g.
// the merge code uses it to hardlink in the whiteout device to merged snapshots
} else if baseF, err := os.Lstat(filepath.Join(base, path)); err == nil {
// File exists in the base layer. Thus this is modified.
kind = fs.ChangeKindModify
// Avoid including a directory that hasn't been modified. If /foo/bar/baz is
// modified, /foo would otherwise appear here merely because it is an ancestor of baz.
if same, err := sameDirent(baseF, f, filepath.Join(base, path), filepath.Join(upperdirView, path)); same {
skipRecord = true // Both are the same, don't record the change
} else if err != nil {
return err
}
} else if os.IsNotExist(err) || errors.Is(err, unix.ENOTDIR) {
// File doesn't exist in the base layer. Thus this is added.
kind = fs.ChangeKindAdd
} else if err != nil {
return errors.Wrap(err, "failed to stat base file during overlay diff")
}
if !skipRecord {
if err := changeFn(kind, path, f, nil); err != nil {
return err
}
}
if f != nil {
if isOpaque, err := checkOpaque(upperdir, path, base, f); err != nil {
return err
} else if isOpaque {
// This is an opaque directory. Start a new walking differ to get adds/deletes of
// this directory. We use "upperdirView" directory which doesn't contain whiteouts.
if err := fs.Changes(ctx, filepath.Join(base, path), filepath.Join(upperdirView, path),
func(k fs.ChangeKind, p string, f os.FileInfo, err error) error {
return changeFn(k, filepath.Join(path, p), f, err) // rebase path to be based on the opaque dir
},
); err != nil {
return err
}
return filepath.SkipDir // We completed this directory. Do not walk files under this directory anymore.
}
}
return nil
})
if err != nil {
return err
}
// Remove lower files, these files will be re-added on committing mount process.
for _, withPath := range withPaths {
if err := changeFn(fs.ChangeKindDelete, withPath, nil, nil); err != nil {
return errors.Wrapf(err, "handle deleted with path: %s", withPath)
}
}
return err
}
// checkDelete checks if the specified file is a whiteout
func checkDelete(_ string, path string, base string, f os.FileInfo) (isDelete, skip bool, _ error) {
if f.Mode()&os.ModeCharDevice != 0 {
if _, ok := f.Sys().(*syscall.Stat_t); ok {
maj, minor, err := devices.DeviceInfo(f)
if err != nil {
return false, false, errors.Wrapf(err, "failed to get device info")
}
if maj == 0 && minor == 0 {
// This file is a whiteout (char 0/0) that indicates this is deleted from the base
if _, err := os.Lstat(filepath.Join(base, path)); err != nil {
if !os.IsNotExist(err) {
return false, false, errors.Wrapf(err, "failed to lstat")
}
// This file doesn't exist even in the base dir.
// We don't need whiteout. Just skip this file.
return false, true, nil
}
return true, false, nil
}
}
}
return false, false, nil
}
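// For example (illustrative): deleting /etc/foo in a container backed by
// overlayfs leaves a character device with rdev 0/0 at <upperdir>/etc/foo;
// checkDelete detects it and the diff records a whiteout entry for /etc/foo.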
// checkOpaque checks if the specified directory is an opaque directory
func checkOpaque(upperdir string, path string, base string, f os.FileInfo) (isOpaque bool, _ error) {
if f.IsDir() {
for _, oKey := range []string{"trusted.overlay.opaque", "user.overlay.opaque"} {
opaque, err := sysx.LGetxattr(filepath.Join(upperdir, path), oKey)
if err != nil && err != unix.ENODATA {
return false, errors.Wrapf(err, "failed to retrieve %s attr", oKey)
} else if len(opaque) == 1 && opaque[0] == 'y' {
// This is an opaque whiteout directory.
if _, err := os.Lstat(filepath.Join(base, path)); err != nil {
if !os.IsNotExist(err) {
return false, errors.Wrapf(err, "failed to lstat")
}
// This file doesn't exist even in the base dir. We don't need to treat this as an opaque.
return false, nil
}
return true, nil
}
}
}
return false, nil
}
// checkRedirect checks if the specified path enables redirect_dir.
func checkRedirect(upperdir string, path string, f os.FileInfo) (bool, error) {
if f.IsDir() {
rKey := "trusted.overlay.redirect"
redirect, err := sysx.LGetxattr(filepath.Join(upperdir, path), rKey)
if err != nil && err != unix.ENODATA {
return false, errors.Wrapf(err, "failed to retrieve %s attr", rKey)
}
return len(redirect) > 0, nil
}
return false, nil
}
// sameDirent performs continuity-compatible comparison of files and directories.
// https://github.com/containerd/continuity/blob/v0.1.0/fs/path.go#L91-L133
// This will only do a slow content comparison of two files if they have all the
// same metadata and both have truncated nanosecond mtime timestamps. In practice,
// this can only happen if both the base file in the lowerdirs has a truncated
// timestamp (i.e. was unpacked from a tar) and the user did something like
// "mv foo tmp && mv tmp foo" that results in the file being copied up to the
// upperdir without making any changes to it. This is much rarer than similar
// cases in the double-walking differ, where the slow content comparison will
// be used whenever a file with a truncated timestamp is in the lowerdir at
// all and left unmodified.
func sameDirent(f1, f2 os.FileInfo, f1fullPath, f2fullPath string) (bool, error) {
if os.SameFile(f1, f2) {
return true, nil
}
equalStat, err := compareSysStat(f1.Sys(), f2.Sys())
if err != nil || !equalStat {
return equalStat, err
}
if eq, err := compareCapabilities(f1fullPath, f2fullPath); err != nil || !eq {
return eq, err
}
if !f1.IsDir() {
if f1.Size() != f2.Size() {
return false, nil
}
t1 := f1.ModTime()
t2 := f2.ModTime()
if t1.Unix() != t2.Unix() {
return false, nil
}
// If the timestamp may have been truncated in both of the
// files, check content of file to determine difference
if t1.Nanosecond() == 0 && t2.Nanosecond() == 0 {
if (f1.Mode() & os.ModeSymlink) == os.ModeSymlink {
return compareSymlinkTarget(f1fullPath, f2fullPath)
}
if f1.Size() == 0 {
return true, nil
}
return compareFileContent(f1fullPath, f2fullPath)
} else if t1.Nanosecond() != t2.Nanosecond() {
return false, nil
}
}
return true, nil
}
// Ported from continuity project
// https://github.com/containerd/continuity/blob/v0.1.0/fs/diff_unix.go#L43-L54
// Copyright The containerd Authors.
func compareSysStat(s1, s2 interface{}) (bool, error) {
ls1, ok := s1.(*syscall.Stat_t)
if !ok {
return false, nil
}
ls2, ok := s2.(*syscall.Stat_t)
if !ok {
return false, nil
}
return ls1.Mode == ls2.Mode && ls1.Uid == ls2.Uid && ls1.Gid == ls2.Gid && ls1.Rdev == ls2.Rdev, nil
}
// Ported from continuity project
// https://github.com/containerd/continuity/blob/v0.1.0/fs/diff_unix.go#L56-L66
// Copyright The containerd Authors.
func compareCapabilities(p1, p2 string) (bool, error) {
c1, err := sysx.LGetxattr(p1, "security.capability")
if err != nil && err != sysx.ENODATA {
return false, errors.Wrapf(err, "failed to get xattr for %s", p1)
}
c2, err := sysx.LGetxattr(p2, "security.capability")
if err != nil && err != sysx.ENODATA {
return false, errors.Wrapf(err, "failed to get xattr for %s", p2)
}
return bytes.Equal(c1, c2), nil
}
// Ported from continuity project
// https://github.com/containerd/continuity/blob/bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6/fs/path.go#L135
// Copyright The containerd Authors.
func compareSymlinkTarget(p1, p2 string) (bool, error) {
t1, err := os.Readlink(p1)
if err != nil {
return false, err
}
t2, err := os.Readlink(p2)
if err != nil {
return false, err
}
return t1 == t2, nil
}
var bufPool = sync.Pool{
New: func() interface{} {
b := make([]byte, 32*1024)
return &b
},
}
// Ported from continuity project
// https://github.com/containerd/continuity/blob/bce1c3f9669b6f3e7f6656ee715b0b4d75fa64a6/fs/path.go#L151
// Copyright The containerd Authors.
func compareFileContent(p1, p2 string) (bool, error) {
f1, err := os.Open(p1)
if err != nil {
return false, err
}
defer f1.Close()
if stat, err := f1.Stat(); err != nil {
return false, err
} else if !stat.Mode().IsRegular() {
return false, errors.Errorf("%s is not a regular file", p1)
}
f2, err := os.Open(p2)
if err != nil {
return false, err
}
defer f2.Close()
if stat, err := f2.Stat(); err != nil {
return false, err
} else if !stat.Mode().IsRegular() {
return false, errors.Errorf("%s is not a regular file", p2)
}
b1 := bufPool.Get().(*[]byte)
defer bufPool.Put(b1)
b2 := bufPool.Get().(*[]byte)
defer bufPool.Put(b2)
for {
n1, err1 := io.ReadFull(f1, *b1)
if err1 == io.ErrUnexpectedEOF {
// ReadFull returns ErrUnexpectedEOF when the file size isn't a multiple of the buffer size; treat it the same as EOF
err1 = io.EOF
}
if err1 != nil && err1 != io.EOF {
return false, err1
}
n2, err2 := io.ReadFull(f2, *b2)
if err2 == io.ErrUnexpectedEOF {
err2 = io.EOF
}
if err2 != nil && err2 != io.EOF {
return false, err2
}
if n1 != n2 || !bytes.Equal((*b1)[:n1], (*b2)[:n2]) {
return false, nil
}
if err1 == io.EOF && err2 == io.EOF {
return true, nil
}
}
}

View File

@ -0,0 +1,130 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package committer
import (
"context"
"encoding/json"
"strings"
"github.com/containerd/containerd"
"github.com/containerd/containerd/oci"
"github.com/pkg/errors"
)
type InspectResult struct {
LowerDirs string
UpperDir string
Image string
Mounts []Mount
Pid int
}
type Mount struct {
Destination string
Source string
}
type Manager struct {
address string
}
func NewManager(addr string) (*Manager, error) {
return &Manager{
address: addr,
}, nil
}
func (m *Manager) Pause(ctx context.Context, containerID string) error {
client, err := containerd.New(m.address)
if err != nil {
return errors.Wrapf(err, "create client")
}
container, err := client.LoadContainer(ctx, containerID)
if err != nil {
return errors.Wrapf(err, "load container")
}
task, err := container.Task(ctx, nil)
if err != nil {
return errors.Wrapf(err, "obtain container task")
}
return task.Pause(ctx)
}
func (m *Manager) UnPause(ctx context.Context, containerID string) error {
client, err := containerd.New(m.address)
if err != nil {
return errors.Wrapf(err, "create client")
}
container, err := client.LoadContainer(ctx, containerID)
if err != nil {
return errors.Wrapf(err, "load container")
}
task, err := container.Task(ctx, nil)
if err != nil {
return errors.Wrapf(err, "obtain container task")
}
return task.Resume(ctx)
}
func (m *Manager) Inspect(ctx context.Context, containerID string) (*InspectResult, error) {
client, err := containerd.New(m.address)
if err != nil {
return nil, errors.Wrapf(err, "create client")
}
container, err := client.LoadContainer(ctx, containerID)
if err != nil {
return nil, errors.Wrapf(err, "load container")
}
_image, err := container.Image(ctx)
if err != nil {
return nil, errors.Wrapf(err, "obtain container image")
}
image := _image.Name()
task, err := container.Task(ctx, nil)
if err != nil {
return nil, errors.Wrapf(err, "obtain container task")
}
pid := int(task.Pid())
containerInfo, err := container.Info(ctx, containerd.WithoutRefreshedMetadata)
if err != nil {
return nil, errors.Wrapf(err, "obtain container info")
}
spec := oci.Spec{}
if err := json.Unmarshal(containerInfo.Spec.GetValue(), &spec); err != nil {
return nil, errors.Wrapf(err, "unmarshal json")
}
mounts := []Mount{}
for _, mount := range spec.Mounts {
mounts = append(mounts, Mount{
Destination: mount.Destination,
Source: mount.Source,
})
}
snapshot := client.SnapshotService("nydus")
lowerDirs := ""
upperDir := ""
mount, err := snapshot.Mounts(ctx, containerInfo.SnapshotKey)
if err != nil {
return nil, errors.Wrapf(err, "get snapshot mount")
}
// The snapshotter mount options are expected in order: Options[0] "workdir=$workdir", Options[1] "upperdir=$upperdir", Options[2] "lowerdir=$lowerdir".
lowerDirs = strings.TrimPrefix(mount[0].Options[2], "lowerdir=")
upperDir = strings.TrimPrefix(mount[0].Options[1], "upperdir=")
return &InspectResult{
LowerDirs: lowerDirs,
UpperDir: upperDir,
Image: image,
Mounts: mounts,
Pid: pid,
}, nil
}
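// The fixed indices above assume the nydus snapshotter always emits
// workdir, upperdir and lowerdir in that exact order. A prefix scan is a
// more defensive alternative; parseOverlayOptions is a hypothetical
// helper sketch, not part of this package.
func parseOverlayOptions(options []string) (lowerDirs, upperDir string) {
	for _, opt := range options {
		switch {
		case strings.HasPrefix(opt, "lowerdir="):
			lowerDirs = strings.TrimPrefix(opt, "lowerdir=")
		case strings.HasPrefix(opt, "upperdir="):
			upperDir = strings.TrimPrefix(opt, "upperdir=")
		}
	}
	return lowerDirs, upperDir
}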

View File

@ -0,0 +1,186 @@
// Ported from go-nsenter project, copyright The go-nsenter Authors.
// https://github.com/Devatoria/go-nsenter
package committer
import (
"bytes"
"context"
"fmt"
"io"
"os/exec"
"strconv"
"time"
)
// Config is the nsenter configuration used to generate the
// nsenter command
type Config struct {
Cgroup bool // Enter cgroup namespace
CgroupFile string // Cgroup namespace location, default to /proc/PID/ns/cgroup
FollowContext bool // Set SELinux security context
GID int // GID to use to execute given program
IPC bool // Enter IPC namespace
IPCFile string // IPC namespace location, default to /proc/PID/ns/ipc
Mount bool // Enter mount namespace
MountFile string // Mount namespace location, default to /proc/PID/ns/mnt
Net bool // Enter network namespace
NetFile string // Network namespace location, default to /proc/PID/ns/net
NoFork bool // Do not fork before executing the specified program
PID bool // Enter PID namespace
PIDFile string // PID namespace location, default to /proc/PID/ns/pid
PreserveCredentials bool // Preserve current UID/GID when entering namespaces
RootDirectory string // Set the root directory, default to target process root directory
Target int // Target PID (required)
UID int // UID to use to execute given program
User bool // Enter user namespace
UserFile string // User namespace location, default to /proc/PID/ns/user
UTS bool // Enter UTS namespace
UTSFile string // UTS namespace location, default to /proc/PID/ns/uts
WorkingDirectory string // Set the working directory, default to target process working directory
}
// Execute executes the given command with a default background context
func (c *Config) Execute(writer io.Writer, program string, args ...string) (string, error) {
return c.ExecuteContext(context.Background(), writer, program, args...)
}
// ExecuteContext executes the given program using the given nsenter
// configuration and context, and returns stdout/stderr or an error if the command failed
func (c *Config) ExecuteContext(ctx context.Context, writer io.Writer, program string, args ...string) (string, error) {
cmd, err := c.buildCommand(ctx)
if err != nil {
return "", fmt.Errorf("Error while building command: %v", err)
}
// Prepare command
var stderr bytes.Buffer
rc, err := cmd.StdoutPipe()
if err != nil {
return "", fmt.Errorf("Open stdout pipe: %v", err)
}
defer rc.Close()
cmd.Stderr = &stderr
cmd.Args = append(cmd.Args, program)
cmd.Args = append(cmd.Args, args...)
if err := cmd.Start(); err != nil {
return stderr.String(), err
}
// HACK: rc is not closed automatically when the process exits, so poll
// the process state and close rc manually once the process has exited.
go func() {
for {
time.Sleep(time.Second * 1)
if cmd.ProcessState != nil && cmd.ProcessState.Exited() {
rc.Close()
break
}
}
}()
if _, err := io.Copy(writer, rc); err != nil {
return stderr.String(), err
}
return stderr.String(), cmd.Wait()
}
func (c *Config) buildCommand(ctx context.Context) (*exec.Cmd, error) {
if c.Target == 0 {
return nil, fmt.Errorf("Target must be specified")
}
var args []string
args = append(args, "--target", strconv.Itoa(c.Target))
if c.Cgroup {
if c.CgroupFile != "" {
args = append(args, fmt.Sprintf("--cgroup=%s", c.CgroupFile))
} else {
args = append(args, "--cgroup")
}
}
if c.FollowContext {
args = append(args, "--follow-context")
}
if c.GID != 0 {
args = append(args, "--setgid", strconv.Itoa(c.GID))
}
if c.IPC {
if c.IPCFile != "" {
args = append(args, fmt.Sprintf("--ip=%s", c.IPCFile))
} else {
args = append(args, "--ipc")
}
}
if c.Mount {
if c.MountFile != "" {
args = append(args, fmt.Sprintf("--mount=%s", c.MountFile))
} else {
args = append(args, "--mount")
}
}
if c.Net {
if c.NetFile != "" {
args = append(args, fmt.Sprintf("--net=%s", c.NetFile))
} else {
args = append(args, "--net")
}
}
if c.NoFork {
args = append(args, "--no-fork")
}
if c.PID {
if c.PIDFile != "" {
args = append(args, fmt.Sprintf("--pid=%s", c.PIDFile))
} else {
args = append(args, "--pid")
}
}
if c.PreserveCredentials {
args = append(args, "--preserve-credentials")
}
if c.RootDirectory != "" {
args = append(args, "--root", c.RootDirectory)
}
if c.UID != 0 {
args = append(args, "--setuid", strconv.Itoa(c.UID))
}
if c.User {
if c.UserFile != "" {
args = append(args, fmt.Sprintf("--user=%s", c.UserFile))
} else {
args = append(args, "--user")
}
}
if c.UTS {
if c.UTSFile != "" {
args = append(args, fmt.Sprintf("--uts=%s", c.UTSFile))
} else {
args = append(args, "--uts")
}
}
if c.WorkingDirectory != "" {
args = append(args, "--wd", c.WorkingDirectory)
}
cmd := exec.CommandContext(ctx, "nsenter", args...)
return cmd, nil
}
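// A minimal usage sketch (hypothetical helper; assumes the nsenter binary
// is on PATH and the target PID is alive): enter only the mount namespace
// of a process and stream its /proc/mounts into a buffer.
func dumpContainerMounts(pid int) (string, error) {
	cfg := &Config{
		Mount:  true, // enter the target's mount namespace
		Target: pid,  // PID of a running container process
	}
	var out bytes.Buffer
	stderr, err := cfg.Execute(&out, "cat", "/proc/mounts")
	if err != nil {
		return "", fmt.Errorf("nsenter failed: %v (stderr: %s)", err, stderr)
	}
	return out.String(), nil
}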

View File

@ -0,0 +1,18 @@
package committer
import (
"sync/atomic"
)
type Counter struct {
n int64
}
func (c *Counter) Write(p []byte) (n int, err error) {
atomic.AddInt64(&c.n, int64(len(p)))
return len(p), nil
}
func (c *Counter) Size() (n int64) {
return c.n
}
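// A usage sketch (hypothetical helper; assumes an io import): tee a copy
// through the Counter with io.MultiWriter so the stream is measured
// without being buffered.
func countingCopy(dst io.Writer, src io.Reader) (int64, error) {
	c := &Counter{}
	if _, err := io.Copy(io.MultiWriter(dst, c), src); err != nil {
		return 0, err
	}
	// When the copy succeeds, c.Size() equals the number of bytes written.
	return c.Size(), nil
}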

View File

@ -5,23 +5,23 @@ import (
 "os"
 "path/filepath"
-"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/build"
+"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/build"
 "github.com/pkg/errors"
 )
 var defaultCompactConfig = &CompactConfig{
-MinUsedRatio: 5,
-CompactBlobSize: 10485760,
-MaxCompactSize: 104857600,
-LayersToCompact: 32,
+MinUsedRatio: "5",
+CompactBlobSize: "10485760",
+MaxCompactSize: "104857600",
+LayersToCompact: "32",
 }
 type CompactConfig struct {
-MinUsedRatio int `json:"min_used_ratio"`
-CompactBlobSize int `json:"compact_blob_size"`
-MaxCompactSize int `json:"max_compact_size"`
-LayersToCompact int `json:"layers_to_compact"`
-BlobsDir string `json:"blobs_dir,omitempty"`
+MinUsedRatio string
+CompactBlobSize string
+MaxCompactSize string
+LayersToCompact string
+BlobsDir string
 }
 func (cfg *CompactConfig) Dumps(filePath string) error {
@ -81,11 +81,6 @@ func (compactor *Compactor) Compact(bootstrapPath, chunkDict, backendType, backe
 if err := os.Remove(targetBootstrap); err != nil && !os.IsNotExist(err) {
 return "", errors.Wrap(err, "failed to delete old bootstrap file")
 }
-// prepare config file
-configFilePath := filepath.Join(compactor.workdir, "compact.json")
-if err := compactor.cfg.Dumps(configFilePath); err != nil {
-return "", errors.Wrap(err, "compact err")
-}
 outputJSONPath := filepath.Join(compactor.workdir, "compact-result.json")
 if err := os.Remove(outputJSONPath); err != nil && !os.IsNotExist(err) {
 return "", errors.Wrap(err, "failed to delete old output-json file")
@ -97,7 +92,11 @@ func (compactor *Compactor) Compact(bootstrapPath, chunkDict, backendType, backe
 BackendType: backendType,
 BackendConfigPath: backendConfigFile,
 OutputJSONPath: outputJSONPath,
-CompactConfigPath: configFilePath,
+MinUsedRatio: compactor.cfg.MinUsedRatio,
+CompactBlobSize: compactor.cfg.CompactBlobSize,
+MaxCompactSize: compactor.cfg.MaxCompactSize,
+LayersToCompact: compactor.cfg.LayersToCompact,
+BlobsDir: compactor.cfg.BlobsDir,
 })
 if err != nil {
 return "", errors.Wrap(err, "failed to run compact command")

View File

@ -5,11 +5,37 @@
 package converter
 import (
+"bytes"
+"compress/gzip"
 "context"
+"fmt"
+"io"
 "os"
+"path/filepath"
+"strconv"
+"strings"
+"time"
+modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
+ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+"github.com/containerd/containerd/content"
+"github.com/containerd/containerd/content/local"
 "github.com/containerd/containerd/namespaces"
-"github.com/dragonflyoss/image-service/contrib/nydusify/pkg/converter/provider"
+"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
+pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
+snapConv "github.com/BraveY/snapshotter-converter/converter"
+"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/external/modctl"
+"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
+"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
+"encoding/json"
+"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external"
+"github.com/opencontainers/go-digest"
+"github.com/opencontainers/image-spec/specs-go"
 "github.com/goharbor/acceleration-service/pkg/converter"
 "github.com/goharbor/acceleration-service/pkg/platformutil"
 "github.com/pkg/errors"
@ -24,6 +50,9 @@ type Opt struct {
 Target string
 ChunkDictRef string
+SourceBackendType string
+SourceBackendConfig string
 SourceInsecure bool
 TargetInsecure bool
 ChunkDictInsecure bool
@ -47,14 +76,31 @@ type Opt struct {
 PrefetchPatterns string
 OCIRef bool
 WithReferrer bool
+WithPlainHTTP bool
 AllPlatforms bool
 Platforms string
 OutputJSON string
+PushRetryCount int
+PushRetryDelay string
+}
+type SourceBackendConfig struct {
+Context string `json:"context"`
+WorkDir string `json:"work_dir"`
 }
 func Convert(ctx context.Context, opt Opt) error {
+if opt.SourceBackendType == "modelfile" {
+return convertModelFile(ctx, opt)
+}
+if opt.SourceBackendType == "model-artifact" {
+return convertModelArtifact(ctx, opt)
+}
 ctx = namespaces.WithNamespace(ctx, "nydusify")
 platformMC, err := platformutil.ParsePlatforms(opt.AllPlatforms, opt.Platforms)
 if err != nil {
@ -83,6 +129,15 @@ func Convert(ctx context.Context, opt Opt) error {
 }
 defer os.RemoveAll(tmpDir)
+// Parse retry delay
+retryDelay, err := time.ParseDuration(opt.PushRetryDelay)
+if err != nil {
+return errors.Wrap(err, "parse push retry delay")
+}
+// Set push retry configuration
+pvd.SetPushRetryConfig(opt.PushRetryCount, retryDelay)
 cvt, err := converter.New(
 converter.WithProvider(pvd),
 converter.WithDriver("nydus", getConfig(opt)),
@ -98,3 +153,413 @@ func Convert(ctx context.Context, opt Opt) error {
 }
 return err
 }
func convertModelFile(ctx context.Context, opt Opt) error {
if _, err := os.Stat(opt.WorkDir); err != nil {
if errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return errors.Wrap(err, "prepare work directory")
}
// We should only clean up when the work directory did not exist
// before; otherwise we may delete user data by mistake.
defer os.RemoveAll(opt.WorkDir)
} else {
return errors.Wrap(err, "stat work directory")
}
}
tmpDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
defer os.RemoveAll(tmpDir)
attributesPath := filepath.Join(tmpDir, ".nydusattributes")
backendMetaPath := filepath.Join(tmpDir, ".backend.meta")
backendConfigPath := filepath.Join(tmpDir, ".backend.json")
var srcBkdCfg SourceBackendConfig
if err := json.Unmarshal([]byte(opt.SourceBackendConfig), &srcBkdCfg); err != nil {
return errors.Wrap(err, "unmarshal source backend config")
}
modctlHandler, err := newModctlHandler(opt, srcBkdCfg.WorkDir)
if err != nil {
return errors.Wrap(err, "create modctl handler")
}
if err := external.Handle(context.Background(), external.Options{
Dir: srcBkdCfg.WorkDir,
Handler: modctlHandler,
MetaOutput: backendMetaPath,
BackendOutput: backendConfigPath,
AttributesOutput: attributesPath,
}); err != nil {
return errors.Wrap(err, "handle modctl")
}
// Make nydus layer with external blob
packOption := snapConv.PackOption{
BuilderPath: opt.NydusImagePath,
Compressor: opt.Compressor,
FsVersion: opt.FsVersion,
ChunkSize: opt.ChunkSize,
FromDir: srcBkdCfg.Context,
AttributesPath: attributesPath,
}
_, externalBlobDigest, err := packWithAttributes(ctx, packOption, tmpDir)
if err != nil {
return errors.Wrap(err, "pack to blob")
}
bootStrapTarPath, err := packFinalBootstrap(tmpDir, backendConfigPath, externalBlobDigest)
if err != nil {
return errors.Wrap(err, "pack final bootstrap")
}
modelCfg, err := buildModelConfig(modctlHandler)
if err != nil {
return errors.Wrap(err, "build model config")
}
modelLayers := modctlHandler.GetLayers()
nydusImage := buildNydusImage()
return pushManifest(context.Background(), opt, *modelCfg, modelLayers, *nydusImage, bootStrapTarPath)
}
func convertModelArtifact(ctx context.Context, opt Opt) error {
if _, err := os.Stat(opt.WorkDir); err != nil {
if errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return errors.Wrap(err, "prepare work directory")
}
// We should only clean up when the work directory did not exist
// before; otherwise we may delete user data by mistake.
defer os.RemoveAll(opt.WorkDir)
} else {
return errors.Wrap(err, "stat work directory")
}
}
tmpDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
defer os.RemoveAll(tmpDir)
contextDir, err := os.MkdirTemp(tmpDir, "context-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
defer os.RemoveAll(contextDir)
attributesPath := filepath.Join(tmpDir, ".nydusattributes")
backendMetaPath := filepath.Join(tmpDir, ".backend.meta")
backendConfigPath := filepath.Join(tmpDir, ".backend.json")
handler, err := modctl.NewRemoteHandler(ctx, opt.Source, opt.WithPlainHTTP)
if err != nil {
return errors.Wrap(err, "create modctl handler")
}
if err := external.RemoteHandle(ctx, external.Options{
ContextDir: contextDir,
RemoteHandler: handler,
MetaOutput: backendMetaPath,
BackendOutput: backendConfigPath,
AttributesOutput: attributesPath,
}); err != nil {
return errors.Wrap(err, "remote handle")
}
// Make nydus layer with external blob
packOption := snapConv.PackOption{
BuilderPath: opt.NydusImagePath,
Compressor: opt.Compressor,
FsVersion: opt.FsVersion,
ChunkSize: opt.ChunkSize,
FromDir: contextDir,
AttributesPath: attributesPath,
}
_, externalBlobDigest, err := packWithAttributes(ctx, packOption, tmpDir)
if err != nil {
return errors.Wrap(err, "pack to blob")
}
bootStrapTarPath, err := packFinalBootstrap(tmpDir, backendConfigPath, externalBlobDigest)
if err != nil {
return errors.Wrap(err, "pack final bootstrap")
}
modelCfg, err := handler.GetModelConfig()
if err != nil {
return errors.Wrap(err, "build model config")
}
modelLayers := handler.GetLayers()
nydusImage := buildNydusImage()
return pushManifest(context.Background(), opt, *modelCfg, modelLayers, *nydusImage, bootStrapTarPath)
}
func newModctlHandler(opt Opt, workDir string) (*modctl.Handler, error) {
chunkSizeStr := strings.TrimPrefix(opt.ChunkSize, "0x")
chunkSize, err := strconv.ParseUint(chunkSizeStr, 16, 64)
if err != nil {
return nil, errors.Wrap(err, "parse chunk size to uint64")
}
modctlOpt, err := modctl.GetOption(opt.Source, workDir, chunkSize)
if err != nil {
return nil, errors.Wrap(err, "parse modctl option")
}
return modctl.NewHandler(*modctlOpt)
}
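// For example (values illustrative): after stripping the "0x" prefix,
// "0x100000" parses base-16 to 1048576 bytes (1 MiB).
//
//	s := strings.TrimPrefix("0x100000", "0x") // "100000"
//	n, _ := strconv.ParseUint(s, 16, 64)      // n == 1048576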
func packWithAttributes(ctx context.Context, packOption snapConv.PackOption, blobDir string) (digest.Digest, digest.Digest, error) {
blob, err := os.CreateTemp(blobDir, "blob-")
if err != nil {
return "", "", errors.Wrap(err, "create temp file for blob")
}
defer blob.Close()
externalBlob, err := os.CreateTemp(blobDir, "external-blob-")
if err != nil {
return "", "", errors.Wrap(err, "create temp file for external blob")
}
defer externalBlob.Close()
blobDigester := digest.Canonical.Digester()
blobWriter := io.MultiWriter(blob, blobDigester.Hash())
externalBlobDigester := digest.Canonical.Digester()
packOption.ExternalBlobWriter = io.MultiWriter(externalBlob, externalBlobDigester.Hash())
_, err = snapConv.Pack(ctx, blobWriter, packOption)
if err != nil {
return "", "", errors.Wrap(err, "pack to blob")
}
blobDigest := blobDigester.Digest()
err = os.Rename(blob.Name(), filepath.Join(blobDir, blobDigest.Hex()))
if err != nil {
return "", "", errors.Wrap(err, "rename blob file")
}
externalBlobDigest := externalBlobDigester.Digest()
err = os.Rename(externalBlob.Name(), filepath.Join(blobDir, externalBlobDigest.Hex()))
if err != nil {
return "", "", errors.Wrap(err, "rename external blob file")
}
return blobDigest, externalBlobDigest, nil
}
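// packWithAttributes relies on a single-pass pattern: io.MultiWriter
// feeds the blob file and a digester at once, so the content digest is
// known as soon as the copy finishes. A standalone sketch of the same
// pattern (hypothetical helper):
func digestWhileWriting(path string, src io.Reader) (digest.Digest, error) {
	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	digester := digest.Canonical.Digester()
	// Every byte written lands in both the file and the hash state.
	if _, err := io.Copy(io.MultiWriter(f, digester.Hash()), src); err != nil {
		return "", err
	}
	return digester.Digest(), nil
}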
// Pack bootstrap and backend config into final bootstrap tar file.
func packFinalBootstrap(workDir, backendConfigPath string, externalBlobDigest digest.Digest) (string, error) {
bkdCfg, err := os.ReadFile(backendConfigPath)
if err != nil {
return "", errors.Wrap(err, "read backend config file")
}
bkdReader := bytes.NewReader(bkdCfg)
files := []snapConv.File{
{
Name: "backend.json",
Reader: bkdReader,
Size: int64(len(bkdCfg)),
},
}
externalBlobRa, err := local.OpenReader(filepath.Join(workDir, externalBlobDigest.Hex()))
if err != nil {
return "", errors.Wrap(err, "open reader for upper blob")
}
bootstrap, err := os.CreateTemp(workDir, "bootstrap-")
if err != nil {
return "", errors.Wrap(err, "create temp file for bootstrap")
}
defer bootstrap.Close()
if _, err := snapConv.UnpackEntry(externalBlobRa, snapConv.EntryBootstrap, bootstrap); err != nil {
return "", errors.Wrap(err, "unpack bootstrap from nydus")
}
files = append(files, snapConv.File{
Name: snapConv.EntryBootstrap,
Reader: content.NewReader(externalBlobRa),
Size: externalBlobRa.Size(),
})
bootStrapTarPath := fmt.Sprintf("%s-final.tar", bootstrap.Name())
bootstrapTar, err := os.Create(bootStrapTarPath)
if err != nil {
return "", errors.Wrap(err, "open bootstrap tar file")
}
defer bootstrapTar.Close()
rc := snapConv.PackToTar(files, false)
defer rc.Close()
println("copy bootstrap to tar file")
if _, err = io.Copy(bootstrapTar, rc); err != nil {
return "", errors.Wrap(err, "copy merged bootstrap")
}
return bootStrapTarPath, nil
}
func buildNydusImage() *parser.Image {
manifest := ocispec.Manifest{
Versioned: specs.Versioned{SchemaVersion: 2},
MediaType: ocispec.MediaTypeImageManifest,
ArtifactType: modelspec.ArtifactTypeModelManifest,
Config: ocispec.Descriptor{
MediaType: modelspec.MediaTypeModelConfig,
},
}
desc := ocispec.Descriptor{
MediaType: ocispec.MediaTypeImageManifest,
}
nydusImage := &parser.Image{
Manifest: manifest,
Desc: desc,
}
return nydusImage
}
func buildModelConfig(modctlHandler *modctl.Handler) (*modelspec.Model, error) {
cfgBytes, err := modctlHandler.GetConfig()
if err != nil {
return nil, errors.Wrap(err, "get modctl config")
}
var modelCfg modelspec.Model
if err := json.Unmarshal(cfgBytes, &modelCfg); err != nil {
return nil, errors.Wrap(err, "unmarshal modctl config")
}
return &modelCfg, nil
}
func pushManifest(
ctx context.Context, opt Opt, modelCfg modelspec.Model, modelLayers []ocispec.Descriptor, nydusImage parser.Image, bootstrapTarPath string,
) error {
// Push image config
configBytes, configDesc, err := makeDesc(modelCfg, nydusImage.Manifest.Config)
if err != nil {
return errors.Wrap(err, "make config desc")
}
remoter, err := pkgPvd.DefaultRemote(opt.Target, opt.TargetInsecure)
if err != nil {
return errors.Wrap(err, "create remote")
}
if opt.WithPlainHTTP {
remoter.WithHTTP()
}
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err)
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
return errors.Wrap(err, "push image config")
}
} else {
return errors.Wrap(err, "push image config")
}
}
// Push bootstrap layer
bootstrapTar, err := os.Open(bootstrapTarPath)
if err != nil {
return errors.Wrap(err, "open bootstrap tar file")
}
bootstrapTarGzPath := bootstrapTarPath + ".gz"
bootstrapTarGz, err := os.Create(bootstrapTarGzPath)
if err != nil {
return errors.Wrap(err, "create bootstrap tar.gz file")
}
defer bootstrapTarGz.Close()
digester := digest.SHA256.Digester()
gzWriter := gzip.NewWriter(io.MultiWriter(bootstrapTarGz, digester.Hash()))
if _, err := io.Copy(gzWriter, bootstrapTar); err != nil {
return errors.Wrap(err, "compress bootstrap tar to tar.gz")
}
if err := gzWriter.Close(); err != nil {
return errors.Wrap(err, "close gzip writer")
}
ra, err := local.OpenReader(bootstrapTarGzPath)
if err != nil {
return errors.Wrap(err, "open reader for upper blob")
}
defer ra.Close()
bootstrapDesc := ocispec.Descriptor{
Digest: digester.Digest(),
Size: ra.Size(),
MediaType: ocispec.MediaTypeImageLayerGzip,
Annotations: map[string]string{
snapConv.LayerAnnotationFSVersion: opt.FsVersion,
snapConv.LayerAnnotationNydusBootstrap: "true",
snapConv.LayerAnnotationNydusArtifactType: modelspec.ArtifactTypeModelManifest,
},
}
bootstrapRc, err := os.Open(bootstrapTarGzPath)
if err != nil {
return errors.Wrapf(err, "open bootstrap %s", bootstrapTarGzPath)
}
defer bootstrapRc.Close()
if err := remoter.Push(ctx, bootstrapDesc, true, bootstrapRc); err != nil {
return errors.Wrap(err, "push bootstrap layer")
}
// Push image manifest
layers := make([]ocispec.Descriptor, 0, len(modelLayers)+1)
layers = append(layers, modelLayers...)
layers = append(layers, bootstrapDesc)
subject, err := getSourceManifestSubject(ctx, opt.Source, opt.SourceInsecure, opt.WithPlainHTTP)
if err != nil {
return errors.Wrap(err, "get source manifest subject")
}
nydusImage.Manifest.Config = *configDesc
nydusImage.Manifest.Layers = layers
nydusImage.Manifest.Subject = subject
manifestBytes, manifestDesc, err := makeDesc(nydusImage.Manifest, nydusImage.Desc)
if err != nil {
return errors.Wrap(err, "make manifest desc")
}
if err := remoter.Push(ctx, *manifestDesc, false, bytes.NewReader(manifestBytes)); err != nil {
return errors.Wrap(err, "push image manifest")
}
return nil
}
func getSourceManifestSubject(ctx context.Context, sourceRef string, insecure, plainHTTP bool) (*ocispec.Descriptor, error) {
remoter, err := pkgPvd.DefaultRemote(sourceRef, insecure)
if err != nil {
return nil, errors.Wrap(err, "create remote")
}
if plainHTTP {
remoter.WithHTTP()
}
desc, err := remoter.Resolve(ctx)
if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err)
desc, err = remoter.Resolve(ctx)
}
if err != nil {
return nil, errors.Wrap(err, "resolve source manifest subject")
}
return desc, nil
}
func makeDesc(x interface{}, oldDesc ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
data, err := json.MarshalIndent(x, "", " ")
if err != nil {
return nil, nil, errors.Wrap(err, "json marshal")
}
dgst := digest.SHA256.FromBytes(data)
newDesc := oldDesc
newDesc.Size = int64(len(data))
newDesc.Digest = dgst
return data, &newDesc, nil
}
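// makeDesc keeps the old descriptor's media type and annotations while
// re-deriving size and digest from the marshaled bytes, so for any input:
//
//	data, desc, _ := makeDesc(nydusImage.Manifest, nydusImage.Desc)
//	// desc.Digest == digest.SHA256.FromBytes(data)
//	// desc.Size   == int64(len(data))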

Some files were not shown because too many files have changed in this diff.