Compare commits

...

278 Commits

Author SHA1 Message Date
Fan Shang f7d513844d Remove mirrors configuration
Signed-off-by: Fan Shang <2444576154@qq.com>
2025-08-05 10:38:09 +08:00
Baptiste Girard-Carrabin 29dc8ec5c8 [registry] Accept empty scope during token auth challenge
The distribution spec (https://distribution.github.io/distribution/spec/auth/scope/#authorization-server-use) mentions that the access token provided during auth challenge "may include a scope" which means that it's not necessary to have one either to comply with the spec.
Additionally, this is something that is already accepted by containerd which will simply log a warning when no scope is specified: https://github.com/containerd/containerd/blob/main/core/remotes/docker/auth/fetch.go#L64
To match with what containerd and the spec suggest, the commit modifies the `parse_auth` logic to accept an empty `scope` field. It also logs the same warning as containerd.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-07-31 20:28:47 +08:00
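For illustration only, a minimal sketch of how a challenge parser might tolerate a missing `scope` (the struct, field, and function signature below are hypothetical, not the actual nydus `parse_auth` code):
```
// Hypothetical sketch: parse the WWW-Authenticate challenge parameters and
// tolerate a missing/empty `scope`, logging a warning like containerd does.
use std::collections::HashMap;

#[derive(Debug, Default)]
struct BearerChallenge {
    realm: String,
    service: String,
    scope: String, // may legitimately be empty per the distribution auth spec
}

fn parse_auth(params: &HashMap<String, String>) -> Option<BearerChallenge> {
    let realm = params.get("realm")?.clone();
    let service = params.get("service").cloned().unwrap_or_default();
    let scope = params.get("scope").cloned().unwrap_or_default();
    if scope.is_empty() {
        // containerd logs a warning in the same situation instead of failing.
        eprintln!("warning: no scope specified for token auth challenge");
    }
    Some(BearerChallenge { realm, service, scope })
}

fn main() {
    let mut params = HashMap::new();
    params.insert("realm".to_string(), "https://auth.example.com/token".to_string());
    params.insert("service".to_string(), "registry.example.com".to_string());
    // Note: no "scope" entry -- parsing still succeeds.
    println!("{:?}", parse_auth(&params));
}
```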
imeoer 7886e1868f storage: fix redirect in registry backend
To fix https://github.com/dragonflyoss/nydus/issues/1720

Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-31 11:49:44 +08:00
Peng Tao e1dffec213 api: increase error.rs UT coverage
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao cc62dd6890 github: add project common copilot instructions
Copilot generated with slight modification.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao d140d60bea rafs: increase UT coverage for cached_v5.rs
Copilot generated.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao f323c7f6e3 gitignore: ignore temp files generated by UTs
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao 5c8299c7f7 service: skip init fscache test if cachefiles is unavailable
Also skip the test for non-root users.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Jack Decker 14c0062cee Make filesystem sync operation fatal on failure
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
Jack Decker d3bbc3e509 Add filesystem sync in both container and host namespaces before pausing container for commit to ensure all changes are flushed to disk.
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
imeoer 80f80dda0e cargo: bump crates version
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-08 10:38:27 +08:00
Yang Kaiyong a26c7bf99c test: support miri for unit test in actions
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-07-04 10:17:32 +08:00
imeoer 72b1955387 misc: add issue / PR stale workflow
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-06-18 10:38:00 +08:00
ymy d589292ebc feat(nydusify): increase the number of retries if the push operation fails after converting the image
Signed-off-by: ymy <ymy@zetyun.com>
2025-06-17 17:11:38 +08:00
Zephyrcf 344a208e86 Make ssl fallback check case-insensitive
Signed-off-by: Zephyrcf <zinsist77@gmail.com>
2025-06-12 19:03:49 +08:00
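As a hedged illustration of what a case-insensitive check can look like (the exact condition and strings used by this commit are not shown in this log; everything below is assumed):
```
// Hypothetical sketch: match the fallback condition case-insensitively so
// that "HTTP"/"http"/"Http" variants are all recognized.
fn should_fallback_to_http(err_msg: &str) -> bool {
    err_msg
        .to_ascii_lowercase()
        .contains("server gave http response to https client")
}

fn main() {
    assert!(should_fallback_to_http(
        "Get \"https://registry:5000/v2/\": http: server gave HTTP response to HTTPS client"
    ));
    println!("fallback check is case-insensitive");
}
```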
imeoer 9645820222 docs: add MAINTAINERS doc
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-05-30 18:40:33 +08:00
Baptiste Girard-Carrabin d36295a21e [registry] Modify TokenResponse instead
Apply GitHub review comment.
Use `serde(default)` in TokenResponse to get the same behavior as Option<String> without changing the struct signature.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
Baptiste Girard-Carrabin c048fcc45f [registry] Fix auth token parsing for access_token
Extend auth token parsing to support token in different json fields.
There is no real consensus on the OAuth2 token response format, which means that each registry can implement its own. In particular, Azure ACR uses `access_token` as described here https://github.com/Azure/acr/blob/main/docs/Token-BasicAuth.md#get-a-pull-access-token-for-the-user. As such, when attempting to parse the JSON response containing the authorization token, we should attempt to deserialize using either `token` or `access_token` (and potentially more fields in the future if needed).
To not break the integration with existing registries, the behavior is to fall back to `access_token` only if `token` does not exist in the response.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
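A minimal sketch of the described fallback, assuming a serde-based `TokenResponse` with `#[serde(default)]` fields (the field handling here is illustrative, not the exact nydus struct):
```
// Hypothetical sketch: deserialize a registry token response that may carry
// either `token` (distribution spec) or `access_token` (e.g. Azure ACR),
// preferring `token` and falling back to `access_token`.
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct TokenResponse {
    #[serde(default)]
    token: String,
    #[serde(default)]
    access_token: String,
}

impl TokenResponse {
    fn bearer(&self) -> &str {
        if !self.token.is_empty() {
            &self.token
        } else {
            &self.access_token
        }
    }
}

fn main() {
    let acr: TokenResponse =
        serde_json::from_str(r#"{"access_token":"abc123"}"#).unwrap();
    assert_eq!(acr.bearer(), "abc123");
    println!("parsed token: {}", acr.bearer());
}
```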
Baptiste Girard-Carrabin 67bf8b8283 [storage] Modify redirect policy to follow 10 redirects
From 2378d074fe (diff-c9f1f654cf0ba5d46a4ed25d8bb0ea22c942840c6693d31927a9fd912bcb9456R125-R131)
it seems that the redirect policy of the http client has always been to not follow redirects. However, this means that pulling blobs from registries that issue redirects does not work. This is the case, for instance, on GCP's former container registries that were migrated to artifact registries.
Additionally, containerd's behavior is to follow up to 10 redirects https://github.com/containerd/containerd/blob/main/core/remotes/docker/resolver.go#L596 so it makes sense to use the same value.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-27 18:54:04 +08:00
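A small sketch of a client limited to 10 redirects, assuming a reqwest-based blocking HTTP client (whether the storage backend exposes exactly this builder is an assumption):
```
// Hypothetical sketch: build an HTTP client that follows up to 10 redirects,
// matching containerd's behavior, instead of never following redirects.
fn build_client() -> reqwest::Result<reqwest::blocking::Client> {
    reqwest::blocking::Client::builder()
        .redirect(reqwest::redirect::Policy::limited(10))
        .build()
}

fn main() {
    let _client = build_client().expect("failed to build HTTP client");
    println!("client follows up to 10 redirects");
}
```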
Peng Tao d74629233b readme: add deepwiki reference
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-04-27 18:53:16 +08:00
Yang Kaiyong 21206e75b3 nydusify(refactor): handle layer with retry
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-23 11:04:54 +08:00
Yan Song c288169c1a action: add free-disk-space job
Try to fix the broken CI: https://github.com/dragonflyoss/nydus/actions/runs/14569290750/job/40863611290
It might be due to insufficient disk space.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-04-23 10:28:06 +08:00
Yang Kaiyong 23fdda1020 nydusify(feat): support for specifying log file and concurrently processing external model manifests
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-21 15:16:57 +08:00
Yang Kaiyong 9b915529a9 nydusify(feat): add crc32 in file attributes
Read CRC32 from external models' manifest and pass it to builder.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 18:30:18 +08:00
Yang Kaiyong 96c3e5569a nydus-image: only add crc32 flag in chunk level
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
Yang Kaiyong 44069d6091 feat: support crc32 validation when validating chunks
- Add CRC32 algorithm implementation with the crc-rs crate.
- Introduce a crc_enable option to the nydus builder.
- Support generating CRC32 checksums when building images.
- Support validating CRC32 for both normal and external chunks.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
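A hedged sketch of chunk CRC32 generation and validation with the crc-rs crate; the polynomial chosen below (Castagnoli) is an assumption, as the log does not say which one the builder uses:
```
// Hypothetical sketch using the `crc` (crc-rs) crate: compute a CRC32 for a
// chunk at build time and validate it again when the chunk is read back.
use crc::{Crc, CRC_32_ISCSI};

fn chunk_crc32(data: &[u8]) -> u32 {
    // Castagnoli polynomial chosen for illustration only.
    Crc::<u32>::new(&CRC_32_ISCSI).checksum(data)
}

fn validate_chunk(data: &[u8], expected: u32) -> bool {
    chunk_crc32(data) == expected
}

fn main() {
    let chunk = b"example chunk data";
    let digest = chunk_crc32(chunk);
    assert!(validate_chunk(chunk, digest));
    println!("crc32 = {:#010x}", digest);
}
```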
Yang Kaiyong 31c8e896f0 chore: fix cargo-deny check failed
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-16 19:39:21 +08:00
Yang Kaiyong 8593498dbd nydusify: remove nydusd code which is work in progress
- remove the unready nydusd (runtime) implementation.
- remove the debug code.
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 6161868e41 builder: support building external model images from modctl
builder: add support for building external model images from modctl in a local
context or remote registry.

feat(nydusify): add support for mount external large model images

chore: introduce GoReleaser for RPM package generation

nydusify(feat): add support for model image in check command

nydusify(test): add support for binary-based testing in external model's smoke tests

Signed-off-by: Yan Song <yansong.ys@antgroup.com>

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 871e1c6e4f chore(smoke): fix broken CI in smoke test
Run `rustup run stable cargo` instead of `cargo` to explicitly specify the toolchain.

Since `nextest` fails due to symlink resolution with the new rustup v1.28.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-03-25 18:23:18 +08:00
Yan Song 8c0925b091 action: fix bootstrap path for fsck.erofs check
The output bootstrap path has been changed in the nydusify
check subcommand.

Related PR: https://github.com/dragonflyoss/nydus/pull/1652

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song baadb3990d misc: remove centos image from image conversion CI
The centos image has been deprecated on Docker Hub, so we can't
pull it in the "Convert & Check Images" CI pipeline.

See https://hub.docker.com/_/centos

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song bd2123f2ed smoke: add v0.1.0 nydusd into native layer cases
To check the compatibility between the newer builder and old nydusd.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song c41ac4760d builder: remove redundant blobs for merge subcommand
After merging all trees, we need to re-calculate the blob index of
referenced blobs, as the upper tree might have deleted some files
or directories by opaques, and some blobs are dereferenced.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song 7daa0a3cd9 nydusify: refactor check subcommand
- allow either the source or target to be an OCI or nydus image;
- improve output directory structure and log format;

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 17:45:50 +08:00
ymy 7e5147990c feat(nydusify): support a short container id when committing a container
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
ymy 36382b54dd Optimize: Improve code style in push lower blob section
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
yumy 8b03fd7593 fix: nydusify golang ci arg
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
ymy 76651c319a nydusify: fix the issue of blob not found when modifying image name during commit
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
Yang Kaiyong 91931607f8 fix(nydusd): fix parsing of failover-policy argument
Use `inspect_err` instead of `inspect` to correctly handle and log
errors when parsing the `failover-policy` argument.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-24 11:25:26 +08:00
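The difference the fix relies on, illustrated with placeholder policy values (`flush`/`resend` are assumptions here): `inspect` observes the `Ok` value, while `inspect_err` observes the `Err` value, which is what error logging needs:
```
// Illustrative only: the argument parsing and policy names are invented.
fn parse_failover_policy(arg: &str) -> Result<String, String> {
    match arg {
        "flush" | "resend" => Ok(arg.to_string()),
        other => Err(format!("invalid failover-policy: {}", other)),
    }
}

fn main() {
    // inspect_err runs the closure on the error, so the failure gets logged.
    let _ = parse_failover_policy("bogus")
        .inspect_err(|e| eprintln!("failed to parse failover-policy: {}", e));
}
```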
Yan Song dd9ba54e33 misc: remove goproxy.io for go build
The goproxy.io service is unstable for now and affects
the GitHub CI, so let's remove it.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yan Song 09b81c50b4 nydusify: fix layer push retry for copy subcommand
Add a push retry mechanism to improve the success rate of image copy
when a single layer copy fails.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yang Kaiyong 3beb9a72d9 chore: bump deps to address rustsec warning
- Bump vm-memory to 1.14.1, vmm-sys-util to 0.12.1 and vhost to 0.11.0.
- Bump cargo-deny-action version from v1 to v2 in workflows.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-11 20:29:22 +08:00
Yang Kaiyong 3c10b59324 chore: comment the unused code to address clippy error
The backend-oss feature is never enabled, so comment out the test code.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong bf17d221d6 fix: Support building rafs without the dedup feature
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong ee5ef64cdd chore: pass rust version to build docker container in CI
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 05ea41d159 chore: specify the rust version to 1.84.0 and enable docker cache
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 4def4db396 chore: fix the broken CI on riscv64
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong d48d3dbdb3 chore: bump rust version to 1.84.0 and update deps to resolve cargo deny check failures
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Kostis Papazafeiropoulos f60e40aafa fix(blobfs): Use correct result types for `open` and `create`
Use the correct result types for `open` and `create` expected by the
`fuse_backend_rs` 0.12.0 `Filesystem` trait

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Kostis Papazafeiropoulos 83fa946897 build(rafs): Add missing `dedup` feature for `storage` crate dependency
Fix `rafs` build by adding missing `dedup` feature for `storage` crate
dependency

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Gaius 365f13edcf chore: rename repo Dragonfly2 to dragonfly
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-12-20 17:09:10 +08:00
Lin Wang e23d5bc570 fix: dragonflyoss#1644 and #1651 resolve Algorithm to_string and FromStr inconsistency
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-12-16 20:39:08 +08:00
Liu Bo acdf021ec9 rafs: fix typo
Fix an invalid info! usage.

Signed-off-by: Liu Bo <liub.liubo@gmail.com>
2024-12-13 14:40:50 +08:00
Xing Ma b175fc4baa nydusify: introduce optimize subcommand of nydusify
We can statically analyze the image entrypoint dependencies, or use runtime dynamic
analysis technologies such as ebpf, fanotify, metrics, etc. to obtain the container
file access pattern, and then build this part of the data into an independent image layer:

* preferentially fetch the blob during the image startup phase to reduce network and disk IO;
* avoid frequent image builds, allowing better local cache utilization.

Implement the optimize subcommand of nydusify to generate a new image, which references a new
blob containing the prefetched file chunks.
```
nydusify optimize --policy separated-prefetch-blob \
	--source $existed-nydus-image \
	--target $new-nydus-image \
	--prefetch-files /path/to/prefetch-files
```

The more detailed process is as follows:
1. nydusify first downloads the source image and bootstrap, then uses nydus-image to output a
new bootstrap along with an independent prefetchblob;
2. nydusify generates and pushes a new meta layer including the new bootstrap and the prefetch-files,
and also generates and pushes the new manifest/config/prefetchblob, completing the incremental image build.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
Xing Ma 8edc031a31 builder: Enhance optimize subcommand for prefetch
Major changes:
1. Added compatibility for rafs v5/v6 formats;
2. Set IS_SEPARATED_WITH_PREFETCH_FILES flag in BlobInfo for prefetchblob;
3. Add option output-json to store build output.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
pyq bb4744c7fb docs: fix docker-env-setup.md
Signed-off-by: pyq <eilo.pengyq@gmail.com>
2024-12-04 10:10:26 +08:00
dDai Yongxuan 375f55f32e builder: introduce optimize subcommand for prefetch
We can statically analyze the image entrypoint dependencies, or use runtime dynamic
analysis technologies such as ebpf, fanotify, metrics, etc. to obtain the container
file access pattern, and then build this part of the data into an independent image layer:

* preferentially fetch the blob during the image startup phase to reduce network and disk IO;
* avoid frequent image builds, allowing better local cache utilization.

Implement the optimize subcommand to optimize the image bootstrap
from a prefetch file list and generate a new blob.

```
nydus-image optimize --prefetch-files /path/to/prefetch-files.txt \
  --bootstrap /path/to/bootstrap \
  --blob-dir /path/to/blobs
```
This will generate a new bootstrap and new blob in `blob-dir`.

Signed-off-by: daiyongxuan <daiyongxuan20@mails.ucas.ac.cn>
2024-10-29 14:52:17 +08:00
abushwang a575439471 fix: correct some typos about nerdctl image rm
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 16:11:22 +08:00
abushwang 4ee6ddd931 fix: correct some typos in nydus-fscache.md
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 15:05:32 +08:00
Yadong Ding 57c112a998 smoke: add smoke test for cas and chunk dedup
Add smoke test cases for cas and chunk dedup.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu b9ba409f13 docs: add documentation for cas
Add documentation for cas.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 2387fe8217 storage: enable chunk deduplication for file cache
Enable chunk deduplication for the file cache. It works in this way:
- When a chunk is not in the blob cache file yet, query the CAS database
  to see whether other blob data files already have the required chunk. If there's
  a duplicated data chunk in another data file, copy the chunk data
  into the current blob cache file by using copy_file_range().
- After downloading a data chunk from the remote, save the file/offset/chunk-id
  into the CAS database, so it can be reused later.

Co-authored-by: Jiang Liu <gerry@linux.alibaba.com>
Co-authored-by: Yading Ding <ding_yadong@foxmail.com>
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
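A schematic sketch of the flow described above; `CasDb`, `find_chunk`, and `record_chunk` are invented names for illustration, not the real CAS API:
```
// Hypothetical sketch: before downloading a chunk, ask the CAS database
// whether another local blob file already has it; if so, copy it across with
// copy_file_range(), otherwise download it and record it for later reuse.
struct CasDb;

impl CasDb {
    fn find_chunk(&self, _chunk_id: &str) -> Option<(String, u64)> {
        None // (source blob path, offset) if a duplicate exists
    }
    fn record_chunk(&self, _chunk_id: &str, _path: &str, _offset: u64) {}
}

fn fetch_chunk(db: &CasDb, chunk_id: &str, cache_path: &str, offset: u64) {
    if let Some((src_path, src_off)) = db.find_chunk(chunk_id) {
        // Duplicate found locally: copy the chunk data from src_path@src_off
        // into cache_path@offset, skipping the network entirely.
        println!("dedup {} from {}@{}", chunk_id, src_path, src_off);
    } else {
        // Not found: download from the remote backend, then record the
        // file/offset/chunk-id so later blobs can reuse it.
        println!("download {} into {}@{}", chunk_id, cache_path, offset);
        db.record_chunk(chunk_id, cache_path, offset);
    }
}

fn main() {
    fetch_chunk(&CasDb, "sha256:abcd", "/var/cache/nydus/blob1", 4096);
}
```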
Yadong Ding 4b1fd55e6e storage: add garbage collection in CasMgr
- Changed `delete_blobs` method in `CasDb` to take an immutable reference (`&self`) instead of a mutable reference (`&mut self`).
- Updated `dedup_chunk` method in `CasMgr` to correctly handle the deletion of non-existent blob files from both the file descriptor cache and the database.
- Implemented the `gc` (garbage collection) method in `CasMgr` to identify and remove blobs that no longer exist on the filesystem, ensuring the database and cache remain consistent.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 45e07eab3d storage: implement CasManager to support chunk dedup at runtime
Implement CasManager to support chunk dedup at runtime.
The manager provides two major interfaces:
- add chunk data to the CAS database;
- check whether a chunk exists in the CAS database and copy it to the blob file
  by copy_file_range() if it does.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 51a6045d74 storage: improve copy_file_range
- improve copy_file_range when the target OS is not Linux
- add more comprehensive tests

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 7d1c2e635a storage: add helper copy_file_range
Add helper copy_file_range(), which:
- avoids copying data into userspace;
- may support reflink on xfs, etc.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
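A minimal Linux-only sketch of such a helper built directly on `libc::copy_file_range` (the real nydus helper's signature and error handling may differ):
```
use std::fs::File;
use std::io::Write;
use std::os::unix::io::AsRawFd;

// The kernel copies bytes directly between the two file descriptors without a
// userspace buffer, and may use reflink on filesystems such as XFS.
fn copy_range(src: &File, mut src_off: i64, dst: &File, mut dst_off: i64, len: usize)
    -> std::io::Result<usize>
{
    let ret = unsafe {
        libc::copy_file_range(
            src.as_raw_fd(), &mut src_off,
            dst.as_raw_fd(), &mut dst_off,
            len, 0,
        )
    };
    if ret < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(ret as usize)
    }
}

fn main() -> std::io::Result<()> {
    let mut src = File::create("/tmp/nydus-copy-src")?;
    src.write_all(b"hello chunk data")?;
    let src = File::open("/tmp/nydus-copy-src")?;
    let dst = File::create("/tmp/nydus-copy-dst")?;
    let copied = copy_range(&src, 0, &dst, 0, 16)?;
    println!("copied {} bytes without a userspace buffer", copied);
    Ok(())
}
```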
Mike Hotan 15ec192e3d Nydusify `localfs` support
Signed-off-by: Mike Hotan <mike@union.ai>
2024-10-17 09:42:59 +08:00
Yadong Ding da2510b6f5 action: bump macos-13
The macOS 12 Actions runner image will begin deprecation on 10/7/24.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 18:35:50 +08:00
Yadong Ding 47025395fa lint: bump golangci-lint v1.61.0 and fix lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yadong Ding 678b44ba32 rust: upgrade to 1.75.0
1. reduce the binary size.
2. use more rust-clippy lints.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yifan Zhao 7c498497fb nydusify: modify compact interface
This patch modifies the compact interface to meet the change in
nydus-image.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao 1ccc603525 nydus-image: modify compact interface
This commit uses the compact parameters directly instead of a compact config
file in the cli interface. It also fixes a bug where the chunk key for
ChunkWrapper::Ref is not generated correctly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao a4683baa1e rafs: fix bug in InodeWrapper::is_sock()
We incorrectly used is_dir() to check if a file is a socket. This patch
fixes it.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-09-27 12:35:14 +08:00
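For illustration, the kind of mode-bit check such a fix implies (the real `InodeWrapper::is_sock()` implementation is not shown in this log):
```
// Illustrative only: a socket should be detected from the file mode bits,
// not via is_dir().
fn is_sock(mode: u32) -> bool {
    (mode & libc::S_IFMT) == libc::S_IFSOCK
}

fn main() {
    assert!(is_sock(libc::S_IFSOCK | 0o644));
    assert!(!is_sock(libc::S_IFDIR | 0o755));
    println!("socket detection uses S_IFMT/S_IFSOCK");
}
```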
Yadong Ding 9f439ab404 bats: use nerdctl replace ctr-remote
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:23:19 +08:00
Yadong Ding 0c0ba2adec chore: remove contrib/ctr-remote
Nerdctl is more useful than `ctr-remote`, so deprecate the latter.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:23:19 +08:00
Yadong Ding c5ef5c97a4 chore: keep smoke test component latest version
- Use the latest `nerdctl`, `nydus-snapshotter`, and `cni` in the smoke test env.
- Delete `misc/takeover/snapshotter_config.toml` and use the modified `misc/performance/snapshotter_config.toml` in tests.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:11:08 +08:00
Yadong Ding 37a7b96412 nydusctl: fix build version info
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-20 17:32:55 +08:00
Yadong Ding 742954eb2c tests: change asserts of test_worker_mgr_rate_limiter
assert_eq!(mgr.prefetch_inflight.load(Ordering::Acquire), 3); and assert!(mgr.prefetch_inflight.load(Ordering::Acquire) <= 2); sometimes failed.
The reason is that the worker threads may have already started processing the requests and decreased the counter before the main thread checks it.

- change assert_eq!(mgr.prefetch_inflight.load(Ordering::Acquire), 3); to assert!(mgr.prefetch_inflight.load(Ordering::Acquire) <= 3);
- change thread::sleep(Duration::from_secs(1)); to thread::sleep(Duration::from_secs(2));

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 20:30:27 +08:00
Yadong Ding 849591afa9 feat: add retry mechanism in read blob metadata
When reading the blob size from blob metadata, we should retry reading from the remote if an error occurs.
Also set the maximum number of retries to 3.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 15:12:04 +08:00
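A small sketch of a bounded retry loop like the one described (3 attempts); the function name and error type are placeholders:
```
// Hypothetical sketch: retry a remote read up to 3 times on error.
fn read_blob_size_with_retry<F>(mut read_remote: F) -> std::io::Result<u64>
where
    F: FnMut() -> std::io::Result<u64>,
{
    const MAX_RETRIES: usize = 3;
    let mut last_err = None;
    for attempt in 1..=MAX_RETRIES {
        match read_remote() {
            Ok(size) => return Ok(size),
            Err(e) => {
                eprintln!("read blob size failed (attempt {}): {}", attempt, e);
                last_err = Some(e);
            }
        }
    }
    Err(last_err.unwrap())
}

fn main() {
    let mut calls = 0;
    // Fails twice, then succeeds on the third attempt.
    let size = read_blob_size_with_retry(|| {
        calls += 1;
        if calls < 3 {
            Err(std::io::Error::new(std::io::ErrorKind::Other, "transient"))
        } else {
            Ok(4096)
        }
    });
    println!("blob size = {:?}", size);
}
```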
Yadong Ding e8a4305773 chore: bump go lint action v6 and version 1.61.0
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 15:04:16 +08:00
Yadong Ding 7fc9edeec5 chore: change nydus snapshotter work dir
- use /var/lib/containerd/io.containerd.snapshotter.v1.nydus
- bump nydusd snapshotter v1.14.0

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-18 11:13:22 +08:00
Yadong Ding f4fb04a50f lint: remove unused fieldsPath
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-18 09:18:12 +08:00
dependabot[bot] 481a63b885 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.5+incompatible to 25.0.6+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.5...v25.0.6)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-16 20:23:59 +08:00
BruceAko 9b4c272d78 fix: add tests for nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko 30d53c3f25 fix: add a doc about nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko 309feab765 fix: add getLocalPath() and close decompressor
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko a1ceb176f4 feat: support local tarball for nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
Jiancong Zhu 6106fbc539 refactor: fixed the unnecessary mutex lock operation
Signed-off-by: Jiancong Zhu <Chasing1020@gmail.com>
2024-09-12 18:26:26 +08:00
Yifan Zhao d89410f3fc nydus-image: refactor unpack/compact cli interface
Since unpack and compact subcommands does not need the entire nydusd
configuration file, let's refactor their cli interface and directly
take backend configuration file.

Specifically, we introduce `--backend-type`, `--backend-config` and
`--backend-config-file` options to specify the backend type and remove
`--config` option.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>

Fixes: #1602
2024-09-10 14:33:51 +08:00
Yifan Zhao 36fe98b3ac smoke: fix invalid cleanup issue in main_test.go
The cleanup of the new registry is invalid, as TestMain() calls os.Exit()
and will not run deferred functions. This patch fixes the issue by
doing the cleanup explicitly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-09-10 14:33:51 +08:00
fappy1234567 114ec880a2 smoke: add mount api test case
Signed-off-by: fappy1234567 <2019gexinlei@bupt.edu.cn>
2024-08-30 15:36:59 +08:00
Yan Song 3eb5c7b5ef nydusify: small improvements for mount & check subcommands
- Add `--prefetch` option for enabling full image data prefetch.
- Support `HTTP_PROXY` / `HTTPS_PROXY` env for enabling proxy for nydusd.
- Change nydusd log level to `warn` for mount & check subcommands.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-08-28 11:07:26 +08:00
Yadong Ding 52ed07b4cf deny: ignore RUSTSEC-2024-0357
openssl 0.10.55 can't build in riscv64 and ppc64le.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-08-08 14:42:44 +08:00
Yan Song a6bd8ccb8d smoke: add nydusd hot upgrade test case
The test case in hot_upgrade_test.go is different from takeover_test.go:
it does not depend on the snapshotter component.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-07-22 09:22:47 +08:00
Yan Song 642571236d smoke: refactor nydusd methods for testing
Rename and add some methods on the nydusd struct, to make it easy to control
the nydusd process.

Also support a SKIP_CASES env to allow skipping some cases.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-07-22 09:22:47 +08:00
Yadong Ding 32b6ead5ec action: fix upload-coverage-to-codecov with secret
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-07-15 09:32:19 +08:00
Yadong Ding c92fe6512f action: upgrade macos to 12
macos-11 has been deprecated since 2024-06-28.
https://docs.github.com/actions/using-jobs/choosing-the-runner-for-a-job

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-07-15 09:32:19 +08:00
BruceAko 3684474254 fix: rename mirrors' check_pause_elapsed to health_check_pause_elapsed
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-11 09:13:11 +08:00
BruceAko cd24506d43 feat: skip health check if connection is not active
1. Add a last_active field to Connection. When Connection.call() is called, last_active is updated to the current timestamp.
2. Add a check_pause_elapsed field to ProxyConfig and MirrorConfig. A Connection is considered inactive if the time since last_active exceeds check_pause_elapsed.
3. In the proxy and mirror health-checking thread loops, if the connection is not active (exceeds check_pause_elapsed), that round of health check is skipped.
4. Update the documentation.

Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-11 09:13:11 +08:00
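A rough sketch of the inactivity check described in points 1-3; the struct and field names mirror the description but are otherwise assumed:
```
// Hypothetical sketch: skip a health-check round when the connection has not
// been used for longer than check_pause_elapsed.
use std::time::{Duration, Instant};

struct Connection {
    last_active: Instant,
    check_pause_elapsed: Duration,
}

impl Connection {
    // In the real code the timestamp would be refreshed from Connection::call().
    fn touch(&mut self) {
        self.last_active = Instant::now();
    }

    fn should_skip_health_check(&self) -> bool {
        self.last_active.elapsed() > self.check_pause_elapsed
    }
}

fn main() {
    let mut conn = Connection {
        last_active: Instant::now(),
        check_pause_elapsed: Duration::from_secs(300),
    };
    conn.touch();
    println!("skip health check: {}", conn.should_skip_health_check());
}
```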
YuQiang 19b09ed12f fix: add namespace flag for nydusify commit.
Signed-off-by: YuQiang <yu_qiang@mail.nwpu.edu.cn>
2024-07-09 18:15:25 +08:00
BruceAko da5d423b8c fix: correct some typos in Nydusify
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-09 18:14:16 +08:00
Lin Wang 455c856aa8 nydus-image: add documentation for chunk-level deduplication
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
Lin Wang 5dec7536fa nydusify: add chunkdict generate command and corresponding tests
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
Lin Wang 087c0b1baf nydus-image: Add support for chunkdict generation
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
泰友 332f3dd456 fix: compatibility to image without ext table for blob cache
There are scenarios where the cache file is smaller than the expected size, such as:

    1. Nydusd 1.6 generates the cache file by prefetch, which is smaller than the size in the boot.
    2. Nydusd 2.2 generates the cache file by prefetch, when the image does not provide ext blob tables.
    3. Nydusd does not have enough time to fill the cache for a blob.

    The equality check for the size is too strict for both 1.6
    compatibility and 2.2 concurrency. This PR ensures the blob size is smaller
    than or equal to the expected size. It also truncates the blob cache when it is smaller
    than the expected size.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
泰友 7cf2d4a2d7 fix: bad read by wrong data region
User IO may involve discontinuous segments in different chunks. A bad
    read is produced by merging them into a continuous one, which is what
    Region does. This PR separates discontinuous segments into different
    regions, avoiding forced merging.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
泰友 64dddd2d2b fix: residual fuse mountpoint after graceful shutdown
1. Case 1: The fuse server exits in a thread other than main. There is a possibility
       that the process finishes before the shutdown of the server.
    2. Case 2: The fuse server exits in the state machine thread. There is a
       possibility that the state machine does not respond to the signal-catching
       thread. Then a deadlock happens. The process exits before the shutdown of
       the server.

    This PR aims to separate shutdown actions from the signal-catch
    handler. It only notifies the controller. The controller exits with the
    shutdown of the fuse server. No race. No deadlock.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
Yan Song de7cfc4088 nydusify: upgrade acceleration-service v0.2.14
To bring the fixup: https://github.com/goharbor/acceleration-service/pull/290

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-06-06 10:18:45 +08:00
Yadong Ding 79a7015496 chore: upgrade components version in test env
1. Upgrade cni to v1.5.0 and try to fix the error in TestCommit.
2. Upgrade nerdctl to v1.7.6.
3. Upgrade nydus-snapshotter to v0.13.13 and fix a path error.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-06 09:56:26 +08:00
BruceAko 3b9b0d4588 fix: correct some typos and grammatical problem
Signed-off-by: chongzhi <chongzhi@hust.edu.cn>
2024-06-06 09:55:11 +08:00
Yadong Ding 7ea510b237 docs: fix incorrect file path
https://github.com/containerd/nydus-snapshotter/blob/main/misc/snapshotter/config.toml#L27
In the snapshotter config, the nydusd config file path is /etc/nydus/nydusd-config.fusedev.json.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-06 09:50:40 +08:00
dependabot[bot] 34ab06b6b3 build(deps): bump golang.org/x/net in /contrib/ctr-remote
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 16:32:26 +08:00
dependabot[bot] 9483286863 build(deps): bump golang.org/x/net in /contrib/nydusify
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 15:56:24 +08:00
Yadong Ding 13a9aa625b fix: downgraded to codecov/codecov-action@v4.0.0
codecov/codecov-action@v4 is unstable.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-04 15:59:46 +08:00
Yadong Ding 305a418b31 fix: upload-coverage failed in master
When the action doesn't run on a pull request, Codecov GitHub Action v4 needs a token.
Reference:
1. https://github.com/codecov/codecov-action?tab=readme-ov-file#breaking-changes
2. https://docs.codecov.com/docs/codecov-uploader#supporting-token-less-uploads-for-forks-of-open-source-repos-using-codecov

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-04 15:18:48 +08:00
Qinqi Qu 4a16402120 action: bump codecov-action to v4
To solve the problem of CI failure.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-17 16:39:48 +08:00
Qinqi Qu 1d1691692c deps: update indexmap from v1 to v2
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu d1dfe7bd65 backend-proxy: refactor to support latest versions of crates
Also fix some security alerts of Dependabot:
1. https://github.com/advisories/GHSA-q6cp-qfwq-4gcv
2. https://github.com/advisories/GHSA-8r5v-vm4m-4g25
3. https://github.com/advisories/GHSA-g98v-hv3f-hcfr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu 3b2a0c0bcc deps: remove dependency on atty
The atty crate is not maintained, so flexi_logger and clap are updated
to remove the dependency on atty.

Fix: https://github.com/advisories/GHSA-g98v-hv3f-hcfr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu 9826b2cc3f bats test: add a backup image to avoid network errors
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-09 17:32:28 +08:00
dependabot[bot] 260a044c6e build(deps): bump h2 from 0.3.24 to 0.3.26
Bumps [h2](https://github.com/hyperium/h2) from 0.3.24 to 0.3.26.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.26/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.24...v0.3.26)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 15:27:13 +08:00
dependabot[bot] e926d2ff9c build(deps): bump google.golang.org/protobuf in /contrib/nydusify
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-31 11:36:18 +08:00
dependabot[bot] fc52ebc7a1 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.3+incompatible to 25.0.5+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.3...v25.0.5)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-29 17:05:58 +08:00
YuQiang af914dd1a5 fix: modify benchmark prepare bash script path
1. Correct the performance test prepare bash script path.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-26 10:02:52 +08:00
Adolfo Ochagavía 2308efa6f7 Add compression method support to zran docs
Signed-off-by: Adolfo Ochagavía <github@adolfo.ochagavia.nl>
2024-03-25 17:38:44 +08:00
Wei Zhang 9ae8e3a7b5 overlay: add overlay implementation
With the help of the newly introduced overlay filesystem in the `fuse-backend-rs`
library, we can now create a writable rootfs in Nydus. The implementation of the
writable rootfs is based on a passthrough FS (as the upper layer) over a
readonly rafs (as the lower layer).

To do so, the configuration is extended with some overlay options.

Signed-off-by: Wei Zhang <weizhang555.zw@gmail.com>
2024-03-15 14:15:54 +08:00
YuQiang 3dfa9e9776 docs: add doc for nydus-image check command
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 11:10:46 +08:00
YuQiang f10782c79d docs: add doc for nydusify commit command
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:33:02 +08:00
YuQiang ae842f9b8b action: merge and move prepare.sh
remove misc/performance/prepare.sh and merge it into misc/prepare.sh

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang 26b1d7db5a feat: add smoke test for nydusify commit
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang c14790cb21 feat: add nydusify commit command
add nydusify commit command to commit a nydus container into a nydus image

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang 19daa7df6f feat: port the write-overlay-upperdir capability
Port the capability of getting and writing the diff between the overlayfs upper and lower layers.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
dependabot[bot] a0ec880182 build(deps): bump github.com/go-jose/go-jose/v3 in /contrib/nydusify
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/v3.0.3/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.1...v3.0.3)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-08 19:01:13 +08:00
dependabot[bot] c57e7c038c build(deps): bump mio in /contrib/nydus-backend-proxy
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.5 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.5...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-08 19:00:57 +08:00
dependabot[bot] eba6afe5b8 build(deps): bump mio from 0.8.10 to 0.8.11
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.10 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.10...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-07 14:46:07 +08:00
YuQiang aaab560aa9 feat: add fs_version and compressor output of nydus image check
1. Add a rafs_version value, output like 5 or 6.
2. Add a compressor algorithm value, like zstd.
Add rafs_version and compressor JSON output to nydus image check, so that more info can be obtained when necessary.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-02-29 14:15:39 +08:00
Yadong Ding 7b3cc503a2 action: add contrib-lint in smoke test
1. Use the official GitHub action for golangci-lint from its authors.
2. Fix golang lint errors with v1.56.
3. Separate test and golang lint: sometimes we need tests without golang lint, and sometimes we just want to do golang lint.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-02-21 11:44:33 +08:00
dependabot[bot] 5fb809605d build(deps): bump github.com/opencontainers/runc in /contrib/ctr-remote
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-20 13:11:38 +08:00
Yan Song abaf9caa16 docs: update outdated dingtalk QR code
And remove the outdated technical meeting schedule.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-02-20 10:17:19 +08:00
dependabot[bot] d7ea50e621 build(deps): bump github.com/opencontainers/runc in /contrib/nydusify
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-18 17:11:09 +08:00
Yadong Ding d12634f998 action: bump nodejs20 github action
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-02-06 09:36:54 +08:00
loheagn 9a1c47bd00 docs: add doc for nydusd failover and hot upgrade
Signed-off-by: loheagn <loheagn@icloud.com>
2024-01-23 20:01:48 +08:00
Yadong Ding 3f47f1ec6d fix: upload-artifact v4 break changes
upload-artifact v4 can't upload artifacts with the same name

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-19 11:01:50 +08:00
Yadong Ding 5f26f8ee1c fix: upgrade h2 to 0.3.24 to fix RUSTSEC-2024-0003
ID: RUSTSEC-2024-0003
Advisory: https://rustsec.org/advisories/RUSTSEC-2024-0003
An attacker with an HTTP/2 connection to an affected endpoint can send a steady stream of invalid frames to force the
generation of reset frames on the victim endpoint.
By closing their recv window, the attacker could then force these resets to be queued in an unbounded fashion,
resulting in Out Of Memory (OOM) and high CPU usage.

This fix is corrected in [hyperium/h2#737](https://github.com/hyperium/h2/pull/737), which limits the total number of
internal error resets emitted by default before the connection is closed.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-18 16:58:44 +08:00
Yadong Ding eae9ed7e45 fix: upload-artifact@v4 breaks in release
Error:
Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-18 16:58:44 +08:00
Yadong Ding a3922b8e0d action: bump upload-artifact/download-artifact v4
Since https://github.com/actions/download-artifact/issues/249 are fixed,
we can use the v4 version.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-17 10:04:49 +08:00
Wenhao Ren 9dae4eccee storage: fix the tiny prefetch request for batch chunks
By passing the chunk continuity check and correctly sorting batch chunks,
the prefetch request will no longer be interrupted by batch chunks.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren d7190d9fee action: add convert test for batch chunk
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 8bb53a873a storage: add validation and unit test for batch chunks
1. Add the validation for batch chunks.
2. Add unit test for `BatchInflateContext`.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 7f799ec8bb storage: introduce `BlobCCI` for reading batch chunk info
`BlobCompressionContextInfo` is needed to read batch chunk info.
`BlobCCI` is introduced to simplify the code
and reduce the number of times this context is fetched, via lazy loading.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren c557f99d08 storage: fix the read amplification for batch chunks.
Read amplification for batch chunks is not correctly implemented and may crash.
The read amplification is rewritten to fix this bug.
A unit test for read amplification is also added to cover this code.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 676acd0a6f storage: fix the Error type to log the error correctly
Currently, many errors are output as `os error 22`, losing the customized log info.
So we change the Error type to correctly output and log the error info
as expected.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren fa72c98ffc rafs: add `is_batch()` for `BlobChunkInfo`
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren b4fe28aad6 rafs: move `compressed_offset` from `BatchInflateContext` to chunk info for batch chunks.
1. `compressed_offset` is used for build-time and runtime sorting of chunk info,
so we move `compressed_offset` from `BatchInflateContext` to the chunk info for batch chunks.

2. The `compressed_size` for blobs in batch mode is not correctly set.
We fix it by setting it to the value of `dumped_size`.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
dependabot[bot] 596492b932 build(deps): bump github.com/go-jose/go-jose/v3 in /contrib/nydusify
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.0 to 3.0.1.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.0...v3.0.1)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-04 18:52:59 +08:00
Yadong Ding 2743f163b9 deps: update the latest version and sync
Bump containerd to v1.7.11 and golang.org/x/crypto to v0.17.0.
Resolve GHSA-45x7-px36-x8w8 and GHSA-7ww5-4wqc-m92c.
Update dependencies to the latest versions and sync them across multiple modules.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-04 14:11:36 +08:00
loheagn 04b4552e03 tests: add smoke test for hot upgrade
Signed-off-by: loheagn <loheagn@icloud.com>
2024-01-04 14:10:31 +08:00
Qinqi Qu 5ecda8c057 bats test: upgrade golang version to 1.21.5
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-01-03 11:54:30 +08:00
Qinqi Qu 8e1799e5df bats test: change rust docker image to Debian 11 bullseye version
The rust:1.72.1 image is based on Debian 12 bookworm and requires
an excessively high version of glibc, so the compiled nydus program
cannot run on some old operating systems that lack the required
glibc version.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-01-03 11:54:30 +08:00
Yadong Ding f08587928b rust: bump 1.72.1 and fix errors
https://rust-lang.github.io/rust-clippy/master/index.html#non_minimal_cfg
https://rust-lang.github.io/rust-clippy/master/index.html#unwrap_or_default
https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrows_for_generic_args
https://rust-lang.github.io/rust-clippy/master/index.html#reserve_after_initialization
https://rust-lang.github.io/rust-clippy/master/index.html#arc_with_non_send_sync
https://rust-lang.github.io/rust-clippy/master/index.html#useless_vec

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-29 08:58:02 +08:00
Xin Yin cf76edbc52 dep: upgrade tokio to 1.35.1
Fix panic after all prefetch workers exit in fscache mode.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-27 20:36:23 +08:00
loheagn 7f27b7ae78 tests: add smoke test for nydusd failover
Signed-off-by: loheagn <loheagn@icloud.com>
2023-12-25 16:35:14 +08:00
Yadong Ding 17c373fc29 nydusify: fix error in go vet
`sudo` in the action changes the go env, so remove sudo.
As the runner user, we can create files in unpacktargz-test instead of temp/unpacktargz-test,
so don't use os.CreateTemp in archive_test.go.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding d5242901f9 action: delete useless env
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding 39daa97bac nydusify: fix unit test fail in utils
utils_test.go:248:
                Error Trace:    /root/nydus/contrib/nydusify/pkg/utils/utils_test.go:248
                Error:          Should be true
                Test:           TestRetryWithHTTP

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding 2cd8ba25bd nydusify: add unit test for nydusify
We had removed the e2e test files in nydusify, so we need to add unit tests
to improve test coverage.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 3164f19ab7 makefile: remove build in test
Use `make test` to run unit tests; it doesn't need a build.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 6675da3186 action: use upload-artifact/download-artifact v3
The master branch is unstable, so change to v3.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 7772082411 action: use sudo in contrib-unit-test-coverage
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 65046b0533 refactor: use ErrSchemeMismatch and ECONNREFUSED
ref: https://github.com/golang/go/issues/44855

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:03:20 +08:00
Yadong Ding b5e88a4f4e chore: upgrade go version to 1.21
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:03:20 +08:00
Yadong Ding 18ba2eda63 action: fix failed to compile `cross v0.2.4`
error: failed to compile `cross v0.2.4`, intermediate artifacts can be found at `/tmp/cargo-installG1Scm4`

Caused by:
  package `home v0.5.9` cannot be built because it requires rustc 1.70.0 or newer, while the currently active rustc version is 1.68.2
  Try re-running cargo install with `--locked`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding ab06841c39 revert build(deps): bump openssl from 0.10.55 to 0.10.60
Revert https://github.com/dragonflyoss/nydus/pull/1513.
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding e9d63f5d3b chore: upgrade dbs-snapshot to 1.5.1
v1.5.1 brings support for ppc64le and riscv64.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding 1a1e8fdb98 action: test build with more architectures
Test build with more architectures, but only use `amd64` in subsequent jobs.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding a4ec9b8061 tests: add go module unit coverage to Codecov
resolve dragonflyoss#1518.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 09:48:48 +08:00
Yadong Ding 54a3395434 action: add contrib-test and build
Use the contrib-test job to test the golang modules in contrib.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 09:48:48 +08:00
Yadong Ding 0458817278 chore: modify repo to dragonflyoss/nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-18 17:41:00 +08:00
Yadong Ding 763786f316 chore: change go module name to nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-18 17:41:00 +08:00
dependabot[bot] d6da88a8f1 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 23.0.3+incompatible to 24.0.7+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v23.0.3...v24.0.7)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-18 13:38:23 +08:00
Yadong Ding 06755fe74b tests: remove useless test files
Since https://github.com/dragonflyoss/nydus/pull/983 we have the new smoke test, so we can remove the
old smoke test files, including nydusify and nydus.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:14:05 +08:00
Yadong Ding 2bca6f216a smoke: use golangci-lint to improve code quality
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding 0e81f2605d nydusify: fix errors found by golangci-lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding f98b6e8332 action: upgrade golangci-lint to v1.54.2
We have some golang lint error in nydusify.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding 1d289e25f9 rust: update to edition2021
Since we are using cargo 1.68.2 we don't need to require edition 2018 any more.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:10:50 +08:00
Yadong Ding 194641a624 chore: remove go test cover
In the golang smoke test, go test doesn't need coverage analysis or creating a coverage profile.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-13 15:54:42 +08:00
Yiqun Leng 45331d5e18 bats test: move the logic of generating dockerfile into common lib
Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-12-13 15:25:15 +08:00
dependabot[bot] 55a999b9e6 build(deps): bump openssl from 0.10.55 to 0.10.60
Bumps [openssl](https://github.com/sfackler/rust-openssl) from 0.10.55 to 0.10.60.
- [Release notes](https://github.com/sfackler/rust-openssl/releases)
- [Commits](https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.55...openssl-v0.10.60)

---
updated-dependencies:
- dependency-name: openssl
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-13 13:09:44 +08:00
Yan Song 87e3db7186 nydusify: upgrade containerd package
To import some fixups from https://github.com/containerd/containerd/pull/9405.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-13 09:57:20 +08:00
Qinqi Qu a84400d165 misc: update rust-toolchain file to TOML format
1. Move rust-toolchain to rust-toolchain.toml
2. Update the parsing process of rust-toolchain in the test script.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-12-12 20:27:12 +08:00
Yadong Ding d793aee881 action: delete clean-cache
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-11 09:47:54 +08:00
Yadong Ding a3e60c0801 action: benchmark add conversion_elapsed
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-08 09:33:03 +08:00
Yadong Ding 794f7f7293 smoke: add image conversion time in benchmark
ConversionElapsed can express the performance of accelerated image conversion.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-08 09:33:03 +08:00
Xin Yin e12416ef09 upgrade: change to use dbs_snapshot crate
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Xin Yin 7b25d8a059 service: add unit test for upgrade manager
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Xin Yin e0ad430486 feat: support takeover for fscache
Refine the UpgradeManager so that it can also store status for the
fscache daemon, and make the takeover feature apply to both fuse and
fscache modes.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Nan Li 16f5ac3d14 feat: implement `takeover` for nydusd fusedev daemon
This patch implements the `save` and `restore` functions in the `fusedev_upgrade` module in the service crate.
To do this,
- This patch adds a new crate named `nydus-upgrade` to the workspace. The `nydus-upgrade` crate has some util functions that help do serialization and deserialization for Rust structs using the versionize and snapshot crates. The crate also has a trait named `StorageBackend`, which can be used to store and restore fuse session fds and state data for the upgrade action, and there is also an implementation named `UdsStorageBackend` which uses a unix domain socket to do this.
- As we have to use the same fuse session connection, backend filesystem mount commands, and Vfs to re-mount the rafs for the new daemon (created for "hot upgrade" or failover), this patch adds a new struct named `FusedevState` to hold this information. The `FusedevState` is serialized and stored into the `UdsStorageBackend` (which happens in the `save` function in the `fusedev_upgrade` module) before the new daemon is created, and the `FusedevState` is deserialized and restored from the `UdsStorageBackend` (which happens in the `restore` function in the `fusedev_upgrade` module) when the new daemon is triggered by `takeover`.

Signed-off-by: Nan Li <loheagn@icloud.com>
Signed-off-by: linan.loheagn3 <linan.loheagn3@bytedance.com>
2023-12-07 20:10:13 +08:00
Yadong Ding e4cf98b125 action: add oci in benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-07 10:19:53 +08:00
Yadong Ding b87814b557 smoke: support different snapshotters in bench
We can use overlayfs to test OCI V1 image.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-07 10:19:53 +08:00
Jiang Liu 50b8988751 storage: use connection pool for sqlite
SQLite connections are not thread safe, so use a connection pool to
support multi-threading.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
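A hedged sketch of pooling SQLite connections; it assumes the r2d2 and r2d2_sqlite crates, which may not be exactly what the storage crate uses:
```
// Hypothetical sketch: each thread checks a connection out of the pool instead
// of sharing a single SQLite connection across threads.
use r2d2_sqlite::SqliteConnectionManager;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let manager = SqliteConnectionManager::file("cas.db");
    let pool = r2d2::Pool::builder().max_size(8).build(manager)?;

    let conn = pool.get()?; // checked out by the current thread only
    conn.execute_batch(
        "CREATE TABLE IF NOT EXISTS chunks (chunk_id TEXT PRIMARY KEY, blob TEXT, offset INTEGER)",
    )?;
    println!("pool ready, idle connections: {}", pool.state().idle_connections);
    Ok(())
}
```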
Jiang Liu 1c293cfefd storage: move cas db from util into storage
Move cas db from util into storage.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
Jiang Liu bfc171a933 util: refine database structure for CAS
Refine the sqlite database structure for storing CAS information.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
xwb1136021767 6ca3ca7dc0 utils: introduce sqlite to store CAS related information
Introduce sqlite to store CAS related information.

Signed-off-by: Huang Jianan <jnhuang95@gmail.com>
Signed-off-by: xwb1136021767 <1136021767@qq.com>
2023-12-06 15:54:09 +08:00
Yadong Ding 93ef71db79 action: use more images in benchmark
Include:
- python:3.10.7
- golang:1.19.3
- ruby:3.1.3
- amazoncorretto:8-al2022-jdk

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:14:17 +08:00
Yadong Ding ba8d3102ab smoke: support more images in container
Support: python, golang, ruby, amazoncorretto.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:14:17 +08:00
Yadong Ding eeddfff9a0 nydusify: fix deprecated usages
1. replace `github.com/docker/distribution` with `github.com/distribution/reference`
2. replace `EndpointResolver` with `BaseEndpoint`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:12:45 +08:00
Yadong Ding 11592893ea nydusify: update dependencies version
github.com/aliyun/aliyun-oss-go-sdk: `v2.2.6+incompatible` -> `v3.0.1+incompatible`
github.com/aws/aws-sdk-go-v2 `v1.17.6` -> `v1.23.5`
github.com/aws/aws-sdk-go-v2/config `v1.18.16` -> `v1.25.11`
github.com/aws/aws-sdk-go-v2/credentials `v1.13.16` -> `v1.16.9`
github.com/aws/aws-sdk-go-v2/feature/s3/manager `v1.11.56` -> `v1.15.4`
github.com/aws/aws-sdk-go-v2/service/s3 `v1.30.6` -> `v1.47.2`
github.com/containerd/nydus-snapshotter `v0.13.2 -> v0.13.3`
github.com/docker/cli `v24.0.6+incompatible` -> `v24.0.7+incompatible`
github.com/docker/distribution `v2.8.2+incompatible` -> `v2.8.3+incompatible`
github.com/google/uuid `v1.3.1` -> `v1.4.0`
github.com/hashicorp/go-hclog `v1.3.1` -> `v1.5.0`
github.com/hashicorp/go-plugin `v1.4.5` -> `v1.6.0`
github.com/opencontainers/image-spec `v1.1.0-rc4` -> `v1.1.0-rc5`
github.com/prometheus/client_golang `v1.16.0` -> `v1.17.0`
github.com/sirupsen/logrus `v1.9.0` -> `v1.9.3`
github.com/stretchr/testify `v1.8.3` -> `v1.8.4`
golang.org/x/sync `v0.3.0` -> `v0.5.0`
golang.org/x/sys `v0.13.0` -> `v0.15.0`
lukechampine.com/blake3 `v1.1.5` -> `v1.2.1`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:12:45 +08:00
Yadong Ding 3f999a70c5 action: add `node:19.8` in benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 14:54:31 +08:00
Yadong Ding e0041ec9cb smoke: benchmark supports node
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 14:54:31 +08:00
Yadong Ding d266599128 docs: add benchmark badge with schedule event
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 11:38:31 +08:00
Yan Song e0fc6a1106 contrib: fix golangci lint for ctr-remote
Fix the lint check error by updating containerd package:

```
golangci-lint run
Error: commands/rpull.go:89:2: SA1019: log.G is deprecated: use [log.G]. (staticcheck)
	log.G(pCtx).WithField("image", ref).Debug("fetching")
	^
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-01 10:59:28 +08:00
Yan Song 838593fed3 nydusify: support --push-chunk-size option
Reference: https://github.com/containerd/containerd/pull/9405

Will replace the containerd dep with the upstream version once the PR is merged.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-01 10:59:28 +08:00
Yadong Ding f1de095905 action: use same golang cache
setup-go@v4 uses the cache name `setup-go-Linux-ubuntu22-go-1.20.11-${hash}`.
`actions/cache@v3` restores the same content, so just restore the same cache.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 10:42:24 +08:00
Yadong Ding a1ad70a46c action: update setup-go to v4 and enabled caching
After updating setup-go to v4, it can cache by itself and select the Go version
from `go.work`.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 08:39:32 +08:00
Yadong Ding 40489c7365 action: update rust cache version and share caches
1. update Swatinem/rust-cache to v2.7.0.
2. share caches between jobs in release, smoke, convert and benchmark.
3. save the rust cache only on the master branch in the smoke test.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 08:39:32 +08:00
wuheng 3f5c2c8bb9 docs: nydus-sandbox.yaml add uid
Signed-off-by: wuheng <wuheng@kylinos.cn>
2023-11-30 15:05:07 +08:00
Yadong Ding f5001bbdc3 misc: delete python version benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 0e10dbcaae action: use smoke BenchmarkTest in Benchmark
We should deprecate the python version of the benchmark.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 822c935c77 smoke: add benchmark test
1. refactor performance_test, move clearContainer to tools.
2. add benchmark test.
The benchmark test runs a container from the image and saves metrics to a JSON file.
For example:
```json
{
	"e2e_time": 2747131,
	"image_size": 2107412,
	"read_amount": 121345,
	"read_count": 121
}
```

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 8ad7ae541d fix: smoke test-performance env var setup failure
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-29 17:12:43 +08:00
zyfjeff 96f402bfee Let targz type conversions support multi-stream gzip
code reference https://github.com/madler/zlib/blob/master/examples/zran.c

At present, neither zran nor normal targz conversion considers multi-stream
gzip when decompressing, so such images cause problems; this PR adds
support for multi-stream gzip.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba-inc.com>
2023-11-29 12:57:37 +08:00
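A minimal sketch of the multi-stream idea, assuming the `flate2` crate: `MultiGzDecoder` keeps decoding across gzip member boundaries, whereas `GzDecoder` stops after the first member. This illustrates the concept only and is not the zran implementation itself.

```rust
use flate2::read::MultiGzDecoder;
use std::io::Read;

/// Decompress a blob that may consist of several concatenated gzip streams.
fn decompress_multi_stream(gz: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut out = Vec::new();
    MultiGzDecoder::new(gz).read_to_end(&mut out)?;
    Ok(out)
}
```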
zyfjeff 8247fe7b01 Update libz-sys & flate2 crates to the latest versions
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba-inc.com>
2023-11-29 12:57:37 +08:00
Qinqi Qu 091697918c action: disable codecov patch check
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-11-27 09:00:33 +08:00
Yadong Ding f21fe67a81 action: use performance test in smoke test
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Yadong Ding c51ecd0e42 smoke: add performance test
Add a performance test to make sure there is no performance regression.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Yadong Ding 4c33d4e605 action: remove benchmark test in smoke
We will rewrite it as performance_test in Go.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Wenhao Ren 71dfc6ff7e builder: align file dump order with prefetch list, fix #1488
1. The dump order for prefetch files does not match the order specified in the prefetch list,
so let's fix it.
2. The construction of `Prefetch` is slow due to inefficient matching of prefetch patterns.
By adopting a more efficient data structure, this process has been accelerated (see the sketch after this commit).
3. Unit tests for prefetch are added.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-27 08:58:52 +08:00
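One way such a speed-up can be done is to index the prefetch patterns in a hash map keyed by path instead of scanning a list for every file; the sketch below is an illustration under that assumption and is not the builder's actual `Prefetch` implementation.

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

struct PrefetchPatterns {
    // Pattern path -> its position in the user-provided prefetch list,
    // so matched files can later be dumped in list order.
    patterns: HashMap<PathBuf, usize>,
}

impl PrefetchPatterns {
    fn new(list: &[PathBuf]) -> Self {
        let patterns = list
            .iter()
            .enumerate()
            .map(|(i, p)| (p.clone(), i))
            .collect();
        Self { patterns }
    }

    /// O(depth) lookup: returns the list index if `path` or an ancestor is listed.
    fn matches(&self, path: &Path) -> Option<usize> {
        path.ancestors()
            .find_map(|p| self.patterns.get(p).copied())
    }
}
```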
Yadong Ding e2b131e4c6 go mod: sync deps by go mod tidy
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-22 14:50:20 +08:00
Yadong Ding 6f9551a328 git: add go.work.sum to .gitignore
`go.work.sum` changes too often and grows too large. We only need it to work well locally.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-22 14:50:20 +08:00
Yan Song 767adcf03a nydusify: fix unnecessary manifest index when copy one platform image
When using the command to copy an image with one specified platform:

```
nydusify copy --platform linux/amd64 --source nginx --target localhost:5000/nginx
```

We found the target image is a manifest index format like:

```
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee",
      "size": 1778,
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    }
  ]
}
```

This can be a bit strange; in fact, just the manifest is enough. This patch improves that.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-10 16:50:41 +08:00
Wenhao Ren c9fbce8ccf nydusd: add the config support of `amplify_io`
Add support for `amplify_io` in the nydusd config file
to configure read amplification.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-09 14:15:18 +08:00
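A hedged sketch of how a config struct can expose such a knob via serde defaults; the struct name, field placement relative to nydusd's real `ConfigV2`, and the default value shown are assumptions for illustration.

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct RafsConfigSketch {
    /// Merge small adjacent reads into requests up to this many bytes
    /// before going to the storage backend (read amplification).
    #[serde(default = "default_amplify_io")]
    amplify_io: u32,
}

fn default_amplify_io() -> u32 {
    // Assumed default, for illustration only.
    128 * 1024
}
```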
Wenhao Ren 468eeaa2cf rafs: rename variable names about prefetch configuration
Variable names related to prefetch are currently confusing.
So we merge variable names that have the same meaning,
while NOT affecting the field names read from the configuration file.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-09 14:15:18 +08:00
Peng Tao 46dca1785f rafs/builder: fix build on macos
These are u16 on macos.

Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
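For illustration: on macOS several `libc::stat` fields (such as `st_mode`) are narrower integers than on Linux, so an explicit cast keeps shared builder code compiling on both. This is a minimal, assumed example rather than the exact fix in this commit.

```rust
/// `st_mode` is u16 on macOS and u32 on Linux; casting keeps the code portable.
fn file_mode(st: &libc::stat) -> u32 {
    st.st_mode as u32
}
```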
Peng Tao e06c1ca85f ut: stop testing some unit tests on macos
These unit tests only cover blob cache, fscache, and the Linux device ID,
none of which work on macOS at all.

Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
Peng Tao 3061050e20 smoke: add macos build test
Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
Yan Song 1c24213802 docs: update multiple snapshotter switch troubleshooting
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 10:28:10 +08:00
weizhen.zt b572a0f24e utils: bugfix for unit test case.
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt c608ef6231 storage: move toml to dev-dependencies
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 19185ed0d2 builder: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt cc5a8c5035 api: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 60db5334ff rafs: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt f75e0da3ad storage: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 9021871596 utils: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
Yan Song 360b59fa98 docs: unify object_prefix field for oss/s3 backend
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 09:47:48 +08:00
Yan Song ea5db01442 docs: some improvements for usage
1. buildkit upstream follow-up is slow, so update to nydusaccelerator/buildkit;
2. runtime-level snapshotter usage needs an extra containerd patch;
3. add an s3 storage backend example to the nydusd doc page;

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 09:47:48 +08:00
hijackthe2 002b2f2c8a builder: fix assertion error by explicitly specifying the type when building nydus in a macOS arm64 environment.
Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-07 13:42:04 +08:00
hijackthe2 89882a4002 storage: add some unit test cases
Some unit test cases are added for device.rs, meta/batch.rs, meta/chunk_info_v2.rs, meta/mod.rs, and meta/toc.rs in storage/src to increase code coverage.

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-07 09:13:12 +08:00
Yadong Ding 2fb293411d action: get latest tag by Github API
Use https://api.github.com/repos/Dragonflyoss/nydus/releases/latest to get the
latest tag of nydus, and use it in smoke/integration-test.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-07 09:08:14 +08:00
Junduo Dong 8b81a99108 contrib: correct parameter name
Signed-off-by: Junduo Dong <andj4cn@gmail.com>
2023-11-06 09:04:31 +08:00
hijackthe2 240af3e336 builder: add some unit test cases
Some unit test cases are added for compact.rs, lib.rs, merge.rs, stargz.rs, core/context.rs, and core/node.rs in builder/src to increase code coverage.

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 16:56:14 +08:00
hijackthe2 689900cc18 ci: add configurations to setup fscache
Since using `/dev/cachefiles` requires sudo, some environment variables are defined and we use `sudo -E` to pass them to the sudo operations.

The script file for enabling fscache is misc/fscache/setup.sh

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
hijackthe2 cdc41de069 docs: add fscache configuration
Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
hijackthe2 3c57fc608c tests: add unit test case for blob_cache.rs, block_device.rs, fs_cache.rs, singleton.rs under service/src
1. In blob_cache.rs, two simple lines of code have been added to cover previously missed cases.
2. In block_device.rs, some test cases are added to cover the functions export(), block_size(), blocks_to_size(), and size_to_blocks().
3. In fs_cache.rs, some test cases are added to cover try_from() for the structs FsCacheMsgOpen and FsCacheMsgRead.
4. In singleton.rs, some test cases are added to cover initialize_blob_cache() and initialize_fscache_service(). In addition, fscache must be correctly enabled first, as the device file `/dev/cachefiles` will be used by initialize_fscache_service().

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
Yadong Ding 4d4ebe66c0 go work: support go workspace mode and sync deps
We have multiple Go modules in the repo, and Go supports workspaces;
see https://go.dev/blog/get-familiar-with-workspaces.
Use `go work sync` to synchronize versions of shared dependencies across modules.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-02 22:28:39 +08:00
Yan Song ac55d7f932 smoke: add basic nydusify copy test
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 16:50:48 +08:00
Yan Song a478fb6e76 nydusify: fix copy race issue
1. Fix lost namespace on containerd image pull context:

```
pull source image: namespace is required: failed precondition
```

2. Fix possible semaphore Acquire race on the same one context:

```
panic: semaphore: released more than held
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 16:50:48 +08:00
Yan Song ace7c3633d smoke: fix stable version for compatibility test
And let's make the stable version name an env variable.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 10:35:00 +08:00
dependabot[bot] 75c87e9e42 build(deps): bump rustix in /contrib/nydus-backend-proxy
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.36.8 to 0.36.17.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.36.8...v0.36.17)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-02 08:47:06 +08:00
Peng Tao d638eb26e1 smoke: test v2.2.3 by default
Let's make stable v2.2.y an LTS version.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-11-01 11:44:25 +08:00
Yan Song 34a09d87ce api: fix unsupported dummy cache type
The dummy cache type is not handled in config validation:

```
ERROR [/src/fusedev.rs:595] service mount error: RAFS failed to handle request, Failed to load config: failed to parse configuration information`
ERROR [/src/error.rs:18] Stack:
   0: backtrace::backtrace::trace
   1: backtrace::capture::Backtrace::new

ERROR [/src/error.rs:19] Error:
        Rafs(LoadConfig(Custom { kind: InvalidInput, error: "failed to parse configuration information" }))
        at service/src/fusedev.rs:596
ERROR [src/bin/nydusd/main.rs:525] Failed in starting daemon:
Error: Custom { kind: Other, error: "" }
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 18:00:45 +08:00
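A hedged sketch of the kind of validation change involved: treat the dummy (no-op) cache type as valid instead of failing config parsing. Function and cache-type names here are illustrative, not the actual nydus-api code.

```rust
use std::io::{Error, ErrorKind, Result};

fn validate_cache_type(cache_type: &str) -> Result<()> {
    match cache_type {
        "blobcache" | "filecache" | "fscache" => Ok(()),
        // An empty or "dummycache" type means "no cache" and must be accepted too.
        "" | "dummycache" => Ok(()),
        other => Err(Error::new(
            ErrorKind::InvalidInput,
            format!("unsupported cache type {}", other),
        )),
    }
}
```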
Yadong Ding e64b912a10 action: rename images-service to nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-10-31 14:10:16 +08:00
Yadong Ding 44149519d1 docs: replace images-service to nydus in links
Since https://github.com/dragonflyoss/nydus/issues/1405, we had changed repo name to nydus.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-10-31 14:10:16 +08:00
Yan Song 55bba9d80b tests: remove useless rust smoke test
The rust integration test has been replaced with the go integration
test in smoke/tests, so let's remove it.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 12:14:56 +08:00
Yan Song 47b62d978c contrib: remove unmaintained python integration test
The python integration test has gone too long without maintenance; it should
be replaced with the go integration test in smoke/tests.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 12:14:56 +08:00
Qinqi Qu f55d2c948f deps: bump google.golang.org/grpc to 1.59.0
1. Fix gRPC-Go HTTP/2 Rapid Reset vulnerability

Please refer to:
https://github.com/advisories/GHSA-m425-mq94-257g

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 16:13:49 +08:00
Qinqi Qu 69ddef9f4c smoke: replaces the io/ioutil API which was deprecated in go 1.19
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 15:19:30 +08:00
Qinqi Qu cb458bdea4 contrib: upgrade to go 1.20
Keep consistent with other components in the container ecosystem;
for example, containerd uses go 1.20.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 15:19:30 +08:00
YuQiang 46fc7249b4 update: integrate-acceld-cache
Integrate acceld cache module
Signed-off-by: YuQiang <y_q_email@163.com>
2023-10-27 14:14:51 +08:00
linchuan 6dc9144193 enhance error handling with thiserror
Signed-off-by: linchuan <linchuan.jh@antgroup.com>
2023-10-27 10:27:24 +08:00
hijackthe2 3bb124ba77 tests: add unit test case for service/src/upgrade.rs
test type transformation between struct FailoverPolicy and String/&str
2023-10-24 18:48:51 +08:00
liyaojie acb689f19b CI: fix the failed fsck patch apply in CI
Signed-off-by: liyaojie <lyj199907@outlook.com>
2023-10-24 15:40:42 +08:00
Yan Song 9632d18e0b api: fix the log message print in macro
Regardless of whether debug compilation is enabled, we should
always print error messages. Otherwise, some error logs may be
lost, making it difficult to debug the code.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-20 10:46:42 +08:00
Yan Song 0cad49a6bd storage: fix compatibility on fetching token for registry backend
The registry backend received an unauthorized error from the Harbor registry
when fetching the registry token via the HTTP GET method; the bug was introduced
in https://github.com/dragonflyoss/image-service/pull/1425/files#diff-f7ce8f265a570c66eae48c85e0f5b6f29fdaec9cf2ee2eded95810fe320d80e1L263.

We should insert the basic auth header to ensure compatibility when
fetching the token via the HTTP GET method.

This refers to containerd implementation: dc7dba9c20/remotes/docker/auth/fetch.go (L187)

The change has been tested for Harbor v2.9.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-20 10:46:42 +08:00
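A minimal sketch of the compatibility fix, assuming the `reqwest` crate with its `blocking` feature; the parameter names and surrounding backend code are illustrative only, not the actual nydus storage code.

```rust
/// Fetch a registry token via HTTP GET, keeping the basic-auth header that
/// registries such as Harbor require even for GET token requests.
fn fetch_token(
    client: &reqwest::blocking::Client,
    realm: &str,
    service: &str,
    scope: &str,
    username: &str,
    password: &str,
) -> reqwest::Result<String> {
    client
        .get(realm)
        .query(&[("service", service), ("scope", scope)])
        .basic_auth(username, Some(password))
        .send()?
        .text()
}
```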
Qinqi Qu 5c63ba924e deps: bump golang.org/x/net to v0.17.0
Fix the following 2 issues:
1. HTTP/2 rapid reset can cause excessive work in net/http
2. Improper rendering of text nodes in golang.org/x/net/html

Please refer to:
https://github.com/dragonflyoss/image-service/security/dependabot/95
https://github.com/dragonflyoss/image-service/security/dependabot/96
https://github.com/dragonflyoss/image-service/security/dependabot/97
https://github.com/dragonflyoss/image-service/security/dependabot/98
https://github.com/dragonflyoss/image-service/security/dependabot/99
https://github.com/dragonflyoss/image-service/security/dependabot/100

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-13 03:59:27 -05:00
zyfjeff 9ab1ec1297 Add --blob-cache-dir arg use to generate raw blob cache and meta
Generate the blob cache and blob meta through the --blob-cache-dir parameter,
so that nydusd can be started directly from these two files without
going to the backend to download. This can improve the performance
of data loading in localfs mode.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-10-10 05:19:53 -05:00
Yan Song 6ea22ccd8a docs: update containerd integration tutorial
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-08 20:50:30 -05:00
Yan Song a9678d2c97 misc: remove outdated example doc
These docs and configs are poorly maintained, and they can be
replaced by the doc https://github.com/dragonflyoss/image-service/blob/master/docs/containerd-env-setup.md.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-08 20:50:30 -05:00
425 changed files with 32692 additions and 21661 deletions

8
.github/codecov.yml vendored

@ -5,11 +5,9 @@ coverage:
enabled: yes
target: auto # auto compares coverage to the previous base commit
# adjust accordingly based on how flaky your tests are
# this allows a 1% drop from the previous base commit coverage
threshold: 1%
patch:
default:
target: 25% # the required coverage value in each patch
# this allows a 0.2% drop from the previous base commit coverage
threshold: 0.2%
patch: false
comment:
layout: "reach, diff, flags, files"

250
.github/copilot-instructions.md vendored Normal file

@ -0,0 +1,250 @@
# GitHub Copilot Instructions for Nydus
## Project Overview
Nydus is a high-performance container image service that implements a content-addressable file system on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
### Key Components
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
- **nydusctl**: CLI client for managing and querying nydusd daemon
- **nydus-service**: Library crate for integrating Nydus services into other projects
## Architecture Guidelines
### Crate Structure
```
- api/ # Nydus Image Service APIs and data structures
- builder/ # Image building and conversion logic
- rafs/ # RAFS filesystem implementation
- service/ # Daemon and service management framework
- storage/ # Core storage subsystem with backends and caching
- utils/ # Common utilities and helper functions
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
```
### Key Technologies
- **Language**: Rust with memory safety focus
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
- **Compression**: LZ4, Gzip, Zstd
- **Async Runtime**: Tokio (current thread for io-uring compatibility)
## Code Style and Patterns
### Rust Conventions
- Use `#![deny(warnings)]` in all binary crates
- Follow standard Rust naming conventions (snake_case, PascalCase)
- Prefer `anyhow::Result` for error handling in applications
- Use custom error types with `thiserror` for libraries
- Apply `#[macro_use]` for frequently used external crates like `log`
- Always format the code with `cargo fmt`
- Use `clippy` for linting and follow its suggestions
### Error Handling
```rust
// Prefer anyhow for applications
use anyhow::{bail, Context, Result};

// Use custom error types for libraries
use thiserror::Error;

#[derive(Error, Debug)]
pub enum NydusError {
    #[error("Invalid arguments: {0}")]
    InvalidArguments(String),
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),
}
```
### Logging Patterns
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
- Include context in error messages: `.with_context(|| "description")`
- Use `info!`, `warn!`, `error!` macros consistently
### Configuration Management
- Use `serde` for JSON configuration serialization/deserialization
- Support both file-based and environment variable configuration
- Validate configurations at startup with clear error messages
- Follow the `ConfigV2` pattern for versioned configurations
## Development Guidelines
### Storage Backend Development
- When implementing new storage backends:
  - Implement the `BlobBackend` trait
  - Support timeout, retry, and connection management
  - Add configuration in the backend config structure
  - Consider proxy support for high availability
  - Implement proper error handling and logging
### Daemon Service Development
- Use the `NydusDaemon` trait for service implementations
- Support save/restore for hot upgrade functionality
- Implement proper state machine transitions
- Use `DaemonController` for lifecycle management
### RAFS Filesystem Features
- Support both RAFS v5 and v6 formats
- Implement chunk-level deduplication
- Handle prefetch optimization for container startup
- Support overlay filesystem operations
- Maintain POSIX compatibility
### API Development
- Use versioned APIs (v1, v2) with backward compatibility
- Implement HTTP endpoints with proper error handling
- Support both Unix socket and TCP communication
- Follow OpenAPI specification patterns
## Testing Patterns
### Unit Tests
- Test individual functions and modules in isolation
- Use `#[cfg(test)]` modules within source files
- Mock external dependencies when necessary
- Focus on error conditions and edge cases
### Integration Tests
- Place integration tests in `tests/` directory
- Test complete workflows and component interactions
- Use temporary directories for filesystem operations
- Clean up resources properly in test teardown
### Smoke Tests
- Located in `smoke/` directory using Go
- Test real-world scenarios with actual images
- Verify performance and functionality
- Use Bats framework for shell-based testing
## Performance Considerations
### I/O Optimization
- Use async I/O patterns with Tokio
- Implement prefetching for predictable access patterns
- Optimize chunk size (default 1MB) for workload characteristics
- Consider io-uring for high-performance scenarios
### Memory Management
- Use `Arc<T>` for shared ownership of large objects
- Implement lazy loading for metadata structures
- Consider memory mapping for large files
- Profile memory usage in performance-critical paths
### Caching Strategy
- Implement blob caching with configurable backends
- Support compression in cache to save space
- Use chunk-level caching with efficient eviction policies
- Consider cache warming strategies for frequently accessed data
## Security Guidelines
### Data Integrity
- Implement end-to-end digest validation
- Support multiple hash algorithms (SHA256, Blake3)
- Verify chunk integrity on read operations
- Detect and prevent supply chain attacks
### Authentication
- Support registry authentication (basic auth, bearer tokens)
- Handle credential rotation and refresh
- Implement secure credential storage
- Support mutual TLS for backend connections
## Specific Code Patterns
### Configuration Loading
```rust
// Standard pattern for configuration loading
let config = match config_path {
    Some(path) => ConfigV2::from_file(path)?,
    None => ConfigV2::default(),
};

// Environment variable override
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
    config.update_registry_auth_info(&auth);
}
```
### Daemon Lifecycle
```rust
// Standard daemon initialization pattern
let daemon = create_daemon(config, build_info)?;
DAEMON_CONTROLLER.set_daemon(daemon);
// Event loop management
if DAEMON_CONTROLLER.is_active() {
    DAEMON_CONTROLLER.run_loop();
}
// Graceful shutdown
DAEMON_CONTROLLER.shutdown();
```
### Blob Access Pattern
```rust
// Standard blob read pattern
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
let blob_device = factory.get_device(&blob_info)?;
blob_device.read(&mut bio)?;
```
## Documentation Standards
### Code Documentation
- Document all public APIs with `///` comments
- Include examples in documentation
- Document safety requirements for unsafe code
- Explain complex algorithms and data structures
### Architecture Documentation
- Maintain design documents in `docs/` directory
- Update documentation when adding new features
- Include diagrams for complex interactions
- Document configuration options comprehensively
### Release Notes
- Document breaking changes clearly
- Include migration guides for major versions
- Highlight performance improvements
- List new features and bug fixes
## Container and Cloud Native Patterns
### OCI Compatibility
- Maintain compatibility with OCI image spec
- Support standard container runtimes (runc, Kata)
- Implement proper layer handling and manifest generation
- Support multi-architecture images
### Kubernetes Integration
- Design for Kubernetes CRI integration
- Support containerd snapshotter pattern
- Handle pod lifecycle events appropriately
- Implement proper resource cleanup
### Cloud Storage Integration
- Support major cloud providers (AWS S3, Alibaba OSS)
- Implement proper credential management
- Handle network interruptions gracefully
- Support cross-region replication patterns
## Build and Release
### Build Configuration
- Use `Cargo.toml` workspace configuration
- Support cross-compilation for multiple architectures
- Implement proper feature flags for optional components
- Use consistent dependency versioning
### Release Process
- Tag releases with semantic versioning
- Generate release binaries for supported platforms
- Update documentation with release notes
- Validate release artifacts before publishing
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.

40
.github/workflows/Dockerfile.cross vendored Normal file

@ -0,0 +1,40 @@
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ARG RUST_VERSION=1.84.0
RUN apt-get update && apt-get install -y \
software-properties-common \
build-essential \
curl \
git \
libssl-dev \
pkg-config \
cmake \
gcc-riscv64-linux-gnu \
g++-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update && apt-get install -y \
gcc-14 \
g++-14 \
gcc-14-riscv64-linux-gnu \
g++-14-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /root
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
riscv64gc-unknown-linux-gnu
RUN mkdir -p ~/.cargo && echo '\
[target.riscv64gc-unknown-linux-gnu]\n\
linker = "riscv64-linux-gnu-gcc-14"' > ~/.cargo/config.toml
CMD ["/bin/bash"]

View File

@ -7,7 +7,6 @@ on:
pull_request:
paths:
- '.github/workflows/benchmark.yml'
- 'misc/benchmark/*'
workflow_dispatch:
env:
@ -18,26 +17,17 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ~1.18
- name: Golang Cache
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.51.2
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
@ -46,24 +36,34 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: nydus-build
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
rustup component add rustfmt clippy
make
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
benchmark-description:
runs-on: ubuntu-latest
steps:
- name: Description
run: |
echo "## Benchmark Environment" > $GITHUB_STEP_SUMMARY
echo "| operating system | cpu | memory " >> $GITHUB_STEP_SUMMARY
echo "|:----------------:|:---:|:------ " >> $GITHUB_STEP_SUMMARY
echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |" >> $GITHUB_STEP_SUMMARY
benchmark-oci:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
@ -84,33 +84,34 @@ jobs:
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare OCI Environment
- name: Prepare Environment
run: |
sudo bash misc/benchmark/prepare_env.sh oci
sudo docker pull ${{ matrix.image }}:${{ matrix.tag }} && docker tag ${{ matrix.image }}:${{ matrix.tag }} localhost:5000/${{ matrix.image }}:${{ matrix.tag }}
sudo docker push localhost:5000/${{ matrix.image }}:${{ matrix.tag }}
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode oci --image ${{ matrix.image }}:${{ matrix.tag }}
- name: Save Test Result
uses: actions/upload-artifact@v3
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=oci
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-oci.json
export SNAPSHOTTER=overlayfs
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: misc/benchmark/${{ matrix.image }}.csv
path: smoke/${{ matrix.image }}-oci.json
benchmark-nydus-no-prefetch:
benchmark-fsversion-v5:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
@ -130,35 +131,33 @@ jobs:
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
- name: Prepare Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{ matrix.image }}:${{ matrix.tag }} \
--target localhost:5000/${{ matrix.image }}:${{ matrix.tag }}_nydus \
--fs-version 6
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-no-prefetch --image ${{ matrix.image }}:${{ matrix.tag }}
- name: Save Test Result
uses: actions/upload-artifact@v3
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-5
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v5.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-nydus-no-prefetch-${{matrix.image}}
path: misc/benchmark/${{matrix.image}}.csv
name: benchmark-fsversion-v5-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v5.json
benchmark-zran-no-prefetch:
benchmark-fsversion-v6:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
@ -178,38 +177,33 @@ jobs:
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
- name: Prepare Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo docker pull ${{ matrix.image }}:${{ matrix.tag }} && docker tag ${{ matrix.image }}:${{ matrix.tag }} localhost:5000/${{ matrix.image }}:${{ matrix.tag }}
sudo docker push localhost:5000/${{ matrix.image }}:${{ matrix.tag }}
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source localhost:5000/${{ matrix.image }}:${{ matrix.tag }} \
--target localhost:5000/${{ matrix.image }}:${{ matrix.tag }}_nydus \
--fs-version 6 \
--oci-ref
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-no-prefetch --image ${{ matrix.image }}:${{ matrix.tag }}
- name: Save Test Result
uses: actions/upload-artifact@v3
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-6
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v6.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-zran-no-prefetch-${{matrix.image}}
path: misc/benchmark/${{matrix.image}}.csv
name: benchmark-fsversion-v6-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v6.json
benchmark-nydus-all-prefetch:
benchmark-zran:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
@ -229,146 +223,35 @@ jobs:
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
- name: Prepare Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{ matrix.image }}:${{ matrix.tag }} \
--target localhost:5000/${{ matrix.image }}:${{ matrix.tag }}_nydus \
--fs-version 6
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-all-prefetch --image ${{ matrix.image }}:${{ matrix.tag }}
- name: Save Test Result
uses: actions/upload-artifact@v3
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=zran
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-zran.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-nydus-all-prefetch-${{matrix.image}}
path: misc/benchmark/${{matrix.image}}.csv
benchmark-zran-all-prefetch:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus
uses: actions/download-artifact@master
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo docker pull ${{ matrix.image }}:${{ matrix.tag }} && docker tag ${{ matrix.image }}:${{ matrix.tag }} localhost:5000/${{ matrix.image }}:${{ matrix.tag }}
sudo docker push localhost:5000/${{ matrix.image }}:${{ matrix.tag }}
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source localhost:5000/${{ matrix.image }}:${{ matrix.tag }} \
--target localhost:5000/${{ matrix.image }}:${{ matrix.tag }}_nydus \
--fs-version 6 \
--oci-ref
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-all-prefetch --image ${{ matrix.image }}:${{ matrix.tag }}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-zran-all-prefetch-${{matrix.image}}
path: misc/benchmark/${{matrix.image}}.csv
benchmark-nydus-filelist-prefetch:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus
uses: actions/download-artifact@master
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{ matrix.image }}:${{ matrix.tag }} \
--target localhost:5000/${{ matrix.image }}:${{ matrix.tag }}_nydus \
--fs-version 6
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-filelist-prefetch --image ${{ matrix.image }}:${{ matrix.tag }}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-nydus-filelist-prefetch-${{matrix.image}}
path: misc/benchmark/${{matrix.image}}.csv
benchmark-description:
runs-on: ubuntu-latest
steps:
- name: Description
run: |
echo "## Benchmark Environment" > $GITHUB_STEP_SUMMARY
echo "| operating system | cpu | memory |bandwidth|" >> $GITHUB_STEP_SUMMARY
echo "|:----------------:|:---:|:------:|:--------:|" >> $GITHUB_STEP_SUMMARY
echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |10MB|" >> $GITHUB_STEP_SUMMARY
name: benchmark-zran-${{ matrix.image }}
path: smoke/${{ matrix.image }}-zran.json
benchmark-result:
runs-on: ubuntu-latest
needs: [benchmark-oci, benchmark-zran-all-prefetch, benchmark-zran-no-prefetch, benchmark-nydus-no-prefetch, benchmark-nydus-all-prefetch, benchmark-nydus-filelist-prefetch]
needs: [benchmark-oci, benchmark-fsversion-v5, benchmark-fsversion-v6, benchmark-zran]
strategy:
matrix:
include:
@ -386,53 +269,28 @@ jobs:
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Get Date
id: get-date
run: |
echo "date=$(date +%s)" >> $GITHUB_OUTPUT
shell: bash
- name: Restore benchmark result
uses: actions/cache/restore@v3
with:
path: benchmark-result
key: benchmark-${{matrix.image}}-${{ steps.get-date.outputs.date }}
restore-keys: |
benchmark-${{matrix.image}}
uses: actions/checkout@v4
- name: Download benchmark-oci
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: benchmark-oci-${{matrix.image}}
path: benchmark-oci
- name: Download benchmark-nydus-no-prefetch
uses: actions/download-artifact@v3
name: benchmark-oci-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v5
uses: actions/download-artifact@v4
with:
name: benchmark-nydus-no-prefetch-${{matrix.image}}
path: benchmark-nydus-no-prefetch
- name: Download benchmark-zran-no-prefetch
uses: actions/download-artifact@v3
name: benchmark-fsversion-v5-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v6
uses: actions/download-artifact@v4
with:
name: benchmark-zran-no-prefetch-${{matrix.image}}
path: benchmark-zran-no-prefetch
- name: Download benchmark-nydus-all-prefetch
uses: actions/download-artifact@v3
name: benchmark-fsversion-v6-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-zran
uses: actions/download-artifact@v4
with:
name: benchmark-nydus-all-prefetch-${{matrix.image}}
path: benchmark-nydus-all-prefetch
- name: Download benchmark-zran-all-prefetch
uses: actions/download-artifact@v3
with:
name: benchmark-zran-all-prefetch-${{matrix.image}}
path: benchmark-zran-all-prefetch
- name: Download benchmark-nydus-filelist-prefetch
uses: actions/download-artifact@v3
with:
name: benchmark-nydus-filelist-prefetch-${{matrix.image}}
path: benchmark-nydus-filelist-prefetch
- uses: geekyeggo/delete-artifact@v2
with:
name: "*-${{matrix.image}}"
- name: Benchmark Workload
name: benchmark-zran-${{ matrix.image }}
path: benchmark-result
- name: Benchmark Summary
run: |
case ${{matrix.image}} in
"wordpress")
@ -454,15 +312,18 @@ jobs:
echo "### workload: javac Main.java; java Main" > $GITHUB_STEP_SUMMARY
;;
esac
- name: Benchmark
run: |
if [ ! -d "benchmark-result" ]; then
mkdir benchmark-result
fi
sudo python3 misc/benchmark/benchmark_summary.py --mode benchmark-schedule >> $GITHUB_STEP_SUMMARY
- name: Save Benchmark Result
uses: actions/cache/save@v3
with:
path: benchmark-result
key: benchmark-${{matrix.image}}-${{ steps.get-date.outputs.date }}
cd benchmark-result
metric_files=(
"${{ matrix.image }}-oci.json"
"${{ matrix.image }}-fsversion-v5.json"
"${{ matrix.image }}-fsversion-v6.json"
"${{ matrix.image }}-zran.json"
)
echo "| bench-result | e2e-time(s) | read-count | read-amount(MB) | image-size(MB) |convert-time(s)|" >> $GITHUB_STEP_SUMMARY
echo "|:-------------|:-----------:|:----------:|:---------------:|:--------------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for file in "${metric_files[@]}"; do
name=$(basename "$file" .json | sed 's/^[^-]*-\(.*\)$/\1/')
data=$(jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024)) \(.conversion_elapsed / 1e9)"' "$file" | \
awk '{ printf "%.2f | %.0f | %.2f | %.2f | %.2f", $1, $2, $3, $4, $5 }')
echo "| $name | $data |" >> $GITHUB_STEP_SUMMARY
done

View File

@ -1,33 +0,0 @@
name: Cleanup caches by a branch
on:
pull_request:
types:
- closed
jobs:
cleanup:
runs-on: ubuntu-22.04
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
REPO=${{ github.repository }}
BRANCH="refs/pull/${{ github.event.pull_request.number }}/merge"
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@ -18,26 +18,18 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ~1.18
- name: Golang Cache
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.51.2
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.61.0
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
@ -46,18 +38,18 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: nydus-build
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
rustup component add rustfmt clippy
make
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
@ -68,15 +60,15 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Build fsck.erofs
run: |
sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev
git clone https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
cd erofs-utils && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
cd erofs-utils && git checkout v1.6 && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/
- name: Upload fsck.erofs
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: fsck-erofs-artifact
path: |
@ -87,25 +79,25 @@ jobs:
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
@ -147,11 +139,11 @@ jobs:
--source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref
sudo fsck.erofs -d1 output/nydus_bootstrap
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
@ -161,20 +153,20 @@ jobs:
needs: [nydusify-build, nydus-build]
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
@ -205,7 +197,7 @@ jobs:
--target localhost:5000/$I:nydus-nightly-v5
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
@ -215,25 +207,25 @@ jobs:
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
@ -264,42 +256,112 @@ jobs:
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6
sudo fsck.erofs -d1 output/nydus_bootstrap
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
convert-metric:
convert-native-v6-batch:
runs-on: ubuntu-latest
needs: [convert-zran, convert-native-v5, convert-native-v6]
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check RAFS v6 batch images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6-batch
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6-batch"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v6-batch/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
convert-metric:
runs-on: ubuntu-latest
needs: [convert-zran, convert-native-v5, convert-native-v6, convert-native-v6-batch]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download Zran Metric
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
- name: Download V5 Metric
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
- name: Download V6 Metric
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
- name: Download V6 Batch Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
- name: Summary
run: |
echo "## Image Size(MB)" > $GITHUB_STEP_SUMMARY
echo "> Compare the size of OCI image and Nydus image."
echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|" >> $GITHUB_STEP_SUMMARY
echo "|:--------:|:------------:|:----------:|:----------:|" >> $GITHUB_STEP_SUMMARY
echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|oci/nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:--------:|:------------:|:----------:|:----------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-zran/${I}.json) / 1048576")")
zranTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-zran/${I}.json) / 1048576")")
@ -307,17 +369,20 @@ jobs:
v5TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v5/${I}.json) / 1048576")")
v6SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6/${I}.json) / 1048576")")
v6TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6/${I}.json) / 1048576")")
echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|" >> $GITHUB_STEP_SUMMARY
batchSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
batchTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|${batchSourceImageSize}/${batchTargetImageSize}|" >> $GITHUB_STEP_SUMMARY
done
echo "## Conversion Time(ms)" >> $GITHUB_STEP_SUMMARY
echo "> Time elapsed to convert OCI image to Nydus image."
echo "|image name|nydus-zran|nydus-v5|nydus-v6|" >> $GITHUB_STEP_SUMMARY
echo "|:---:|:--:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY
echo "|image name|nydus-zran|nydus-v5|nydus-v6|nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:---:|:--:|:-------:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-zran/${I}.json) / 1000000")")
v5ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v5/${I}.json) / 1000000")")
v6ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6/${I}.json) / 1000000")")
echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|" >> $GITHUB_STEP_SUMMARY
batchConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6-batch/${I}.json) / 1000000")")
echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|${batchConversionElapsed}|" >> $GITHUB_STEP_SUMMARY
done
- uses: geekyeggo/delete-artifact@v2
with:


@ -1,111 +0,0 @@
name: Integration Test
on:
schedule:
# Do conversion every day at 00:03 clock UTC
- cron: "3 0 * * *"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch: [amd64]
fs_version: [5, 6]
branch: [master, stable/v2.2]
steps:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.18
- name: Setup pytest
run: |
sudo apt-get update
sudo apt-get install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3
sudo python3 -m pip install --upgrade pip
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
- name: containerd runc and crictl
run: |
sudo wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz
sudo tar zxvf ./crictl-v1.17.0-linux-amd64.tar.gz -C /usr/local/bin
sudo wget https://github.com/containerd/containerd/releases/download/v1.4.3/containerd-1.4.3-linux-amd64.tar.gz
mkdir containerd
sudo tar -zxf ./containerd-1.4.3-linux-amd64.tar.gz -C ./containerd
sudo mv ./containerd/bin/* /usr/bin/
sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64 -O /usr/bin/runc
sudo chmod +x /usr/bin/runc
- name: Set up ossutils
run: |
sudo wget https://gosspublic.alicdn.com/ossutil/1.7.13/ossutil64 -O /usr/bin/ossutil64
sudo chmod +x /usr/bin/ossutil64
- uses: actions/checkout@v3
with:
ref: ${{ matrix.branch }}
- name: Cache cargo
uses: Swatinem/rust-cache@v2.2.0
with:
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.1 cross
rustup component add rustfmt clippy
make -e RUST_TARGET=$RUST_TARGET -e CARGO=cross static-release
make release -C contrib/nydus-backend-proxy/
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
pwd
ls -lh target/$RUST_TARGET/release
- name: Set up anchor file
env:
OSS_AK_ID: ${{ secrets.OSS_TEST_AK_ID }}
OSS_AK_SEC: ${{ secrets.OSS_TEST_AK_SECRET }}
FS_VERSION: ${{ matrix.fs_version }}
run: |
sudo mkdir -p /home/runner/nydus-test-workspace
sudo mkdir -p /home/runner/nydus-test-workspace/proxy_blobs
sudo cat > /home/runner/work/image-service/image-service/contrib/nydus-test/anchor_conf.json << EOF
{
"workspace": "/home/runner/nydus-test-workspace",
"nydus_project": "/home/runner/work/image-service/image-service",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "localhost:5000",
"registry_namespace": "",
"registry_auth": "YOURAUTH==",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/home/runner/nydus-test-workspace/proxy_blobs"
},
"oss": {
"endpoint": "oss-cn-beijing.aliyuncs.com",
"ak_id": "$OSS_AK_ID",
"ak_secret": "$OSS_AK_SEC",
"bucket": "nydus-ci"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd",
"ossutil_bin": "/usr/bin/ossutil64"
},
"fs_version": "$FS_VERSION",
"logging_file": "stderr",
"target": "musl"
}
EOF
- name: run e2e tests
run: |
cd /home/runner/work/image-service/image-service/contrib/nydus-test
sudo mkdir -p /blobdir
sudo python3 nydus_test_config.py --dist fs_structure.yaml
sudo pytest -vs -x --durations=0 functional-test/test_api.py functional-test/test_nydus.py functional-test/test_layered_image.py

.github/workflows/miri.yml vendored Normal file (45 lines)

@ -0,0 +1,45 @@
name: Miri Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
nydus-unit-test-with-miri:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Install Miri
run: |
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
- name: Unit Test with Miri
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
grep -C 2 'Undefined Behavior' miri-ut.log
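For local reproduction outside CI, the same sequence works from a checkout; a minimal sketch assuming a nightly toolchain and the `miri-ut-nextest` Makefile target introduced later in this diff:

```sh
# local sketch mirroring the Miri job above (nightly pinning is illustrative)
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
# the Makefile target resolves the toolchain through the RUSTUP variable
sudo -E RUSTUP=$(command -v rustup) make miri-ut-nextest 2>&1 | tee miri-ut.log
grep -C 2 'Undefined Behavior' miri-ut.log || true
```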


@ -19,26 +19,60 @@ jobs:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build nydus-rs Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus-rs RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.4 cross
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo mv target/$RUST_TARGET/release/nydus-image .
sudo mv target/$RUST_TARGET/release/nydusctl .
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: nydus-artifacts-linux-${{ matrix.arch }}
path: |
@ -48,17 +82,18 @@ jobs:
configs
nydus-macos:
runs-on: macos-11
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then
@ -66,15 +101,14 @@ jobs:
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.4 cross
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: nydus-artifacts-darwin-${{ matrix.arch }}
path: |
@ -91,29 +125,22 @@ jobs:
env:
DOCKER: false
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
- uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version: '1.18'
- name: cache go mod
uses: actions/cache@v3
with:
path: /go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: build contrib go components
run: |
make -e GOARCH=${{ matrix.arch }} contrib-release
sudo mv contrib/ctr-remote/bin/ctr-remote .
sudo mv contrib/nydusify/cmd/nydusify .
sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs .
- name: store-artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: nydus-artifacts-linux-${{ matrix.arch }}
name: nydus-artifacts-linux-${{ matrix.arch }}-contrib
path: |
ctr-remote
nydusify
nydus-overlayfs
containerd-nydus-grpc
@ -127,9 +154,10 @@ jobs:
needs: [nydus-linux, contrib-linux]
steps:
- name: download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare release tarball
run: |
@ -143,9 +171,9 @@ jobs:
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: nydus-release-tarball
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
@ -160,7 +188,7 @@ jobs:
needs: [nydus-macos]
steps:
- name: download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static
@ -176,9 +204,9 @@ jobs:
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: nydus-release-tarball
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
@ -188,9 +216,10 @@ jobs:
needs: [prepare-tarball-linux, prepare-tarball-darwin]
steps:
- name: download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: nydus-release-tarball
pattern: nydus-release-tarball-*
merge-multiple: true
path: nydus-tarball
- name: prepare release env
run: |
@ -210,3 +239,87 @@ jobs:
generate_release_notes: true
files: |
${{ env.tarballs }}
goreleaser:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
arch: [amd64, arm64]
os: [linux]
needs: [nydus-linux, contrib-linux]
permissions:
contents: write
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
submodules: recursive
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare context
run: |
chmod +x nydus-static/*
export GOARCH=${{ matrix.arch }}
echo "GOARCH: $GOARCH"
sh ./goreleaser.sh
- name: Check GoReleaser config
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
with:
version: latest
args: check
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
id: run-goreleaser
with:
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate subject
id: hash
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
if test "$hashes" = ""; then # goreleaser < v1.13.0
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
hashes=$(cat $checksum_file | base64 -w0)
fi
echo "hashes=$hashes" >> $GITHUB_OUTPUT
- name: Set tag output
id: tag
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
provenance:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
needs: [goreleaser]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
draft-release: true
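The `hashes` value generated above is a base64-encoded list of `<sha256-digest> <artifact-name>` lines that the SLSA generator consumes as subjects. A quick way to sanity-check the format locally; the digest and file name below are made-up placeholders, not real release artifacts:

```sh
# build and decode a subjects blob of the same shape as the workflow output
hashes=$(printf '%s %s\n' \
  'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855' \
  'nydus-static-vX.Y.Z-linux-amd64.tgz' | base64 -w0)
echo "$hashes" | base64 -d   # prints one "<digest> <name>" pair per line
```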


@ -14,169 +14,212 @@ on:
env:
CARGO_TERM_COLOR: always
IMAGE: wordpress
TAG: 6.1.1
jobs:
contrib-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ~1.18
- name: Golang Cache
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2
make -e DOCKER=false nydusify-release
make -e DOCKER=false contrib-test
make -e DOCKER=false GOARCH=${{ matrix.arch }} contrib-release
- name: Upload Nydusify
uses: actions/upload-artifact@master
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
contrib-build-master:
contrib-lint:
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
strategy:
matrix:
include:
- path: contrib/nydusify
- path: contrib/nydus-overlayfs
steps:
- name: Checkout
uses: actions/checkout@v3
with:
ref: master
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ~1.18
- name: Golang Cache
uses: actions/cache@v3
go-version-file: 'go.work'
cache: false
- name: Lint
uses: golangci/golangci-lint-action@v6
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2
make -e DOCKER=false nydusify-release
make -e DOCKER=false contrib-test
- name: Upload Nydusify
uses: actions/upload-artifact@master
with:
name: nydusify-artifact-master
path: contrib/nydusify/cmd
version: v1.64
working-directory: ${{ matrix.path }}
args: --timeout=10m --verbose
nydus-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: nydus-build
- name: Build Nydus
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
rustup component add rustfmt clippy
make
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build Nydus Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
sudo mv target/$RUST_TARGET/release/nydusd .
sudo mv target/$RUST_TARGET/release/nydus-image .
- name: Upload Nydus Binaries
uses: actions/upload-artifact@master
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
nydus-image
nydusd
nydus-build-master:
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
nydusd-build-macos:
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- name: Checkout
uses: actions/checkout@v3
with:
ref: master
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: nydus-build
- name: Build Nydus
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
rustup component add rustfmt clippy
make
- name: Upload Nydus Binaries
uses: actions/upload-artifact@master
with:
name: nydus-artifact-master
path: |
target/release/nydus-image
target/release/nydusd
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
nydus-integration-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Docker Cache
uses: jpribyl/action-docker-layer-caching@v0.1.0
continue-on-error: true
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: |
target/release
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Older Binaries
id: prepare-binaries
run: |
versions=(v0.1.0 v2.1.6)
version_archs=(v0.1.0-x86_64 v2.1.6-linux-amd64)
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
versions=(v0.1.0 ${NYDUS_STABLE_VERSION})
version_archs=(v0.1.0-x86_64 ${NYDUS_STABLE_VERSION}-linux-amd64)
for i in ${!versions[@]}; do
version=${versions[$i]}
version_arch=${version_archs[$i]}
wget -q https://github.com/dragonflyoss/image-service/releases/download/$version/nydus-static-$version_arch.tgz
wget -q https://github.com/dragonflyoss/nydus/releases/download/$version/nydus-static-$version_arch.tgz
sudo mkdir nydus-$version /usr/bin/nydus-$version
sudo tar xzf nydus-static-$version_arch.tgz -C nydus-$version
sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/
done
- name: Golang Cache
uses: actions/cache@v3
- name: Setup Golang
uses: actions/setup-go@v5
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Free Disk Space
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true
- name: Integration Test
run: |
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest
sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest
sudo bash misc/prepare.sh
versions=(v0.1.0 v2.1.6 latest)
version_exports=(v0_1_0 v2_1_6 latest)
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
export NYDUS_STABLE_VERSION_EXPORT="${NYDUS_STABLE_VERSION//./_}"
versions=(v0.1.0 ${NYDUS_STABLE_VERSION} latest)
version_exports=(v0_1_0 ${NYDUS_STABLE_VERSION_EXPORT} latest)
for i in ${!version_exports[@]}; do
version=${versions[$i]}
version_export=${version_exports[$i]}
@ -185,457 +228,159 @@ jobs:
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.51.2
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8
sudo -E make smoke-only
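The `${NYDUS_STABLE_VERSION//./_}` expansion used above simply replaces every dot in the release tag with an underscore so the value can be embedded in environment-variable names such as `NYDUS_NYDUSIFY_v2_3_1`; for example, with a hypothetical tag:

```sh
NYDUS_STABLE_VERSION=v2.3.1            # hypothetical latest release tag
echo "${NYDUS_STABLE_VERSION//./_}"    # prints: v2_3_1
```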
benchmark-oci:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus
uses: actions/download-artifact@master
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare OCI Environment
run: |
sudo bash misc/benchmark/prepare_env.sh oci
sudo docker pull ${{env.IMAGE}}:${{env.TAG}} && docker tag ${{env.IMAGE}}:${{env.TAG}} localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo docker push localhost:5000/${{env.IMAGE}}:${{env.TAG}}
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode oci --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-oci
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-nydus-no-prefetch:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus
uses: actions/download-artifact@master
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-no-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-nydus-no-prefetch
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-nydus-no-prefetch-master:
runs-on: ubuntu-latest
needs: [contrib-build-master, nydus-build-master]
if: github.event_name == 'pull_request'
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus Master
uses: actions/download-artifact@master
with:
name: nydus-artifact-master
path: target/release
- name: Download Nydusify Master
uses: actions/download-artifact@master
with:
name: nydusify-artifact-master
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-no-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-nydus-no-prefetch-master
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-zran-no-prefetch:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus
uses: actions/download-artifact@master
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo docker pull ${{env.IMAGE}}:${{env.TAG}} && docker tag ${{env.IMAGE}}:${{env.TAG}} localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo docker push localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source localhost:5000/${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6 \
--oci-ref
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-no-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-zran-no-prefetch
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-zran-no-prefetch-master:
runs-on: ubuntu-latest
needs: [contrib-build-master, nydus-build-master]
if: github.event_name == 'pull_request'
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus Master
uses: actions/download-artifact@master
with:
name: nydus-artifact-master
path: target/release
- name: Download Nydusify Master
uses: actions/download-artifact@master
with:
name: nydusify-artifact-master
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo docker pull ${{env.IMAGE}}:${{env.TAG}} && docker tag ${{env.IMAGE}}:${{env.TAG}} localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo docker push localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source localhost:5000/${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6 \
--oci-ref
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-no-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-zran-no-prefetch-master
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-nydus-all-prefetch:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus
uses: actions/download-artifact@master
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-all-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-nydus-all-prefetch
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-nydus-all-prefetch-master:
runs-on: ubuntu-latest
needs: [contrib-build-master, nydus-build-master]
if: github.event_name == 'pull_request'
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus Master
uses: actions/download-artifact@master
with:
name: nydus-artifact-master
path: target/release
- name: Download Nydusify Master
uses: actions/download-artifact@master
with:
name: nydusify-artifact-master
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-all-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-nydus-all-prefetch-master
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-zran-all-prefetch:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus
uses: actions/download-artifact@master
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo docker pull ${{env.IMAGE}}:${{env.TAG}} && docker tag ${{env.IMAGE}}:${{env.TAG}} localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo docker push localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source localhost:5000/${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6 \
--oci-ref
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-all-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-zran-all-prefetch
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-zran-all-prefetch-master:
runs-on: ubuntu-latest
needs: [contrib-build-master, nydus-build-master]
if: github.event_name == 'pull_request'
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus Master
uses: actions/download-artifact@master
with:
name: nydus-artifact-master
path: target/release
- name: Download Nydusify Master
uses: actions/download-artifact@master
with:
name: nydusify-artifact-master
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo docker pull ${{env.IMAGE}}:${{env.TAG}} && docker tag ${{env.IMAGE}}:${{env.TAG}} localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo docker push localhost:5000/${{env.IMAGE}}:${{env.TAG}}
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source localhost:5000/${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6 \
--oci-ref
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-all-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-zran-all-prefetch-master
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-nydus-filelist-prefetch:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus
uses: actions/download-artifact@master
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@master
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-filelist-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-nydus-filelist-prefetch
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-nydus-filelist-prefetch-master:
runs-on: ubuntu-latest
needs: [contrib-build-master, nydus-build-master]
if: github.event_name == 'pull_request'
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Download Nydus Master
uses: actions/download-artifact@master
with:
name: nydus-artifact-master
path: target/release
- name: Download Nydusify Master
uses: actions/download-artifact@master
with:
name: nydusify-artifact-master
path: contrib/nydusify/cmd
- name: Prepare Nydus Environment
run: |
sudo bash misc/benchmark/prepare_env.sh nydus
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source ${{env.IMAGE}}:${{env.TAG}} \
--target localhost:5000/${{env.IMAGE}}:${{env.TAG}}_nydus \
--fs-version 6
- name: BenchMark Test
run: |
cd misc/benchmark
sudo python3 benchmark.py --mode nydus-filelist-prefetch --image ${{env.IMAGE}}:${{env.TAG}}
- name: Save Test Result
uses: actions/upload-artifact@v3
with:
name: benchmark-nydus-filelist-prefetch-master
path: misc/benchmark/${{env.IMAGE}}.csv
benchmark-result:
runs-on: ubuntu-latest
needs: [benchmark-oci, benchmark-zran-all-prefetch, benchmark-zran-no-prefetch, benchmark-nydus-no-prefetch, benchmark-nydus-all-prefetch, benchmark-nydus-filelist-prefetch]
if: github.event_name != 'pull_request'
steps:
- name: Checkout
uses: actions/checkout@v3
- uses: actions/download-artifact@v3
- uses: geekyeggo/delete-artifact@v2
with:
name: '*'
- name: Save Result
run: |
sudo python3 misc/benchmark/benchmark_summary.py --mode benchmark-result > $GITHUB_STEP_SUMMARY
benchmark-compare:
runs-on: ubuntu-latest
needs: [benchmark-oci, benchmark-zran-all-prefetch, benchmark-zran-no-prefetch, benchmark-nydus-no-prefetch, benchmark-nydus-all-prefetch, benchmark-nydus-filelist-prefetch, benchmark-zran-all-prefetch-master, benchmark-zran-no-prefetch-master, benchmark-nydus-no-prefetch-master, benchmark-nydus-all-prefetch-master, benchmark-nydus-filelist-prefetch-master]
if: github.event_name == 'pull_request'
steps:
- name: Checkout
uses: actions/checkout@v3
- uses: actions/download-artifact@v3
- uses: geekyeggo/delete-artifact@v2
with:
name: '*'
- name: Save Result
run: |
sudo python3 misc/benchmark/benchmark_summary.py --mode benchmark-compare > $GITHUB_STEP_SUMMARY
nydus-unit-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: nydus-build
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Unit Test
run: |
make ut-nextest
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
contrib-unit-test-coverage:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Unit Test
run: |
make -e DOCKER=false contrib-test
- name: Upload contrib coverage file
uses: actions/upload-artifact@v4
with:
name: contrib-test-coverage-artifact
path: |
contrib/nydusify/coverage.txt
nydus-unit-test-coverage:
runs-on: ubuntu-latest
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Generate code coverage
run: make coverage-codecov
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
- name: Upload nydus coverage file
uses: actions/upload-artifact@v4
with:
files: codecov.json
fail_ci_if_error: true
name: nydus-test-coverage-artifact
path: |
codecov.json
upload-coverage-to-codecov:
runs-on: ubuntu-latest
needs: [contrib-unit-test-coverage, nydus-unit-test-coverage]
steps:
- uses: actions/checkout@v4
- name: Download nydus coverage file
uses: actions/download-artifact@v4
with:
name: nydus-test-coverage-artifact
- name: Download contrib coverage file
uses: actions/download-artifact@v4
with:
name: contrib-test-coverage-artifact
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
files: ./codecov.json,./coverage.txt
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
fail_ci_if_error: true
nydus-cargo-deny:
name: cargo-deny
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- uses: EmbarkStudios/cargo-deny-action@v1
- uses: actions/checkout@v4
- uses: EmbarkStudios/cargo-deny-action@v2
performance-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- mode: fs-version-5
- mode: fs-version-6
- mode: zran
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh
- name: Performance Test
run: |
export PERFORMANCE_TEST_MODE=${{ matrix.mode }}
sudo -E make smoke-performance
takeover-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh takeover_test
- name: Takeover Test
run: |
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover
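Both jobs can be reproduced locally once the nydus binaries and nydusify are placed where the workflow expects them; a rough sketch following the steps above (paths and the chosen mode are assumptions):

```sh
# performance test, one mode per run
sudo bash misc/prepare.sh
export PERFORMANCE_TEST_MODE=fs-version-6
sudo -E make smoke-performance

# takeover test
sudo bash misc/prepare.sh takeover_test
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover
```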

.github/workflows/stale.yaml vendored Normal file (31 lines)

@ -0,0 +1,31 @@
name: Close stale issues and PRs
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
id: stale
with:
delete-branch: true
days-before-close: 7
days-before-stale: 60
days-before-pr-close: 7
days-before-pr-stale: 60
stale-issue-label: "stale"
exempt-issue-labels: bug,wip
exempt-pr-labels: bug,wip
exempt-all-milestones: true
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'

.gitignore vendored (6 lines changed)

@ -6,3 +6,9 @@
**/.pyc
__pycache__
.DS_Store
go.work.sum
dist/
nydus-static/
.goreleaser.yml
metadata.db
tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a

Cargo.lock generated (2153 lines changed)

File diff suppressed because it is too large


@ -6,9 +6,9 @@ description = "Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
repository = "https://github.com/dragonflyoss/nydus"
exclude = ["contrib/", "smoke/", "tests/"]
edition = "2018"
edition = "2021"
resolver = "2"
build = "build.rs"
@ -35,7 +35,7 @@ path = "src/lib.rs"
anyhow = "1"
clap = { version = "4.0.18", features = ["derive", "cargo"] }
flexi_logger = { version = "0.25", features = ["compress"] }
fuse-backend-rs = "^0.10.4"
fuse-backend-rs = "^0.12.0"
hex = "0.4.3"
hyper = "0.14.11"
hyperlocal = "0.8.0"
@ -46,37 +46,44 @@ log-panics = { version = "2.1.0", features = ["with-backtrace"] }
mio = { version = "0.8", features = ["os-poll", "os-ext"] }
nix = "0.24.0"
rlimit = "0.9.0"
rusqlite = { version = "0.29.0", features = ["bundled"] }
rusqlite = { version = "0.30.0", features = ["bundled"] }
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.51"
tar = "0.4.40"
tokio = { version = "1.24", features = ["macros"] }
tokio = { version = "1.35.1", features = ["macros"] }
# Build static linked openssl library
openssl = { version = "0.10.55", features = ["vendored"] }
# pin openssl-src to bring in fix for https://rustsec.org/advisories/RUSTSEC-2022-0032
#openssl-src = { version = "111.22" }
openssl = { version = '0.10.72', features = ["vendored"] }
nydus-api = { version = "0.3.0", path = "api", features = ["error-backtrace", "handler"] }
nydus-builder = { version = "0.1.0", path = "builder" }
nydus-rafs = { version = "0.3.1", path = "rafs" }
nydus-service = { version = "0.3.0", path = "service", features = ["block-device"] }
nydus-storage = { version = "0.6.3", path = "storage", features = ["prefetch-rate-limit"] }
nydus-utils = { version = "0.4.2", path = "utils" }
nydus-api = { version = "0.4.0", path = "api", features = [
"error-backtrace",
"handler",
] }
nydus-builder = { version = "0.2.0", path = "builder" }
nydus-rafs = { version = "0.4.0", path = "rafs" }
nydus-service = { version = "0.4.0", path = "service", features = [
"block-device",
] }
nydus-storage = { version = "0.7.0", path = "storage", features = [
"prefetch-rate-limit",
] }
nydus-utils = { version = "0.5.0", path = "utils" }
vhost = { version = "0.6.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { version = "0.8.0", optional = true }
virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
virtio-queue = { version = "0.7.0", optional = true }
vm-memory = { version = "0.10.0", features = ["backend-mmap"], optional = true }
vmm-sys-util = { version = "0.11.0", optional = true }
vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
vhost-user-backend = { version = "0.15.0", optional = true }
virtio-bindings = { version = "0.1", features = [
"virtio-v5_0_0",
], optional = true }
virtio-queue = { version = "0.12.0", optional = true }
vm-memory = { version = "0.14.1", features = ["backend-mmap","backend-atomic"], optional = true }
vmm-sys-util = { version = "0.12.1", optional = true }
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dev-dependencies]
xattr = "1.0.1"
vmm-sys-util = "0.11.0"
vmm-sys-util = "0.12.1"
[features]
default = [
@ -86,6 +93,7 @@ default = [
"backend-s3",
"backend-http-proxy",
"backend-localdisk",
"dedup",
]
virtiofs = [
"nydus-service/virtiofs",
@ -96,15 +104,27 @@ virtiofs = [
"vm-memory",
"vmm-sys-util",
]
block-nbd = [
"nydus-service/block-nbd"
]
block-nbd = ["nydus-service/block-nbd"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = ["nydus-storage/backend-localdisk", "nydus-storage/backend-localdisk-gpt"]
backend-localdisk = [
"nydus-storage/backend-localdisk",
"nydus-storage/backend-localdisk-gpt",
]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]
dedup = ["nydus-storage/dedup"]
[workspace]
members = ["api", "builder", "clib", "rafs", "storage", "service", "utils"]
members = [
"api",
"builder",
"clib",
"rafs",
"storage",
"service",
"upgrade",
"utils",
]
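As a rough illustration of how the optional features above compose, a build that keeps only the registry backend plus the new chunk-dedup support might look like the following; the exact combination is arbitrary and other backends can be swapped in:

```sh
# illustrative feature selection, not a recommended configuration
cargo build --release --no-default-features \
  --features "backend-registry,dedup"
```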

MAINTAINERS.md Normal file (15 lines)

@ -0,0 +1,15 @@
# Maintainers
<!-- markdownlint-disable -->
| GitHub ID | Name | Email | Company |
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
<!-- markdownlint-restore -->


@ -44,7 +44,6 @@ endif
endif
RUST_TARGET_STATIC ?= $(STATIC_TARGET)
CTR-REMOTE_PATH = contrib/ctr-remote
NYDUSIFY_PATH = contrib/nydusify
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
@ -52,12 +51,6 @@ current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
env_go_path := $(shell go env GOPATH 2> /dev/null)
go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
# Set the env DIND_CACHE_DIR to specify a cache directory for
# docker-in-docker container, used to cache data for docker pull,
# then mitigate the impact of docker hub rate limit, for example:
# env DIND_CACHE_DIR=/path/to/host/var-lib-docker make docker-nydusify-smoke
dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
# Functions
# Func: build golang target in docker
@ -67,7 +60,7 @@ dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,
define build_golang
echo "Building target $@ by invoking: $(2)"
if [ $(DOCKER) = "true" ]; then \
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.18 $(2) ;\
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.21 $(2) ;\
else \
$(2) -C $(1); \
fi
@ -115,7 +108,11 @@ ut: .release_version
# you need to install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
ut-nextest: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --test-threads 8
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# install miri first from https://github.com/rust-lang/miri/
miri-ut-nextest: .release_version
MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# install test dependencies
pre-coverage:
@ -128,59 +125,34 @@ coverage: pre-coverage
# write unit test coverage to codecov.json, used for GitHub CI
coverage-codecov:
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
smoke-only:
make -C smoke test
smoke-performance:
make -C smoke test-performance
smoke-benchmark:
make -C smoke test-benchmark
smoke-takeover:
make -C smoke test-takeover
smoke: release smoke-only
docker-nydus-smoke:
docker build -t nydus-smoke --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/nydus-smoke
docker run --rm --privileged ${CARGO_BUILD_GEARS} \
-e TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) \
-v ~/.cargo:/root/.cargo \
-v $(TEST_WORKDIR_PREFIX) \
-v ${current_dir}:/nydus-rs \
nydus-smoke
contrib-build: nydusify nydus-overlayfs
# TODO: Nydusify smoke has to be time consuming for a while since it relies on musl nydusd and nydus-image.
# So musl compilation must be involved.
# And docker-in-docker deployment involves image building?
docker-nydusify-smoke: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestSmoke
contrib-release: nydusify-release nydus-overlayfs-release
docker-nydusify-image-test: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestDockerHubImage
contrib-test: nydusify-test nydus-overlayfs-test
# Run integration smoke test in docker-in-docker container. It requires some special settings,
# refer to `misc/example/README.md` for details.
docker-smoke: docker-nydus-smoke docker-nydusify-smoke
contrib-lint: nydusify-lint nydus-overlayfs-lint
contrib-build: nydusify ctr-remote nydus-overlayfs
contrib-release: nydusify-release ctr-remote-release \
nydus-overlayfs-release
contrib-test: nydusify-test ctr-remote-test \
nydus-overlayfs-test
contrib-clean: nydusify-clean ctr-remote-clean \
nydus-overlayfs-clean
contrib-clean: nydusify-clean nydus-overlayfs-clean
contrib-install:
@sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX)
@sudo install -m 755 contrib/ctr-remote/bin/ctr-remote $(INSTALL_DIR_PREFIX)/ctr-remote
@sudo install -m 755 contrib/nydus-overlayfs/bin/nydus-overlayfs $(INSTALL_DIR_PREFIX)/nydus-overlayfs
@sudo install -m 755 contrib/nydusify/cmd/nydusify $(INSTALL_DIR_PREFIX)/nydusify
@ -196,17 +168,8 @@ nydusify-test:
nydusify-clean:
$(call build_golang,${NYDUSIFY_PATH},make clean)
ctr-remote:
$(call build_golang,${CTR-REMOTE_PATH},make)
ctr-remote-release:
$(call build_golang,${CTR-REMOTE_PATH},make release)
ctr-remote-test:
$(call build_golang,${CTR-REMOTE_PATH},make test)
ctr-remote-clean:
$(call build_golang,${CTR-REMOTE_PATH},make clean)
nydusify-lint:
$(call build_golang,${NYDUSIFY_PATH},make lint)
nydus-overlayfs:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make)
@ -220,17 +183,9 @@ nydus-overlayfs-test:
nydus-overlayfs-clean:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make clean)
nydus-overlayfs-lint:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make lint)
docker-static:
docker build -t nydus-rs-static --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/musl-static
docker run --rm ${CARGO_BUILD_GEARS} -e RUST_TARGET=${RUST_TARGET_STATIC} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static
docker-example: all-static-release
cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydusd misc/example
cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydus-image misc/example
cp contrib/nydusify/cmd/nydusify misc/example
docker build -t nydus-rs-example misc/example
@cid=$(shell docker run --rm -t -d --privileged $(dind_cache_mount) nydus-rs-example)
@docker exec $$cid /run.sh
@EXIT_CODE=$$?
@docker rm -f $$cid
@exit $$EXIT_CODE
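A sketch of driving the reworked targets locally, matching how the CI jobs above invoke them (cargo-nextest, cargo-llvm-cov and golangci-lint are assumed to be installed):

```sh
# Rust unit tests and coverage now run through `rustup run stable cargo ...`,
# so RUSTUP must point at the rustup binary when running under sudo
sudo -E RUSTUP=$(command -v rustup) make ut-nextest
sudo -E RUSTUP=$(command -v rustup) make coverage-codecov

# Go components: build, test and the new lint aggregate target without docker
make -e DOCKER=false contrib-release
make -e DOCKER=false contrib-test
make -e DOCKER=false contrib-lint
```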


@ -1,23 +1,24 @@
[**[⬇️ Download]**](https://github.com/dragonflyoss/image-service/releases)
[**[⬇️ Download]**](https://github.com/dragonflyoss/nydus/releases)
[**[📖 Website]**](https://nydus.dev/)
[**[☸ Quick Start (Kubernetes)**]](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md)
[**[🤓 Quick Start (nerdctl)**]](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md)
[**[❓ FAQs & Troubleshooting]**](https://github.com/dragonflyoss/image-service/wiki/FAQ)
[**[❓ FAQs & Troubleshooting]**](https://github.com/dragonflyoss/nydus/wiki/FAQ)
# Nydus: Dragonfly Container Image Service
<p><img src="misc/logo.svg" width="170"></p>
[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/image-service?style=flat)](https://github.com/dragonflyoss/image-service/releases)
[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/nydus?style=flat)](https://github.com/dragonflyoss/nydus/releases)
[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)
[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/image-service?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/image-service)
[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus)
[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
[![Smoke Test](https://github.com/dragonflyoss/image-service/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/smoke.yml)
[![Image Conversion](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml)
[![Integration Test](https://github.com/dragonflyoss/image-service/actions/workflows/integration.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/integration.yml)
[![Release Test Daily](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml)
[![Coverage](https://codecov.io/gh/dragonflyoss/image-service/branch/master/graph/badge.svg)](https://codecov.io/gh/dragonflyoss/image-service)
[![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule)
[![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule)
[![Release Test Daily](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml?query=event%3Aschedule)
[![Benchmark](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml?query=event%3Aschedule)
[![Coverage](https://codecov.io/gh/dragonflyoss/nydus/branch/master/graph/badge.svg)](https://codecov.io/gh/dragonflyoss/nydus)
## Introduction
Nydus implements a content-addressable file system on the RAFS format, which enhances the current OCI image specification by improving container launch speed, image space and network bandwidth efficiency, and data integrity.
@ -39,7 +40,7 @@ The following Benchmarking results demonstrate that Nydus images significantly o
- **On-demand Load**: Container images/packages are downloaded on-demand in chunk unit to boost startup.
- **Chunk Deduplication**: Chunk level data de-duplication cross-layer or cross-image to reduce storage, transport, and memory cost.
- **Compatible with Ecosystem**: Storage backend support with Registry, OSS, NAS, Shared Disk, and [P2P service](https://d7y.io/). Compatible with the [OCI images](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-zran.md), and provide native [eStargz images](https://github.com/containerd/stargz-snapshotter) support.
- **Compatible with Ecosystem**: Storage backend support with Registry, OSS, NAS, Shared Disk, and [P2P service](https://d7y.io/). Compatible with [OCI images](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-zran.md), and provides native [eStargz images](https://github.com/containerd/stargz-snapshotter) support.
- **Data Analyzability**: Record accesses, data layout optimization, prefetch, IO amplification, abnormal behavior detection.
- **POSIX Compatibility**: In-Kernel EROFS or FUSE filesystems together with overlayfs provide full POSIX compatibility
- **I/O optimization**: Use merged filesystem tree, data prefetching and User I/O amplification to reduce read latency and improve user I/O performance.
@ -49,13 +50,12 @@ The following Benchmarking results demonstrate that Nydus images significantly o
| Tool | Description |
| ---------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [nydusd](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusd.md) | Nydus user-space daemon; it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |
| [nydus-image](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Converts a single layer of an OCI-format container image into a nydus-format container image, generating the meta part file and data part file respectively |
| [nydusify](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusify.md) | Pulls an OCI image, unpacks it, invokes `nydus-image create` to convert the image, and then pushes the converted image back to the registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`) to query the daemon's working status/metrics and configure it |
| [ctr-remote](https://github.com/dragonflyoss/image-service/tree/master/contrib/ctr-remote) | An enhanced `containerd` CLI tool that enables nydus support with `containerd` ctr |
| [nydusd](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusd.md) | Nydus user-space daemon; it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |
| [nydus-image](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Converts a single layer of an OCI-format container image into a nydus-format container image, generating the meta part file and data part file respectively |
| [nydusify](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusify.md) | Pulls an OCI image, unpacks it, invokes `nydus-image create` to convert the image, and then pushes the converted image back to the registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`) to query the daemon's working status/metrics and configure it |
| [nydus-docker-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Works as a `docker` remote graph driver to control how images and containers are stored and managed |
| [nydus-overlayfs](https://github.com/dragonflyoss/image-service/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper that invokes the overlayfs mount with slightly tweaked mount options, so nydus prerequisites can be passed to VM-based runtimes |
| [nydus-overlayfs](https://github.com/dragonflyoss/nydus/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper that invokes the overlayfs mount with slightly tweaked mount options, so nydus prerequisites can be passed to VM-based runtimes |
| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve a local directory as a blob backend for nydusd |
### Supported platforms
@ -64,10 +64,10 @@ The following Benchmarking results demonstrate that Nydus images significantly o
| ------------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ |
| Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, GitHub GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage services | ✅ |
| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support accelerated image conversion based on accelerators such as Nydus and eStargz | ✅ |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) | Improves the runtime performance of Nydus images even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/moby/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from a Dockerfile | ✅ |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/dragonfly) | Improves the runtime performance of Nydus images even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from a Dockerfile | ✅ |
| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run Nydus images (requires nydus snapshotter) | ✅ |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/image-service/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
| Runtime | [Kubernetes](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md) | Run Nydus image using CRI interface | ✅ |
| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Nydus Snapshotter, a containerd remote plugin to run Nydus image | ✅ |
| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus image with CRI-O or Podman | 🚧 |
@ -90,7 +90,7 @@ make docker-static
Convert OCIv1 image to Nydus image: [Nydusify](./docs/nydusify.md), [Acceld](https://github.com/goharbor/acceleration-service) or [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md#build-nydus-image-using-nerdctl-image-convert).
Build Nydus image from Dockerfile directly: [Buildkit](https://github.com/moby/buildkit/blob/master/docs/nydus.md).
Build Nydus image from Dockerfile directly: [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md).
Build Nydus layer from various sources: [Nydus Image Builder](./docs/nydus-image.md).
@ -153,7 +153,9 @@ Using the key features of nydus as native in your project without preparing and
## Documentation
Please visit the [**Wiki**](https://github.com/dragonflyoss/image-service/wiki) or [**docs**](./docs).
Please visit the [**Wiki**](https://github.com/dragonflyoss/nydus/wiki) or [**docs**](./docs).
There is also a very nice [Devin](https://devin.ai/)-generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus).
## Community
@ -170,5 +172,3 @@ Feel free to reach us via Slack or Dingtalk.
- **Dingtalk:** [34971767](https://qr.dingtalk.com/action/joingroup?code=v1,k1,ioWGzuDZEIO10Bf+/ohz4RcQqAkW0MtOwoG1nbbMxQg=&_dt_no_comment=1&origin=11)
<img src="./misc/dingtalk.jpg" width="250" height="300"/>
- **Technical Meeting:** Every Wednesday at 06:00 UTC (Beijing, Shanghai 14:00), please see our [HackMD](https://hackmd.io/@Nydus/Bk8u2X0p9) page for more information.


@ -1,12 +1,12 @@
[package]
name = "nydus-api"
version = "0.3.1"
version = "0.4.0"
description = "APIs for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[dependencies]
libc = "0.2"
@ -14,6 +14,7 @@ log = "0.4.8"
serde_json = "1.0.53"
toml = "0.5"
thiserror = "1.0.30"
backtrace = { version = "0.3", optional = true }
dbs-uhttp = { version = "0.3.0", optional = true }
http = { version = "0.2.1", optional = true }
@ -23,7 +24,7 @@ serde = { version = "1.0.110", features = ["rc", "serde_derive"] }
url = { version = "2.1.1", optional = true }
[dev-dependencies]
vmm-sys-util = { version = "0.11" }
vmm-sys-util = { version = "0.12.1" }
[features]
error-backtrace = ["backtrace"]


@ -25,10 +25,15 @@ pub struct ConfigV2 {
pub id: String,
/// Configuration information for storage backend.
pub backend: Option<BackendConfigV2>,
/// Configuration for external storage backends; order is not significant.
#[serde(default)]
pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration information for local cache system.
pub cache: Option<CacheConfigV2>,
/// Configuration information for RAFS filesystem.
pub rafs: Option<RafsConfigV2>,
/// Overlay configuration information for the instance.
pub overlay: Option<OverlayConfig>,
/// Internal runtime configuration.
#[serde(skip)]
pub internal: ConfigV2Internal,
@ -40,8 +45,10 @@ impl Default for ConfigV2 {
version: 2,
id: String::new(),
backend: None,
external_backends: Vec::new(),
cache: None,
rafs: None,
overlay: None,
internal: ConfigV2Internal::default(),
}
}
@ -54,8 +61,10 @@ impl ConfigV2 {
version: 2,
id: id.to_string(),
backend: None,
external_backends: Vec::new(),
cache: None,
rafs: None,
overlay: None,
internal: ConfigV2Internal::default(),
}
}
@ -510,9 +519,6 @@ pub struct OssConfig {
/// Enable HTTP proxy for the read request.
#[serde(default)]
pub proxy: ProxyConfig,
/// Enable mirrors for the read request.
#[serde(default)]
pub mirrors: Vec<MirrorConfig>,
}
/// S3 configuration information to access blobs.
@ -554,9 +560,6 @@ pub struct S3Config {
/// Enable HTTP proxy for the read request.
#[serde(default)]
pub proxy: ProxyConfig,
/// Enable mirrors for the read request.
#[serde(default)]
pub mirrors: Vec<MirrorConfig>,
}
/// Http proxy configuration information to access blobs.
@ -583,9 +586,6 @@ pub struct HttpProxyConfig {
/// Enable HTTP proxy for the read request.
#[serde(default)]
pub proxy: ProxyConfig,
/// Enable mirrors for the read request.
#[serde(default)]
pub mirrors: Vec<MirrorConfig>,
}
/// Container registry configuration information to access blobs.
@ -626,9 +626,6 @@ pub struct RegistryConfig {
/// Enable HTTP proxy for the read request.
#[serde(default)]
pub proxy: ProxyConfig,
/// Enable mirrors for the read request.
#[serde(default)]
pub mirrors: Vec<MirrorConfig>,
}
/// Configuration information for blob cache manager.
@ -684,7 +681,7 @@ impl CacheConfigV2 {
if self.prefetch.batch_size > 0x10000000 {
return false;
}
if self.prefetch.threads == 0 || self.prefetch.threads > 1024 {
if self.prefetch.threads_count == 0 || self.prefetch.threads_count > 1024 {
return false;
}
}
@ -819,9 +816,9 @@ pub struct RafsConfigV2 {
/// Filesystem metadata cache mode.
#[serde(default = "default_rafs_mode")]
pub mode: String,
/// Batch size to read data from storage cache layer.
#[serde(default = "default_batch_size")]
pub batch_size: usize,
/// Amplified user IO request batch size to read data from remote storage backend / local cache.
#[serde(rename = "batch_size", default = "default_user_io_batch_size")]
pub user_io_batch_size: usize,
/// Whether to validate data digest.
#[serde(default)]
pub validate: bool,
@ -850,14 +847,14 @@ impl RafsConfigV2 {
if self.mode != "direct" && self.mode != "cached" {
return false;
}
if self.batch_size > 0x10000000 {
if self.user_io_batch_size > 0x10000000 {
return false;
}
if self.prefetch.enable {
if self.prefetch.batch_size > 0x10000000 {
return false;
}
if self.prefetch.threads == 0 || self.prefetch.threads > 1024 {
if self.prefetch.threads_count == 0 || self.prefetch.threads_count > 1024 {
return false;
}
}
@ -872,9 +869,9 @@ pub struct PrefetchConfigV2 {
/// Whether to enable blob data prefetching.
pub enable: bool,
/// Number of data prefetching working threads.
#[serde(default = "default_prefetch_threads")]
pub threads: usize,
/// The batch size to prefetch data from backend.
#[serde(rename = "threads", default = "default_prefetch_threads_count")]
pub threads_count: usize,
/// The amplify batch size to prefetch data from backend.
#[serde(default = "default_prefetch_batch_size")]
pub batch_size: usize,
/// Network bandwidth rate limit in unit of Bytes and Zero means no limit.
@ -903,6 +900,9 @@ pub struct ProxyConfig {
/// Replace URL to http to request source registry with proxy, and allow fallback to https if the proxy is unhealthy.
#[serde(default)]
pub use_http: bool,
/// Elapsed time, in seconds, to pause the proxy health check when requests are inactive.
#[serde(default = "default_check_pause_elapsed")]
pub check_pause_elapsed: u64,
}
impl Default for ProxyConfig {
@ -913,37 +913,7 @@ impl Default for ProxyConfig {
fallback: true,
check_interval: 5,
use_http: false,
}
}
}
/// Configuration for registry mirror.
#[derive(Clone, Debug, Deserialize, Eq, PartialEq, Serialize)]
pub struct MirrorConfig {
/// Mirror server URL, for example http://127.0.0.1:65001.
pub host: String,
/// Ping URL to check mirror server health.
#[serde(default)]
pub ping_url: String,
/// HTTP request headers to be passed to mirror server.
#[serde(default)]
pub headers: HashMap<String, String>,
/// Interval for mirror health checking, in seconds.
#[serde(default = "default_check_interval")]
pub health_check_interval: u64,
/// Maximum number of failures before marking a mirror as unusable.
#[serde(default = "default_failure_limit")]
pub failure_limit: u8,
}
impl Default for MirrorConfig {
fn default() -> Self {
Self {
host: String::new(),
headers: HashMap::new(),
health_check_interval: 5,
failure_limit: 5,
ping_url: String::new(),
check_pause_elapsed: 300,
}
}
}
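// Illustrative sketch (not part of this change): a ProxyConfig that overrides
// only the new `check_pause_elapsed` knob; the 60-second value is a placeholder
// and every other field keeps the defaults shown above.
fn sample_proxy_config() -> ProxyConfig {
    ProxyConfig {
        // Pause proxy health checking after 60 seconds without any request.
        check_pause_elapsed: 60,
        ..Default::default()
    }
}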
@ -959,6 +929,9 @@ pub struct BlobCacheEntryConfigV2 {
/// Configuration information for storage backend.
#[serde(default)]
pub backend: BackendConfigV2,
/// Configuration for external storage backends; order is not significant.
#[serde(default)]
pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration information for local cache system.
#[serde(default)]
pub cache: CacheConfigV2,
@ -1022,8 +995,10 @@ impl From<&BlobCacheEntryConfigV2> for ConfigV2 {
version: c.version,
id: c.id.clone(),
backend: Some(c.backend.clone()),
external_backends: c.external_backends.clone(),
cache: Some(c.cache.clone()),
rafs: None,
overlay: None,
internal: ConfigV2Internal::default(),
}
}
@ -1070,7 +1045,7 @@ pub const BLOB_CACHE_TYPE_META_BLOB: &str = "bootstrap";
pub const BLOB_CACHE_TYPE_DATA_BLOB: &str = "datablob";
/// Configuration information for a cached blob.
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct BlobCacheEntry {
/// Type of blob object, bootstrap or data blob.
#[serde(rename = "type")]
@ -1186,23 +1161,23 @@ fn default_check_interval() -> u64 {
5
}
fn default_failure_limit() -> u8 {
5
fn default_check_pause_elapsed() -> u64 {
300
}
fn default_work_dir() -> String {
".".to_string()
}
pub fn default_batch_size() -> usize {
128 * 1024
}
fn default_prefetch_batch_size() -> usize {
pub fn default_user_io_batch_size() -> usize {
1024 * 1024
}
fn default_prefetch_threads() -> usize {
pub fn default_prefetch_batch_size() -> usize {
1024 * 1024
}
fn default_prefetch_threads_count() -> usize {
8
}
@ -1285,13 +1260,26 @@ struct CacheConfig {
#[serde(default, rename = "config")]
pub cache_config: Value,
/// Whether to validate data read from the cache.
#[serde(skip_serializing, skip_deserializing)]
#[serde(default, rename = "validate")]
pub cache_validate: bool,
/// Configuration for blob data prefetching.
#[serde(skip_serializing, skip_deserializing)]
pub prefetch_config: BlobPrefetchConfig,
}
/// Additional configuration information for an external backend; its items
/// will be merged into the configuration from the image.
#[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)]
pub struct ExternalBackendConfig {
/// External backend identifier to merge.
pub patch: HashMap<String, String>,
/// External backend type.
#[serde(rename = "type")]
pub kind: String,
/// External backend config items to merge.
pub config: HashMap<String, String>,
}
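// Illustrative sketch (not part of this change): building one entry for
// `ConfigV2::external_backends`. The backend type and the key placed into
// `config` are placeholders; only the field names come from ExternalBackendConfig.
fn sample_external_backend() -> ExternalBackendConfig {
    let mut config = HashMap::new();
    config.insert("endpoint".to_string(), "http://127.0.0.1:8000".to_string());
    ExternalBackendConfig {
        kind: "some-backend".to_string(),
        patch: HashMap::new(),
        config,
    }
}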
impl TryFrom<&CacheConfig> for CacheConfigV2 {
type Error = std::io::Error;
@ -1312,7 +1300,7 @@ impl TryFrom<&CacheConfig> for CacheConfigV2 {
"fscache" => {
config.fs_cache = Some(serde_json::from_value(v.cache_config.clone())?);
}
"" => {}
"" | "dummycache" => {}
t => {
return Err(Error::new(
ErrorKind::InvalidInput,
@ -1333,6 +1321,9 @@ struct FactoryConfig {
pub id: String,
/// Configuration for storage backend.
pub backend: BackendConfig,
/// Configuration for external storage backends; order is not significant.
#[serde(default)]
pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration for blob cache manager.
#[serde(default)]
pub cache: CacheConfig,
@ -1363,9 +1354,10 @@ struct RafsConfig {
/// Record file name if file access trace log.
#[serde(default)]
pub latest_read_files: bool,
// Amplified user IO request batch size to read data from remote storage backend / local cache.
// A value of zero means user IO amplification is disabled.
#[serde(default = "default_batch_size")]
pub amplify_io: usize,
#[serde(rename = "amplify_io", default = "default_user_io_batch_size")]
pub user_io_batch_size: usize,
}
impl TryFrom<RafsConfig> for ConfigV2 {
@ -1376,7 +1368,7 @@ impl TryFrom<RafsConfig> for ConfigV2 {
let mut cache: CacheConfigV2 = (&v.device.cache).try_into()?;
let rafs = RafsConfigV2 {
mode: v.mode,
batch_size: v.amplify_io,
user_io_batch_size: v.user_io_batch_size,
validate: v.digest_validate,
enable_xattr: v.enable_xattr,
iostats_files: v.iostats_files,
@ -1392,8 +1384,10 @@ impl TryFrom<RafsConfig> for ConfigV2 {
version: 2,
id: v.device.id,
backend: Some(backend),
external_backends: v.device.external_backends,
cache: Some(cache),
rafs: Some(rafs),
overlay: None,
internal: ConfigV2Internal::default(),
})
}
@ -1407,23 +1401,23 @@ struct FsPrefetchControl {
pub enable: bool,
/// How many working threads to prefetch data.
#[serde(default = "default_prefetch_threads")]
#[serde(default = "default_prefetch_threads_count")]
pub threads_count: usize,
/// Window size in unit of bytes to merge request to backend.
#[serde(default = "default_batch_size")]
pub merging_size: usize,
/// The amplify batch size to prefetch data from backend.
#[serde(rename = "merging_size", default = "default_prefetch_batch_size")]
pub batch_size: usize,
/// Network bandwidth limitation for prefetching.
///
/// In unit of Bytes. It sets a limit to prefetch bandwidth usage in order to
/// reduce congestion with normal user IO.
/// bandwidth_rate == 0 -- prefetch bandwidth ratelimit disabled
/// bandwidth_rate > 0 -- prefetch bandwidth ratelimit enabled.
/// bandwidth_limit == 0 -- prefetch bandwidth ratelimit disabled
/// bandwidth_limit > 0 -- prefetch bandwidth ratelimit enabled.
/// Please note that if the value is less than Rafs chunk size,
/// it will be raised to the chunk size.
#[serde(default)]
pub bandwidth_rate: u32,
#[serde(default, rename = "bandwidth_rate")]
pub bandwidth_limit: u32,
/// Whether to prefetch all filesystem data.
#[serde(default = "default_prefetch_all")]
@ -1434,9 +1428,9 @@ impl From<FsPrefetchControl> for PrefetchConfigV2 {
fn from(v: FsPrefetchControl) -> Self {
PrefetchConfigV2 {
enable: v.enable,
threads: v.threads_count,
batch_size: v.merging_size,
bandwidth_limit: v.bandwidth_rate,
threads_count: v.threads_count,
batch_size: v.batch_size,
bandwidth_limit: v.bandwidth_limit,
prefetch_all: v.prefetch_all,
}
}
@ -1449,19 +1443,21 @@ struct BlobPrefetchConfig {
pub enable: bool,
/// Number of data prefetching working threads.
pub threads_count: usize,
/// The maximum size of a merged IO request.
pub merging_size: usize,
/// The amplify batch size to prefetch data from backend.
#[serde(rename = "merging_size")]
pub batch_size: usize,
/// Network bandwidth rate limit in unit of Bytes and Zero means no limit.
pub bandwidth_rate: u32,
#[serde(rename = "bandwidth_rate")]
pub bandwidth_limit: u32,
}
impl From<&BlobPrefetchConfig> for PrefetchConfigV2 {
fn from(v: &BlobPrefetchConfig) -> Self {
PrefetchConfigV2 {
enable: v.enable,
threads: v.threads_count,
batch_size: v.merging_size,
bandwidth_limit: v.bandwidth_rate,
threads_count: v.threads_count,
batch_size: v.batch_size,
bandwidth_limit: v.bandwidth_limit,
prefetch_all: true,
}
}
@ -1479,6 +1475,9 @@ pub(crate) struct BlobCacheEntryConfig {
///
/// Possible value: `LocalFsConfig`, `RegistryConfig`, `OssConfig`, `LocalDiskConfig`.
backend_config: Value,
/// Configuration for external storage backends; order is not significant.
#[serde(default)]
external_backends: Vec<ExternalBackendConfig>,
/// Type of blob cache, corresponding to `FactoryConfig::CacheConfig::cache_type`.
///
/// Possible value: "fscache", "filecache".
@ -1514,12 +1513,22 @@ impl TryFrom<&BlobCacheEntryConfig> for BlobCacheEntryConfigV2 {
version: 2,
id: v.id.clone(),
backend: (&backend_config).try_into()?,
external_backends: v.external_backends.clone(),
cache: (&cache_config).try_into()?,
metadata_path: v.metadata_path.clone(),
})
}
}
/// Configuration information for the overlay filesystem.
/// OverlayConfig is used to configure the writable (upper) layer.
/// The filesystem becomes writable when OverlayConfig is set.
#[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)]
pub struct OverlayConfig {
pub upper_dir: String,
pub work_dir: String,
}
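// Illustrative sketch (not part of this change): attaching the new overlay
// section to an existing ConfigV2; both directory paths are placeholders.
fn sample_overlay_config(mut cfg: ConfigV2) -> ConfigV2 {
    cfg.overlay = Some(OverlayConfig {
        upper_dir: "/var/lib/nydus/overlay/upper".to_string(),
        work_dir: "/var/lib/nydus/overlay/work".to_string(),
    });
    cfg
}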
#[cfg(test)]
mod tests {
use super::*;
@ -1530,8 +1539,8 @@ mod tests {
let config = BlobPrefetchConfig::default();
assert!(!config.enable);
assert_eq!(config.threads_count, 0);
assert_eq!(config.merging_size, 0);
assert_eq!(config.bandwidth_rate, 0);
assert_eq!(config.batch_size, 0);
assert_eq!(config.bandwidth_limit, 0);
let content = r#"{
"enable": true,
@ -1542,12 +1551,12 @@ mod tests {
let config: BlobPrefetchConfig = serde_json::from_str(content).unwrap();
assert!(config.enable);
assert_eq!(config.threads_count, 2);
assert_eq!(config.merging_size, 4);
assert_eq!(config.bandwidth_rate, 5);
assert_eq!(config.batch_size, 4);
assert_eq!(config.bandwidth_limit, 5);
let config: PrefetchConfigV2 = (&config).into();
assert!(config.enable);
assert_eq!(config.threads, 2);
assert_eq!(config.threads_count, 2);
assert_eq!(config.batch_size, 4);
assert_eq!(config.bandwidth_limit, 5);
assert!(config.prefetch_all);
@ -1618,7 +1627,7 @@ mod tests {
assert!(blob_config.cache_config.is_object());
assert!(blob_config.prefetch_config.enable);
assert_eq!(blob_config.prefetch_config.threads_count, 2);
assert_eq!(blob_config.prefetch_config.merging_size, 4);
assert_eq!(blob_config.prefetch_config.batch_size, 4);
assert_eq!(
blob_config.metadata_path.as_ref().unwrap().as_str(),
"/tmp/metadata1"
@ -1630,7 +1639,7 @@ mod tests {
assert_eq!(blob_config.cache.cache_type, "fscache");
assert!(blob_config.cache.fs_cache.is_some());
assert!(blob_config.cache.prefetch.enable);
assert_eq!(blob_config.cache.prefetch.threads, 2);
assert_eq!(blob_config.cache.prefetch.threads_count, 2);
assert_eq!(blob_config.cache.prefetch.batch_size, 4);
assert_eq!(
blob_config.metadata_path.as_ref().unwrap().as_str(),
@ -1654,7 +1663,7 @@ mod tests {
let blob_config = config.blob_config_legacy.as_ref().unwrap();
assert!(!blob_config.prefetch_config.enable);
assert_eq!(blob_config.prefetch_config.threads_count, 0);
assert_eq!(blob_config.prefetch_config.merging_size, 0);
assert_eq!(blob_config.prefetch_config.batch_size, 0);
}
#[test]
@ -1826,11 +1835,6 @@ mod tests {
fallback = true
check_interval = 10
use_http = true
[[backend.oss.mirrors]]
host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
health_check_interval = 10
failure_limit = 10
"#;
let config: ConfigV2 = toml::from_str(content).unwrap();
assert_eq!(config.version, 2);
@ -1857,14 +1861,6 @@ mod tests {
assert_eq!(oss.proxy.check_interval, 10);
assert!(oss.proxy.fallback);
assert!(oss.proxy.use_http);
assert_eq!(oss.mirrors.len(), 1);
let mirror = &oss.mirrors[0];
assert_eq!(mirror.host, "http://127.0.0.1:65001");
assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
assert!(mirror.headers.is_empty());
assert_eq!(mirror.health_check_interval, 10);
assert_eq!(mirror.failure_limit, 10);
}
#[test]
@ -1890,11 +1886,6 @@ mod tests {
fallback = true
check_interval = 10
use_http = true
[[backend.registry.mirrors]]
host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
health_check_interval = 10
failure_limit = 10
"#;
let config: ConfigV2 = toml::from_str(content).unwrap();
assert_eq!(config.version, 2);
@ -1923,14 +1914,6 @@ mod tests {
assert_eq!(registry.proxy.check_interval, 10);
assert!(registry.proxy.fallback);
assert!(registry.proxy.use_http);
assert_eq!(registry.mirrors.len(), 1);
let mirror = &registry.mirrors[0];
assert_eq!(mirror.host, "http://127.0.0.1:65001");
assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
assert!(mirror.headers.is_empty());
assert_eq!(mirror.health_check_interval, 10);
assert_eq!(mirror.failure_limit, 10);
}
#[test]
@ -1967,7 +1950,7 @@ mod tests {
let prefetch = &cache.prefetch;
assert!(prefetch.enable);
assert_eq!(prefetch.threads, 8);
assert_eq!(prefetch.threads_count, 8);
assert_eq!(prefetch.batch_size, 1000000);
assert_eq!(prefetch.bandwidth_limit, 10000000);
}
@ -1998,14 +1981,14 @@ mod tests {
let rafs = config.rafs.as_ref().unwrap();
assert_eq!(&rafs.mode, "direct");
assert_eq!(rafs.batch_size, 1000000);
assert_eq!(rafs.user_io_batch_size, 1000000);
assert!(rafs.validate);
assert!(rafs.enable_xattr);
assert!(rafs.iostats_files);
assert!(rafs.access_pattern);
assert!(rafs.latest_read_files);
assert!(rafs.prefetch.enable);
assert_eq!(rafs.prefetch.threads, 4);
assert_eq!(rafs.prefetch.threads_count, 4);
assert_eq!(rafs.prefetch.batch_size, 1000000);
assert_eq!(rafs.prefetch.bandwidth_limit, 10000000);
assert!(rafs.prefetch.prefetch_all)
@ -2097,7 +2080,7 @@ mod tests {
"type": "blobcache",
"compressed": true,
"config": {
"work_dir": "/var/lib/containerd-nydus/cache",
"work_dir": "/var/lib/containerd/io.containerd.snapshotter.v1.nydus/cache",
"disable_indexed_map": false
}
}
@ -2178,4 +2161,414 @@ mod tests {
let auth = registry.auth.unwrap();
assert_eq!(auth, test_auth);
}
#[test]
fn test_config2_error() {
let content_bad_version = r#"version=3
"#;
let cfg: ConfigV2 = toml::from_str(content_bad_version).unwrap();
assert!(!cfg.validate());
let cfg = ConfigV2::new("id");
assert!(cfg.get_backend_config().is_err());
assert!(cfg.get_cache_config().is_err());
assert!(cfg.get_rafs_config().is_err());
assert!(cfg.get_cache_working_directory().is_err());
let content = r#"version=2
[cache]
type = "filecache"
[cache.filecache]
work_dir = "/tmp"
"#;
let cfg: ConfigV2 = toml::from_str(content).unwrap();
assert_eq!(cfg.get_cache_working_directory().unwrap(), "/tmp");
let content = r#"version=2
[cache]
type = "fscache"
[cache.fscache]
work_dir = "./foo"
"#;
let cfg: ConfigV2 = toml::from_str(content).unwrap();
assert_eq!(cfg.get_cache_working_directory().unwrap(), "./foo");
let content = r#"version=2
[cache]
type = "bar"
"#;
let cfg: ConfigV2 = toml::from_str(content).unwrap();
assert!(cfg.get_cache_working_directory().is_err());
let content = r#"
foo-bar-xxxx
"#;
assert!(toml::from_str::<ConfigV2>(content).is_err());
}
#[test]
fn test_backend_config_valid() {
let mut cfg = BackendConfigV2 {
backend_type: "localdisk".to_string(),
..Default::default()
};
assert!(!cfg.validate());
cfg.localdisk = Some(LocalDiskConfig {
device_path: "".to_string(),
disable_gpt: true,
});
assert!(!cfg.validate());
let cfg = BackendConfigV2 {
backend_type: "localfs".to_string(),
..Default::default()
};
assert!(!cfg.validate());
let cfg = BackendConfigV2 {
backend_type: "oss".to_string(),
..Default::default()
};
assert!(!cfg.validate());
let cfg = BackendConfigV2 {
backend_type: "s3".to_string(),
..Default::default()
};
assert!(!cfg.validate());
let cfg = BackendConfigV2 {
backend_type: "register".to_string(),
..Default::default()
};
assert!(!cfg.validate());
let cfg = BackendConfigV2 {
backend_type: "http-proxy".to_string(),
..Default::default()
};
assert!(!cfg.validate());
let cfg = BackendConfigV2 {
backend_type: "foobar".to_string(),
..Default::default()
};
assert!(!cfg.validate());
}
fn get_config(backend_type: &str) {
let mut cfg: BackendConfigV2 = BackendConfigV2::default();
assert!(cfg.get_localdisk_config().is_err());
cfg.backend_type = backend_type.to_string();
assert!(cfg.get_localdisk_config().is_err());
}
#[test]
fn test_get_config() {
get_config("localdisk");
get_config("localfs");
get_config("oss");
get_config("s3");
get_config("register");
get_config("http-proxy");
}
#[test]
fn test_cache_config_valid() {
let cfg = CacheConfigV2 {
cache_type: "blobcache".to_string(),
..Default::default()
};
assert!(!cfg.validate());
let cfg = CacheConfigV2 {
cache_type: "fscache".to_string(),
..Default::default()
};
assert!(!cfg.validate());
let cfg = CacheConfigV2 {
cache_type: "dummycache".to_string(),
..Default::default()
};
assert!(cfg.validate());
let cfg = CacheConfigV2 {
cache_type: "foobar".to_string(),
..Default::default()
};
assert!(!cfg.validate());
}
#[test]
fn test_get_fscache_config() {
let mut cfg = CacheConfigV2::default();
assert!(cfg.get_fscache_config().is_err());
cfg.cache_type = "fscache".to_string();
assert!(cfg.get_fscache_config().is_err());
}
#[test]
fn test_fscache_get_work_dir() {
let mut cfg = FsCacheConfig::default();
assert!(cfg.get_work_dir().is_err());
cfg.work_dir = ".".to_string();
assert!(cfg.get_work_dir().is_ok());
cfg.work_dir = "foobar".to_string();
let res = cfg.get_work_dir().is_ok();
fs::remove_dir_all("foobar").unwrap();
assert!(res);
}
#[test]
fn test_config_v2_from_file() {
let content = r#"version=2
[cache]
type = "filecache"
[cache.filecache]
work_dir = "/tmp"
"#;
if fs::write("test_config_v2_from_file.cfg", content).is_ok() {
let res = ConfigV2::from_file("test_config_v2_from_file.cfg").is_ok();
fs::remove_file("test_config_v2_from_file.cfg").unwrap();
assert!(res);
} else {
assert!(ConfigV2::from_file("test_config_v2_from_file.cfg").is_err());
}
}
#[test]
fn test_blob_cache_entry_v2_from_file() {
let content = r#"version=2
id = "my_id"
metadata_path = "meta_path"
[backend]
type = "localfs"
[backend.localfs]
blob_file = "/tmp/nydus.blob.data"
dir = "/tmp"
alt_dirs = ["/var/nydus/cache"]
[cache]
type = "filecache"
compressed = true
validate = true
[cache.filecache]
work_dir = "/tmp"
"#;
if fs::write("test_blob_cache_entry_v2_from_file.cfg", content).is_ok() {
let res =
BlobCacheEntryConfigV2::from_file("test_blob_cache_entry_v2_from_file.cfg").is_ok();
fs::remove_file("test_blob_cache_entry_v2_from_file.cfg").unwrap();
assert!(res);
} else {
assert!(ConfigV2::from_file("test_blob_cache_entry_v2_from_file.cfg").is_err());
}
}
#[test]
fn test_blob_cache_valid() {
let err_version_content = r#"version=1"#;
let config: BlobCacheEntryConfigV2 = toml::from_str(err_version_content).unwrap();
assert!(!config.validate());
let content = r#"version=2
id = "my_id"
metadata_path = "meta_path"
[backend]
type = "localfs"
[backend.localfs]
blob_file = "/tmp/nydus.blob.data"
dir = "/tmp"
alt_dirs = ["/var/nydus/cache"]
[cache]
type = "filecache"
compressed = true
validate = true
[cache.filecache]
work_dir = "/tmp"
"#;
let config: BlobCacheEntryConfigV2 = toml::from_str(content).unwrap();
assert!(config.validate());
}
#[test]
fn test_blob_from_str() {
let content = r#"version=2
id = "my_id"
metadata_path = "meta_path"
[backend]
type = "localfs"
[backend.localfs]
blob_file = "/tmp/nydus.blob.data"
dir = "/tmp"
alt_dirs = ["/var/nydus/cache"]
[cache]
type = "filecache"
compressed = true
validate = true
[cache.filecache]
work_dir = "/tmp"
"#;
let config: BlobCacheEntryConfigV2 = BlobCacheEntryConfigV2::from_str(content).unwrap();
assert_eq!(config.version, 2);
assert_eq!(config.id, "my_id");
assert_eq!(config.backend.localfs.unwrap().dir, "/tmp");
assert_eq!(config.cache.file_cache.unwrap().work_dir, "/tmp");
let content = r#"
{
"version": 2,
"id": "my_id",
"backend": {
"type": "localfs",
"localfs": {
"dir": "/tmp"
}
}
}
"#;
let config: BlobCacheEntryConfigV2 = BlobCacheEntryConfigV2::from_str(content).unwrap();
assert_eq!(config.version, 2);
assert_eq!(config.id, "my_id");
assert_eq!(config.backend.localfs.unwrap().dir, "/tmp");
let content = r#"foobar"#;
assert!(BlobCacheEntryConfigV2::from_str(content).is_err());
}
#[test]
fn test_blob_cache_entry_from_file() {
let content = r#"{
"type": "bootstrap",
"id": "blob1",
"config": {
"id": "cache1",
"backend_type": "localfs",
"backend_config": {},
"cache_type": "fscache",
"cache_config": {},
"metadata_path": "/tmp/metadata1"
},
"domain_id": "domain1"
}"#;
if fs::write("test_blob_cache_entry_from_file.cfg", content).is_ok() {
let res = BlobCacheEntry::from_file("test_blob_cache_entry_from_file.cfg").is_ok();
fs::remove_file("test_blob_cache_entry_from_file.cfg").unwrap();
assert!(res);
} else {
assert!(ConfigV2::from_file("test_blob_cache_entry_from_file.cfg").is_err());
}
}
#[test]
fn test_blob_cache_entry_valid() {
let content = r#"{
"type": "bootstrap",
"id": "blob1",
"config": {
"id": "cache1",
"backend_type": "localfs",
"backend_config": {},
"cache_type": "fscache",
"cache_config": {},
"metadata_path": "/tmp/metadata1"
},
"domain_id": "domain1"
}"#;
let mut cfg = BlobCacheEntry::from_str(content).unwrap();
cfg.blob_type = "foobar".to_string();
assert!(!cfg.validate());
let content = r#"{
"type": "bootstrap",
"id": "blob1",
"domain_id": "domain1"
}"#;
let cfg = BlobCacheEntry::from_str(content).unwrap();
assert!(cfg.validate());
}
#[test]
fn test_blob_cache_entry_from_str() {
let content = r#"{
"type": "bootstrap",
"id": "blob1",
"config": {
"id": "cache1",
"backend_type": "localfs",
"backend_config": {},
"cache_type": "fscache",
"cache_config": {},
"metadata_path": "/tmp/metadata1"
},
"domain_id": "domain1"
}"#;
assert!(BlobCacheEntry::from_str(content).is_ok());
let content = r#"{
"type": "foobar",
"id": "blob1",
"config": {
"id": "cache1",
"backend_type": "foobar",
"backend_config": {},
"cache_type": "foobar",
"cache_config": {},
"metadata_path": "/tmp/metadata1"
},
"domain_id": "domain1"
}"#;
assert!(BlobCacheEntry::from_str(content).is_err());
let content = r#"foobar"#;
assert!(BlobCacheEntry::from_str(content).is_err());
}
#[test]
fn test_default_value() {
assert!(default_true());
assert_eq!(default_prefetch_batch_size(), 1024 * 1024);
assert_eq!(default_prefetch_threads_count(), 8);
}
#[test]
fn test_backend_config_try_from() {
let config = BackendConfig {
backend_type: "localdisk".to_string(),
backend_config: serde_json::to_value(LocalDiskConfig::default()).unwrap(),
};
assert!(BackendConfigV2::try_from(&config).is_ok());
let config = BackendConfig {
backend_type: "localfs".to_string(),
backend_config: serde_json::to_value(LocalFsConfig::default()).unwrap(),
};
assert!(BackendConfigV2::try_from(&config).is_ok());
let config = BackendConfig {
backend_type: "oss".to_string(),
backend_config: serde_json::to_value(OssConfig::default()).unwrap(),
};
assert!(BackendConfigV2::try_from(&config).is_ok());
let config = BackendConfig {
backend_type: "s3".to_string(),
backend_config: serde_json::to_value(S3Config::default()).unwrap(),
};
assert!(BackendConfigV2::try_from(&config).is_ok());
let config = BackendConfig {
backend_type: "registry".to_string(),
backend_config: serde_json::to_value(RegistryConfig::default()).unwrap(),
};
assert!(BackendConfigV2::try_from(&config).is_ok());
let config = BackendConfig {
backend_type: "foobar".to_string(),
backend_config: serde_json::to_value(LocalDiskConfig::default()).unwrap(),
};
assert!(BackendConfigV2::try_from(&config).is_err());
}
}


@ -11,16 +11,16 @@ pub fn make_error(
_file: &str,
_line: u32,
) -> std::io::Error {
#[cfg(all(debug_assertions, feature = "error-backtrace"))]
#[cfg(feature = "error-backtrace")]
{
if let Ok(val) = std::env::var("RUST_BACKTRACE") {
if val.trim() != "0" {
log::error!("Stack:\n{:?}", backtrace::Backtrace::new());
log::error!("Error:\n\t{:?}\n\tat {}:{}", _raw, _file, _line);
error!("Stack:\n{:?}", backtrace::Backtrace::new());
error!("Error:\n\t{:?}\n\tat {}:{}", _raw, _file, _line);
return err;
}
}
log::error!(
error!(
"Error:\n\t{:?}\n\tat {}:{}\n\tnote: enable `RUST_BACKTRACE=1` env to display a backtrace",
_raw, _file, _line
);
@ -86,6 +86,8 @@ define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
#[cfg(test)]
mod tests {
use std::io::{Error, ErrorKind};
fn check_size(size: usize) -> std::io::Result<()> {
if size > 0x1000 {
return Err(einval!());
@ -101,4 +103,150 @@ mod tests {
std::io::Error::from_raw_os_error(libc::EINVAL).kind()
);
}
#[test]
fn test_make_error() {
let original_error = Error::new(ErrorKind::Other, "test error");
let debug_info = "debug information";
let file = "test.rs";
let line = 42;
let result_error = super::make_error(original_error, debug_info, file, line);
assert_eq!(result_error.kind(), ErrorKind::Other);
}
#[test]
fn test_libc_error_macros() {
// Test einval macro
let err = einval!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro
let err = enoent!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test ebadf macro
let err = ebadf!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
// Test eacces macro
let err = eacces!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
// Test enotdir macro
let err = enotdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
// Test eisdir macro
let err = eisdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
// Test ealready macro
let err = ealready!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
// Test enosys macro
let err = enosys!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
// Test epipe macro
let err = epipe!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
// Test eio macro
let err = eio!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_libc_error_macros_with_context() {
let test_msg = "test context";
// Test einval macro with context
let err = einval!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro with context
let err = enoent!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test eio macro with context
let err = eio!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_custom_error_macros() {
// Test last_error macro
let err = last_error!();
// We can't predict the exact error, but we can check it's a valid error
assert!(!err.to_string().is_empty());
// Test eother macro
let err = eother!();
assert_eq!(err.kind(), ErrorKind::Other);
// Test eother macro with context
let err = eother!("custom context");
assert_eq!(err.kind(), ErrorKind::Other);
}
fn test_bail_einval_function() -> std::io::Result<()> {
bail_einval!("test error message");
}
fn test_bail_eio_function() -> std::io::Result<()> {
bail_eio!("test error message");
}
#[test]
fn test_bail_macros() {
// Test bail_einval macro
let result = test_bail_einval_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio macro
let result = test_bail_eio_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
}
#[test]
fn test_bail_macros_with_formatting() {
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
if code == 1 {
bail_einval!("error code: {}", code);
} else if code == 2 {
bail_eio!("I/O error with code: {}", code);
}
Ok(())
}
// Test bail_einval with formatting
let result = test_bail_with_format(1);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio with formatting
let result = test_bail_with_format(2);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test success case
let result = test_bail_with_format(3);
assert!(result.is_ok());
}
}


@ -9,16 +9,17 @@ use std::sync::mpsc::{RecvError, SendError};
use serde::Deserialize;
use serde_json::Error as SerdeError;
use thiserror::Error;
use crate::BlobCacheEntry;
/// Errors related to Metrics.
#[derive(Debug)]
#[derive(Error, Debug)]
pub enum MetricsError {
/// Non-exist counter.
#[error("no counter found for the metric")]
NoCounter,
/// Failed to serialize message.
Serialize(SerdeError),
#[error("failed to serialize metric: {0:?}")]
Serialize(#[source] SerdeError),
}
/// Mount a filesystem.
@ -131,7 +132,7 @@ pub enum DaemonErrorKind {
/// Unexpected event type.
UnexpectedEvent(String),
/// Can't upgrade the daemon.
UpgradeManager,
UpgradeManager(String),
/// Unsupported requests.
Unsupported,
}
@ -145,25 +146,25 @@ pub enum MetricsErrorKind {
Stats(MetricsError),
}
#[derive(Debug)]
#[derive(Error, Debug)]
#[allow(clippy::large_enum_variant)]
pub enum ApiError {
/// Daemon internal error
#[error("daemon internal error: {0:?}")]
DaemonAbnormal(DaemonErrorKind),
/// Failed to get events information
#[error("daemon events error: {0}")]
Events(String),
/// Failed to get metrics information
#[error("metrics error: {0:?}")]
Metrics(MetricsErrorKind),
/// Failed to mount filesystem
#[error("failed to mount filesystem: {0:?}")]
MountFilesystem(DaemonErrorKind),
/// Failed to send request to the API service
RequestSend(SendError<Option<ApiRequest>>),
/// Unrecognized payload content
#[error("failed to send request to the API service: {0:?}")]
RequestSend(#[from] SendError<Option<ApiRequest>>),
#[error("failed to parse response payload type")]
ResponsePayloadType,
/// Failed to receive response from the API service
ResponseRecv(RecvError),
/// Failed to send wakeup notification
Wakeup(io::Error),
#[error("failed to receive response from the API service: {0:?}")]
ResponseRecv(#[from] RecvError),
#[error("failed to wake up the daemon: {0:?}")]
Wakeup(#[source] io::Error),
}
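// Illustrative sketch (not part of this change): with `#[from]` on `RequestSend`
// and `ResponseRecv`, channel errors propagate through `?`. The generic `R`
// stands in for whatever payload the response channel carries.
fn round_trip<R>(
    to_daemon: &std::sync::mpsc::Sender<Option<ApiRequest>>,
    from_daemon: &std::sync::mpsc::Receiver<R>,
    request: ApiRequest,
) -> Result<R, ApiError> {
    to_daemon.send(Some(request))?; // SendError<Option<ApiRequest>> -> ApiError::RequestSend
    Ok(from_daemon.recv()?) // RecvError -> ApiError::ResponseRecv
}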
/// Specialized `std::result::Result` for API replies.


@ -140,7 +140,7 @@ impl EndpointHandler for MetricsFsFilesHandler {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let latest_read_files = extract_query_part(req, "latest")
.map_or(false, |b| b.parse::<bool>().unwrap_or(false));
.is_some_and(|b| b.parse::<bool>().unwrap_or(false));
let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files));
Ok(convert_to_response(r, HttpError::FsFilesMetrics))
}


@ -43,9 +43,8 @@ pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
// right now, below way makes it easy to obtain query parts from uri.
let http_prefix = format!("http:{}", req.uri().get_abs_path());
let url = Url::parse(&http_prefix)
.map_err(|e| {
.inspect_err(|e| {
error!("api: can't parse request {:?}", e);
e
})
.ok()?;
@ -326,35 +325,30 @@ mod tests {
#[test]
fn test_http_api_routes_v1() {
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/events").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/backend").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/start").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/exit").is_some());
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/events"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/start"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/exit"));
assert!(HTTP_ROUTES
.routes
.get("/api/v1/daemon/fuse/sendfd")
.is_some());
.contains_key("/api/v1/daemon/fuse/sendfd"));
assert!(HTTP_ROUTES
.routes
.get("/api/v1/daemon/fuse/takeover")
.is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/mount").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/files").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/pattern").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/backend").is_some());
assert!(HTTP_ROUTES
.routes
.get("/api/v1/metrics/blobcache")
.is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/inflight").is_some());
.contains_key("/api/v1/daemon/fuse/takeover"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/mount"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/files"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/pattern"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/blobcache"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/inflight"));
}
#[test]
fn test_http_api_routes_v2() {
assert!(HTTP_ROUTES.routes.get("/api/v2/daemon").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v2/blobs").is_some());
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/blobs"));
}
#[test]


@ -1,18 +1,18 @@
[package]
name = "nydus-builder"
version = "0.1.0"
version = "0.2.0"
description = "Nydus Image Builder"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[dependencies]
anyhow = "1.0.35"
base64 = "0.21"
hex = "0.4.3"
indexmap = "1"
indexmap = "2"
libc = "0.2"
log = "0.4"
nix = "0.24"
@ -20,13 +20,15 @@ serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = "0.10.2"
tar = "0.4.40"
vmm-sys-util = "0.11.0"
vmm-sys-util = "0.12.1"
xattr = "1.0.1"
parse-size = "1.1.0"
nydus-api = { version = "0.3", path = "../api" }
nydus-rafs = { version = "0.3", path = "../rafs" }
nydus-storage = { version = "0.6", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.4", path = "../utils" }
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.5.0", path = "../utils" }
gix-attributes = "0.25.0"
[package.metadata.docs.rs]
all-features = true

builder/src/attributes.rs Normal file

@ -0,0 +1,189 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::{fs, path};
use anyhow::Result;
use gix_attributes::parse;
use gix_attributes::parse::Kind;
const KEY_TYPE: &str = "type";
const KEY_CRCS: &str = "crcs";
const VAL_EXTERNAL: &str = "external";
pub struct Parser {}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Item {
pub pattern: PathBuf,
pub attributes: HashMap<String, String>,
}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Attributes {
pub items: HashMap<PathBuf, HashMap<String, String>>,
pub crcs: HashMap<PathBuf, Vec<u32>>,
}
impl Attributes {
/// Parse nydus attributes from a file.
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
let content = fs::read(path)?;
let _items = parse(&content);
let mut items = HashMap::new();
let mut crcs = HashMap::new();
for _item in _items {
let _item = _item?;
if let Kind::Pattern(pattern) = _item.0 {
let mut path = PathBuf::from(pattern.text.to_string());
if !path.is_absolute() {
path = path::Path::new("/").join(path);
}
let mut current_path = path.clone();
let mut attributes = HashMap::new();
let mut _type = String::new();
let mut _crcs = vec![];
for line in _item.1 {
let line = line?;
let name = line.name.as_str();
let state = line.state.as_bstr().unwrap_or_default();
if name == KEY_TYPE {
_type = state.to_string();
}
if name == KEY_CRCS {
_crcs = state
.to_string()
.split(',')
.map(|s| {
let trimmed = s.trim();
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
stripped
} else {
trimmed
};
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
})
.collect::<Result<Vec<u32>, _>>()?;
}
attributes.insert(name.to_string(), state.to_string());
}
crcs.insert(path.clone(), _crcs);
items.insert(path, attributes);
// process parent directory
while let Some(parent) = current_path.parent() {
if parent == Path::new("/") {
break;
}
let mut attributes = HashMap::new();
if !items.contains_key(parent) {
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
items.insert(parent.to_path_buf(), attributes);
}
current_path = parent.to_path_buf();
}
}
}
Ok(Attributes { items, crcs })
}
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
}
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
if let Some(attributes) = self.items.get(path.as_ref()) {
return self.check_external(attributes);
}
false
}
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
self.items
.iter()
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
}
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
if let Some(attributes) = self.items.get(path.as_ref()) {
return attributes.get(key.as_ref()).map(|s| s.to_string());
}
None
}
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
self.items.get(path.as_ref())
}
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
self.crcs.get(path.as_ref())
}
}
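// Illustrative usage sketch (not part of this change): the attributes file path
// and the queried entry are placeholders that mirror the test below.
fn sample_attributes_usage() -> Result<()> {
    let attrs = Attributes::from("/path/to/.nydusattributes")?;
    if attrs.is_external("/models/foo/bar") {
        // Any `crcs=` values parsed for the entry, e.g. [0x1234, 0x5678].
        println!("crcs: {:?}", attrs.get_crcs("/models/foo/bar"));
    }
    Ok(())
}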
#[cfg(test)]
mod tests {
use std::{collections::HashMap, fs, path::PathBuf};
use super::{Attributes, Item};
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_attribute_parse() {
let file = TempFile::new().unwrap();
fs::write(
file.as_path(),
"/foo type=external crcs=0x1234,0x5678
/bar type=external crcs=0x1234,0x5678
/models/foo/bar type=external",
)
.unwrap();
let attributes = Attributes::from(file.as_path()).unwrap();
let _attributes_base: HashMap<String, String> =
[("type".to_string(), "external".to_string())]
.iter()
.cloned()
.collect();
let _attributes: HashMap<String, String> = [
("type".to_string(), "external".to_string()),
("crcs".to_string(), "0x1234,0x5678".to_string()),
]
.iter()
.cloned()
.collect();
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
Item {
pattern: PathBuf::from("/foo"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/bar"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/models"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo/bar"),
attributes: _attributes_base.clone(),
},
]
.into_iter()
.map(|item| (item.pattern, item.attributes))
.collect();
assert_eq!(attributes.items, items_map);
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
}
}


@ -0,0 +1,283 @@
// Copyright (C) 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate Chunkdict RAFS bootstrap.
//! -------------------------------------------------------------------------------------------------
//! Bug 1: Inconsistent Chunk Size Leading to Blob Size Less Than 4K(v6_block_size)
//! Description: The size of chunks is not consistent, which results in the possibility that a blob,
//! composed of a group of these chunks, may be less than 4K(v6_block_size) in size.
//! This inconsistency leads to a failure in passing the size check.
//! -------------------------------------------------------------------------------------------------
//! Bug 2: Incorrect Chunk Number Calculation Due to Premature Check Logic
//! Description: The current logic for calculating the chunk number is based on the formula size/chunk size.
//! However, this approach is flawed as it precedes the actual check which accounts for chunk statistics.
//! Consequently, this leads to inaccurate counting of chunk numbers.
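//! -------------------------------------------------------------------------------------------------
//! Worked example (illustrative numbers only): with v6_block_size = 4096, a blob whose chunks
//! sum to 3000 uncompressed bytes is dropped by `validate_and_remove_chunks`, so every blob
//! that remains spans at least one full block and passes the size check.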
use super::core::node::{ChunkSource, NodeInfo};
use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree};
use crate::core::node::Node;
use crate::NodeChunk;
use crate::OsString;
use anyhow::{Ok, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress::Algorithm;
use nydus_utils::digest::RafsDigest;
use std::mem::size_of;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ChunkdictChunkInfo {
pub image_reference: String,
pub version: String,
pub chunk_blob_id: String,
pub chunk_digest: String,
pub chunk_crc32: u32,
pub chunk_compressed_size: u32,
pub chunk_uncompressed_size: u32,
pub chunk_compressed_offset: u64,
pub chunk_uncompressed_offset: u64,
}
pub struct ChunkdictBlobInfo {
pub blob_id: String,
pub blob_compressed_size: u64,
pub blob_uncompressed_size: u64,
pub blob_compressor: String,
pub blob_meta_ci_compressed_size: u64,
pub blob_meta_ci_uncompressed_size: u64,
pub blob_meta_ci_offset: u64,
}
/// Struct to generate chunkdict RAFS bootstrap.
pub struct Generator {}
impl Generator {
// Generate chunkdict RAFS bootstrap.
pub fn generate(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
chunkdict_chunks_origin: Vec<ChunkdictChunkInfo>,
chunkdict_blobs: Vec<ChunkdictBlobInfo>,
) -> Result<BuildOutput> {
// Validate the chunks and remove those whose owning blob is smaller than a block.
let mut chunkdict_chunks = chunkdict_chunks_origin.to_vec();
Self::validate_and_remove_chunks(ctx, &mut chunkdict_chunks);
// Build root tree.
let mut tree = Self::build_root_tree(ctx)?;
// Build child tree.
let child = Self::build_child_tree(ctx, blob_mgr, &chunkdict_chunks, &chunkdict_blobs)?;
let result = vec![child];
tree.children = result;
Self::validate_tree(&tree)?;
// Build bootstrap.
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
/// Validate tree.
fn validate_tree(tree: &Tree) -> Result<()> {
let pre = &mut |t: &Tree| -> Result<()> {
let node = t.borrow_mut_node();
debug!("chunkdict tree: ");
debug!("inode: {}", node);
for chunk in &node.chunks {
debug!("\t chunk: {}", chunk);
}
Ok(())
};
tree.walk_dfs_pre(pre)?;
debug!("chunkdict tree is valid.");
Ok(())
}
/// Validates and removes chunks with a total uncompressed size smaller than the block size limit.
fn validate_and_remove_chunks(ctx: &mut BuildContext, chunkdict: &mut Vec<ChunkdictChunkInfo>) {
let mut chunk_sizes = std::collections::HashMap::new();
// Accumulate the uncompressed size for each chunk_blob_id.
for chunk in chunkdict.iter() {
*chunk_sizes.entry(chunk.chunk_blob_id.clone()).or_insert(0) +=
chunk.chunk_uncompressed_size as u64;
}
// Find all chunk_blob_ids whose total uncompressed size is smaller than v6_block_size.
let small_chunks: Vec<String> = chunk_sizes
.into_iter()
.filter(|&(_, size)| size < ctx.v6_block_size())
.inspect(|(id, _)| {
eprintln!(
"Warning: Blob with id '{}' is smaller than {} bytes.",
id,
ctx.v6_block_size()
)
})
.map(|(id, _)| id)
.collect();
// Retain only chunks whose chunk_blob_id has a total uncompressed size >= v6_block_size.
chunkdict.retain(|chunk| !small_chunks.contains(&chunk.chunk_blob_id));
}
/// Build the root tree.
pub fn build_root_tree(ctx: &mut BuildContext) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(1);
inode.set_uid(1000);
inode.set_gid(1000);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFDIR as u32);
inode.set_nlink(3);
inode.set_name_size("/".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 0,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/"),
target: PathBuf::from("/"),
target_vec: vec![OsString::from("/")],
symlink: None,
xattrs: RafsXAttrs::default(),
v6_force_extended_inode: true,
};
let root_node = Node::new(inode, node_info, 0);
let tree = Tree::new(root_node);
Ok(tree)
}
/// Build the child tree.
fn build_child_tree(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(2);
inode.set_uid(0);
inode.set_gid(0);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFREG as u32);
inode.set_nlink(1);
inode.set_name_size("chunkdict".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 1,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/chunkdict"),
target: PathBuf::from("/chunkdict"),
target_vec: vec![OsString::from("/"), OsString::from("/chunkdict")],
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: true,
};
let mut node = Node::new(inode, node_info, 0);
// Insert chunks.
Self::insert_chunks(ctx, blob_mgr, &mut node, chunkdict_chunks, chunkdict_blobs)?;
let node_size: u64 = node
.chunks
.iter()
.map(|chunk| chunk.inner.uncompressed_size() as u64)
.sum();
node.inode.set_size(node_size);
// Update child count.
node.inode.set_child_count(node.chunks.len() as u32);
let child = Tree::new(node);
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
Ok(child)
}
/// Insert chunkdict chunks into the node and update the corresponding blob contexts.
fn insert_chunks(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
node: &mut Node,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<()> {
for (index, chunk_info) in chunkdict_chunks.iter().enumerate() {
let chunk_size: u32 = chunk_info.chunk_compressed_size;
let file_offset = index as u64 * chunk_size as u64;
let mut chunk = ChunkWrapper::new(ctx.fs_version);
// Update blob context.
let (blob_index, blob_ctx) =
blob_mgr.get_or_cerate_blob_for_chunkdict(ctx, &chunk_info.chunk_blob_id)?;
let chunk_uncompressed_size = chunk_info.chunk_uncompressed_size;
let pre_d_offset = blob_ctx.current_uncompressed_offset;
blob_ctx.uncompressed_blob_size = pre_d_offset + chunk_uncompressed_size as u64;
blob_ctx.current_uncompressed_offset += chunk_uncompressed_size as u64;
blob_ctx.blob_meta_header.set_ci_uncompressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
blob_ctx.blob_meta_header.set_ci_compressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
let chunkdict_blob_info = chunkdict_blobs
.iter()
.find(|blob| blob.blob_id == chunk_info.chunk_blob_id)
.unwrap();
blob_ctx.blob_compressor =
Algorithm::from_str(chunkdict_blob_info.blob_compressor.as_str())?;
blob_ctx
.blob_meta_header
.set_ci_uncompressed_size(chunkdict_blob_info.blob_meta_ci_uncompressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_size(chunkdict_blob_info.blob_meta_ci_compressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_offset(chunkdict_blob_info.blob_meta_ci_offset);
blob_ctx.blob_meta_header.set_ci_compressor(Algorithm::Zstd);
// Update chunk context.
let chunk_index = blob_ctx.alloc_chunk_index()?;
chunk.set_blob_index(blob_index);
chunk.set_index(chunk_index);
chunk.set_file_offset(file_offset);
chunk.set_compressed_size(chunk_info.chunk_compressed_size);
chunk.set_compressed_offset(chunk_info.chunk_compressed_offset);
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
chunk.set_crc32(chunk_info.chunk_crc32);
node.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk.clone()),
});
}
Ok(())
}
}
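// A minimal sketch (not part of the source) of driving the generator; the BuildContext,
// BootstrapManager and BlobManager are assumed to be prepared by the caller exactly as
// for a regular image build.
fn example_generate_chunkdict(
    ctx: &mut BuildContext,
    bootstrap_mgr: &mut BootstrapManager,
    blob_mgr: &mut BlobManager,
    chunks: Vec<ChunkdictChunkInfo>,
    blobs: Vec<ChunkdictBlobInfo>,
) -> Result<BuildOutput> {
    Generator::generate(ctx, bootstrap_mgr, blob_mgr, chunks, blobs)
}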

View File

@ -21,6 +21,9 @@ use nydus_utils::{digest, try_round_up_4k};
use serde::{Deserialize, Serialize};
use sha2::Digest;
use crate::attributes::Attributes;
use crate::core::context::Artifact;
use super::core::blob::Blob;
use super::core::bootstrap::Bootstrap;
use super::{
@ -46,22 +49,30 @@ pub struct Config {
/// available values: 0-99, 0 means disabled
/// hint: it's better to disable this option when there are some shared blobs
/// for example: build-cache
#[serde(default)]
min_used_ratio: u8,
pub min_used_ratio: u8,
/// we compact blobs whose size is less than compact_blob_size
#[serde(default = "default_compact_blob_size")]
compact_blob_size: usize,
/// size of compacted blobs should not be large than max_compact_size
#[serde(default = "default_max_compact_size")]
max_compact_size: usize,
pub compact_blob_size: usize,
/// size of compacted blobs should not be larger than max_compact_size
pub max_compact_size: usize,
/// if number of blobs >= layers_to_compact, do compact
/// 0 means always try compact
#[serde(default)]
layers_to_compact: usize,
pub layers_to_compact: usize,
/// local blobs dir; blobs may not have been uploaded to the backend yet
/// in addition, new blobs will be output to this dir
/// the name of a blob file should be equal to its blob_id
blobs_dir: String,
pub blobs_dir: String,
}
impl Default for Config {
fn default() -> Self {
Self {
min_used_ratio: 0,
compact_blob_size: default_compact_blob_size(),
max_compact_size: default_max_compact_size(),
layers_to_compact: 0,
blobs_dir: String::new(),
}
}
}
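// Illustrative only: with the serde attributes above, a compaction config could be
// expressed as JSON roughly like the following (assuming the struct derives
// Deserialize; the values are examples, not recommendations):
//
//   {
//       "min_used_ratio": 10,
//       "compact_blob_size": 10485760,
//       "max_compact_size": 104857600,
//       "layers_to_compact": 32,
//       "blobs_dir": "/var/lib/nydus/blobs"
//   }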
#[derive(Debug, Clone, Copy, Hash, PartialEq, Eq)]
@ -77,7 +88,7 @@ impl ChunkKey {
match c {
ChunkWrapper::V5(_) => Self::Digest(*c.id()),
ChunkWrapper::V6(_) => Self::Offset(c.blob_index(), c.compressed_offset()),
ChunkWrapper::Ref(_) => unimplemented!("unsupport ChunkWrapper::Ref(c)"),
ChunkWrapper::Ref(_) => Self::Digest(*c.id()),
}
}
}
@ -283,7 +294,7 @@ impl BlobCompactor {
version,
states: vec![Default::default(); ori_blobs_number],
ori_blob_mgr,
new_blob_mgr: BlobManager::new(digester),
new_blob_mgr: BlobManager::new(digester, false),
c2nodes: HashMap::new(),
b2nodes: HashMap::new(),
backend,
@ -302,7 +313,7 @@ impl BlobCompactor {
let chunk_dict = self.get_chunk_dict();
let cb = &mut |n: &Tree| -> Result<()> {
let mut node = n.lock_node();
let mut node = n.borrow_mut_node();
for chunk_idx in 0..node.chunks.len() {
let chunk = &mut node.chunks[chunk_idx];
let chunk_key = ChunkKey::from(&chunk.inner);
@ -365,7 +376,7 @@ impl BlobCompactor {
fn apply_blob_move(&mut self, from: u32, to: u32) -> Result<()> {
if let Some(idx_list) = self.b2nodes.get(&from) {
for (n, chunk_idx) in idx_list.iter() {
let mut node = n.lock().unwrap();
let mut node = n.borrow_mut();
ensure!(
node.chunks[*chunk_idx].inner.blob_index() == from,
"unexpected blob_index of chunk"
@ -379,7 +390,7 @@ impl BlobCompactor {
fn apply_chunk_change(&mut self, c: &(ChunkWrapper, ChunkWrapper)) -> Result<()> {
if let Some(chunks) = self.c2nodes.get(&ChunkKey::from(&c.0)) {
for (n, chunk_idx) in chunks.iter() {
let mut node = n.lock().unwrap();
let mut node = n.borrow_mut();
let chunk = &mut node.chunks[*chunk_idx];
let mut chunk_inner = chunk.inner.deref().clone();
apply_chunk_change(&c.1, &mut chunk_inner)?;
@ -545,7 +556,8 @@ impl BlobCompactor {
info!("compactor: delete compacted blob {}", ori_blob_ids[idx]);
}
State::Rebuild(cs) => {
let blob_storage = ArtifactStorage::FileDir(PathBuf::from(dir));
let blob_storage =
ArtifactStorage::FileDir((PathBuf::from(dir), String::new()));
let mut blob_ctx = BlobContext::new(
String::from(""),
0,
@ -555,6 +567,7 @@ impl BlobCompactor {
build_ctx.cipher,
Default::default(),
None,
false,
);
blob_ctx.set_meta_info_enabled(self.is_v6());
let blob_idx = self.new_blob_mgr.alloc_index()?;
@ -607,14 +620,16 @@ impl BlobCompactor {
PathBuf::from(""),
Default::default(),
None,
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut bootstrap_mgr =
BootstrapManager::new(Some(ArtifactStorage::SingleFile(d_bootstrap)), None);
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut ori_blob_mgr = BlobManager::new(rs.meta.get_digester());
let mut ori_blob_mgr = BlobManager::new(rs.meta.get_digester(), false);
ori_blob_mgr.extend_from_blob_table(&build_ctx, rs.superblock.get_blob_infos())?;
if let Some(dict) = chunk_dict {
ori_blob_mgr.set_chunk_dict(dict);
@ -640,7 +655,7 @@ impl BlobCompactor {
return Ok(None);
}
info!("compatctor: successfully compacted blob");
info!("compactor: successfully compacted blob");
// blobs have already been dumped, dump bootstrap only
let blob_table = compactor.new_blob_mgr.to_blob_table(&build_ctx)?;
bootstrap.build(&mut build_ctx, &mut bootstrap_ctx)?;
@ -653,7 +668,695 @@ impl BlobCompactor {
Ok(Some(BuildOutput::new(
&compactor.new_blob_mgr,
None,
&bootstrap_mgr.bootstrap_storage,
&None,
)?))
}
}
#[cfg(test)]
mod tests {
use crate::core::node::Node;
use crate::HashChunkDict;
use crate::{NodeChunk, Overlay};
use super::*;
use nydus_api::ConfigV2;
use nydus_rafs::metadata::RafsSuperConfig;
use nydus_storage::backend::{BackendResult, BlobReader};
use nydus_storage::device::v5::BlobV5ChunkInfo;
use nydus_storage::device::{BlobChunkFlags, BlobChunkInfo, BlobFeatures};
use nydus_storage::RAFS_DEFAULT_CHUNK_SIZE;
use nydus_utils::crypt::Algorithm;
use nydus_utils::metrics::BackendMetrics;
use nydus_utils::{compress, crypt};
use std::any::Any;
use vmm_sys_util::tempdir::TempDir;
use vmm_sys_util::tempfile::TempFile;
#[doc(hidden)]
#[macro_export]
macro_rules! impl_getter {
($G: ident, $F: ident, $U: ty) => {
fn $G(&self) -> $U {
self.$F
}
};
}
#[derive(Default, Clone)]
struct MockChunkInfo {
pub block_id: RafsDigest,
pub blob_index: u32,
pub flags: BlobChunkFlags,
pub compress_size: u32,
pub uncompress_size: u32,
pub compress_offset: u64,
pub uncompress_offset: u64,
pub file_offset: u64,
pub index: u32,
pub crc32: u32,
}
impl BlobChunkInfo for MockChunkInfo {
fn chunk_id(&self) -> &RafsDigest {
&self.block_id
}
fn id(&self) -> u32 {
self.index
}
fn is_compressed(&self) -> bool {
self.flags.contains(BlobChunkFlags::COMPRESSED)
}
fn is_batch(&self) -> bool {
self.flags.contains(BlobChunkFlags::BATCH)
}
fn is_encrypted(&self) -> bool {
false
}
fn has_crc32(&self) -> bool {
self.flags.contains(BlobChunkFlags::HAS_CRC32)
}
fn crc32(&self) -> u32 {
if self.has_crc32() {
self.crc32
} else {
0
}
}
fn as_any(&self) -> &dyn Any {
self
}
impl_getter!(blob_index, blob_index, u32);
impl_getter!(compressed_offset, compress_offset, u64);
impl_getter!(compressed_size, compress_size, u32);
impl_getter!(uncompressed_offset, uncompress_offset, u64);
impl_getter!(uncompressed_size, uncompress_size, u32);
}
impl BlobV5ChunkInfo for MockChunkInfo {
fn as_base(&self) -> &dyn BlobChunkInfo {
self
}
impl_getter!(index, index, u32);
impl_getter!(file_offset, file_offset, u64);
impl_getter!(flags, flags, BlobChunkFlags);
}
struct MockBackend {
pub metrics: Arc<BackendMetrics>,
}
impl BlobReader for MockBackend {
fn blob_size(&self) -> BackendResult<u64> {
Ok(1)
}
fn try_read(&self, buf: &mut [u8], _offset: u64) -> BackendResult<usize> {
let mut i = 0;
while i < buf.len() {
buf[i] = i as u8;
i += 1;
}
Ok(i)
}
fn metrics(&self) -> &BackendMetrics {
// Safe because nydusd must have a backend attached with an id; only the image builder
// may have no id but still uses a backend instance to upload blobs.
&self.metrics
}
}
unsafe impl Send for MockBackend {}
unsafe impl Sync for MockBackend {}
impl BlobBackend for MockBackend {
fn shutdown(&self) {}
fn metrics(&self) -> &BackendMetrics {
// Safe because nydusd must have a backend attached with an id; only the image builder
// may have no id but still uses a backend instance to upload blobs.
&self.metrics
}
fn get_reader(&self, _blob_id: &str) -> BackendResult<Arc<dyn BlobReader>> {
Ok(Arc::new(MockBackend {
metrics: self.metrics.clone(),
}))
}
}
#[test]
fn test_chunk_key_from() {
let cw = ChunkWrapper::new(RafsVersion::V5);
matches!(ChunkKey::from(&cw), ChunkKey::Digest(_));
let cw = ChunkWrapper::new(RafsVersion::V6);
matches!(ChunkKey::from(&cw), ChunkKey::Offset(_, _));
let chunk = Arc::new(MockChunkInfo {
block_id: Default::default(),
blob_index: 2,
flags: BlobChunkFlags::empty(),
compress_size: 0x800,
uncompress_size: 0x1000,
compress_offset: 0x800,
uncompress_offset: 0x1000,
file_offset: 0x1000,
index: 1,
crc32: 0,
}) as Arc<dyn BlobChunkInfo>;
let cw = ChunkWrapper::Ref(chunk);
ChunkKey::from(&cw);
}
#[test]
fn test_chunk_set() {
let mut chunk_set1 = ChunkSet::new();
let mut chunk_wrapper1 = ChunkWrapper::new(RafsVersion::V5);
chunk_wrapper1.set_id(RafsDigest { data: [1u8; 32] });
chunk_wrapper1.set_compressed_size(8);
let mut chunk_wrapper2 = ChunkWrapper::new(RafsVersion::V6);
chunk_wrapper2.set_compressed_size(16);
chunk_set1.add_chunk(&chunk_wrapper1);
chunk_set1.add_chunk(&chunk_wrapper2);
assert_eq!(chunk_set1.total_size, 24);
let chunk_key2 = ChunkKey::from(&chunk_wrapper2);
assert_eq!(
format!("{:?}", Some(chunk_wrapper2)),
format!("{:?}", chunk_set1.get_chunk(&chunk_key2))
);
let mut chunk_wrapper3 = ChunkWrapper::new(RafsVersion::V5);
chunk_wrapper3.set_id(RafsDigest { data: [3u8; 32] });
chunk_wrapper3.set_compressed_size(32);
let mut chunk_set2 = ChunkSet::new();
chunk_set2.add_chunk(&chunk_wrapper3);
chunk_set2.merge(chunk_set1);
assert_eq!(chunk_set2.total_size, 56);
assert_eq!(chunk_set2.chunks.len(), 3);
let build_ctx = BuildContext::default();
let tmp_file = TempFile::new().unwrap();
let blob_storage = ArtifactStorage::SingleFile(PathBuf::from(tmp_file.as_path()));
let cipher_object = Algorithm::Aes256Xts.new_cipher().unwrap();
let mut new_blob_ctx = BlobContext::new(
"blob_id".to_owned(),
0,
BlobFeatures::all(),
compress::Algorithm::Lz4Block,
digest::Algorithm::Sha256,
crypt::Algorithm::Aes256Xts,
Arc::new(cipher_object),
None,
false,
);
let ori_blob_ids = ["1".to_owned(), "2".to_owned()];
let backend = Arc::new(MockBackend {
metrics: BackendMetrics::new("id", "backend_type"),
}) as Arc<dyn BlobBackend + Send + Sync>;
let mut res = chunk_set2
.dump(
&build_ctx,
blob_storage,
&ori_blob_ids,
&mut new_blob_ctx,
0,
true,
&backend,
)
.unwrap();
res.sort_by(|a, b| a.0.id().data.cmp(&b.0.id().data));
assert_eq!(res.len(), 3);
assert_eq!(
format!("{:?}", res[0].1.id()),
format!("{:?}", RafsDigest { data: [0u8; 32] })
);
assert_eq!(
format!("{:?}", res[1].1.id()),
format!("{:?}", RafsDigest { data: [1u8; 32] })
);
assert_eq!(
format!("{:?}", res[2].1.id()),
format!("{:?}", RafsDigest { data: [3u8; 32] })
);
}
#[test]
fn test_state() {
let state = State::Rebuild(ChunkSet::new());
assert!(state.is_rebuild());
let state = State::ChunkDict;
assert!(state.is_from_dict());
let state = State::default();
assert!(state.is_invalid());
let mut chunk_set1 = ChunkSet::new();
let mut chunk_wrapper1 = ChunkWrapper::new(RafsVersion::V5);
chunk_wrapper1.set_id(RafsDigest { data: [1u8; 32] });
chunk_wrapper1.set_compressed_size(8);
chunk_set1.add_chunk(&chunk_wrapper1);
let mut state1 = State::Original(chunk_set1);
assert_eq!(state1.chunk_total_size().unwrap(), 8);
let mut chunk_wrapper2 = ChunkWrapper::new(RafsVersion::V6);
chunk_wrapper2.set_compressed_size(16);
let mut chunk_set2 = ChunkSet::new();
chunk_set2.add_chunk(&chunk_wrapper2);
let mut state2 = State::Rebuild(chunk_set2);
assert_eq!(state2.chunk_total_size().unwrap(), 16);
assert!(state1.merge_blob(state2.clone()).is_err());
assert!(state2.merge_blob(state1).is_ok());
assert!(state2.merge_blob(State::Invalid).is_err());
assert_eq!(state2.chunk_total_size().unwrap(), 24);
assert!(State::Delete.chunk_total_size().is_err());
}
#[test]
fn test_apply_chunk_change() {
let mut chunk_wrapper1 = ChunkWrapper::new(RafsVersion::V5);
chunk_wrapper1.set_id(RafsDigest { data: [1u8; 32] });
chunk_wrapper1.set_uncompressed_size(8);
chunk_wrapper1.set_compressed_size(8);
let mut chunk_wrapper2 = ChunkWrapper::new(RafsVersion::V6);
chunk_wrapper2.set_uncompressed_size(16);
chunk_wrapper2.set_compressed_size(16);
assert!(apply_chunk_change(&chunk_wrapper1, &mut chunk_wrapper2).is_err());
chunk_wrapper2.set_uncompressed_size(8);
assert!(apply_chunk_change(&chunk_wrapper1, &mut chunk_wrapper2).is_err());
chunk_wrapper2.set_compressed_size(8);
chunk_wrapper1.set_blob_index(0x10);
chunk_wrapper1.set_index(0x20);
chunk_wrapper1.set_uncompressed_offset(0x30);
chunk_wrapper1.set_compressed_offset(0x40);
assert!(apply_chunk_change(&chunk_wrapper1, &mut chunk_wrapper2).is_ok());
assert_eq!(chunk_wrapper2.blob_index(), 0x10);
assert_eq!(chunk_wrapper2.index(), 0x20);
assert_eq!(chunk_wrapper2.uncompressed_offset(), 0x30);
assert_eq!(chunk_wrapper2.compressed_offset(), 0x40);
}
fn create_blob_compactor() -> Result<BlobCompactor> {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path = PathBuf::from(root_dir);
source_path.push("../tests/texture/bootstrap/rafs-v5.boot");
let path = source_path.to_str().unwrap();
let rafs_config = RafsSuperConfig {
version: RafsVersion::V5,
compressor: compress::Algorithm::Lz4Block,
digester: digest::Algorithm::Blake3,
chunk_size: 0x100000,
batch_size: 0,
explicit_uidgid: true,
is_tarfs_mode: false,
};
let dict =
HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config)
.unwrap();
let mut ori_blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
ori_blob_mgr.set_chunk_dict(dict);
let backend = Arc::new(MockBackend {
metrics: BackendMetrics::new("id", "backend_type"),
});
let tmpdir = TempDir::new()?;
let tmpfile = TempFile::new_in(tmpdir.as_path())?;
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)?;
let tree = Tree::new(node);
let bootstrap = Bootstrap::new(tree)?;
BlobCompactor::new(
RafsVersion::V6,
ori_blob_mgr,
backend,
digest::Algorithm::Sha256,
&bootstrap,
)
}
#[test]
fn test_blob_compactor_new() {
let compactor = create_blob_compactor();
assert!(compactor.is_ok());
assert!(compactor.unwrap().is_v6());
}
#[test]
fn test_blob_compactor_load_chunk_dict_blobs() {
let mut compactor = create_blob_compactor().unwrap();
let chunk_dict = compactor.get_chunk_dict();
let n = chunk_dict.get_blobs().len();
for i in 0..n {
chunk_dict.set_real_blob_idx(i as u32, i as u32);
}
compactor.states = vec![State::default(); n + 1];
compactor.load_chunk_dict_blobs();
assert_eq!(compactor.states.len(), n + 1);
assert!(compactor.states[0].is_from_dict());
assert!(compactor.states[n >> 1].is_from_dict());
assert!(compactor.states[n - 1].is_from_dict());
assert!(!compactor.states[n].is_from_dict());
}
fn blob_compactor_load_and_dedup_chunks() -> Result<BlobCompactor> {
let mut compactor = create_blob_compactor()?;
let mut chunk1 = ChunkWrapper::new(RafsVersion::V5);
chunk1.set_id(RafsDigest { data: [1u8; 32] });
chunk1.set_uncompressed_size(0);
chunk1.set_compressed_offset(0x11);
chunk1.set_blob_index(1);
let node_chunk1 = NodeChunk {
source: crate::ChunkSource::Dict,
inner: Arc::new(chunk1.clone()),
};
let mut chunk2 = ChunkWrapper::new(RafsVersion::V6);
chunk2.set_id(RafsDigest { data: [2u8; 32] });
chunk2.set_uncompressed_size(0x20);
chunk2.set_compressed_offset(0x22);
chunk2.set_blob_index(2);
let node_chunk2 = NodeChunk {
source: crate::ChunkSource::Dict,
inner: Arc::new(chunk2.clone()),
};
let mut chunk3 = ChunkWrapper::new(RafsVersion::V6);
chunk3.set_id(RafsDigest { data: [3u8; 32] });
chunk3.set_uncompressed_size(0x20);
chunk3.set_compressed_offset(0x22);
chunk3.set_blob_index(2);
let node_chunk3 = NodeChunk {
source: crate::ChunkSource::Dict,
inner: Arc::new(chunk3.clone()),
};
let mut chunk_dict = HashChunkDict::new(digest::Algorithm::Sha256);
chunk_dict.add_chunk(
Arc::new(ChunkWrapper::new(RafsVersion::V5)),
digest::Algorithm::Sha256,
);
chunk_dict.add_chunk(Arc::new(chunk1.clone()), digest::Algorithm::Sha256);
compactor.ori_blob_mgr.set_chunk_dict(Arc::new(chunk_dict));
compactor.states = vec![State::ChunkDict; 5];
let tmpdir = TempDir::new()?;
let tmpfile = TempFile::new_in(tmpdir.as_path())?;
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)?;
let mut tree = Tree::new(node);
let tmpfile2 = TempFile::new_in(tmpdir.as_path())?;
let mut node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)?;
node.chunks.push(node_chunk1);
node.chunks.push(node_chunk2);
node.chunks.push(node_chunk3);
let tree2 = Tree::new(node);
tree.insert_child(tree2);
let bootstrap = Bootstrap::new(tree)?;
assert!(compactor.load_and_dedup_chunks(&bootstrap).is_ok());
assert_eq!(compactor.c2nodes.len(), 2);
assert_eq!(compactor.b2nodes.len(), 2);
let chunk_key1 = ChunkKey::from(&chunk1);
assert!(compactor.c2nodes.contains_key(&chunk_key1));
assert_eq!(compactor.c2nodes.get(&chunk_key1).unwrap().len(), 1);
assert!(compactor.b2nodes.contains_key(&chunk2.blob_index()));
assert_eq!(
compactor.b2nodes.get(&chunk2.blob_index()).unwrap().len(),
2
);
Ok(compactor)
}
#[test]
fn test_blob_compactor_load_and_dedup_chunks() {
assert!(blob_compactor_load_and_dedup_chunks().is_ok());
}
#[test]
fn test_blob_compactor_dump_new_blobs() {
let tmp_dir = TempDir::new().unwrap();
let build_ctx = BuildContext::new(
"build_ctx".to_string(),
false,
0,
compress::Algorithm::Lz4Block,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::None,
ConversionType::DirectoryToRafs,
PathBuf::from(tmp_dir.as_path()),
Default::default(),
None,
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut compactor = blob_compactor_load_and_dedup_chunks().unwrap();
let blob_ctx1 = BlobContext::new(
"blob_id1".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
let blob_ctx2 = BlobContext::new(
"blob_id2".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
let blob_ctx3 = BlobContext::new(
"blob_id3".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
let blob_ctx4 = BlobContext::new(
"blob_id4".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
let blob_ctx5 = BlobContext::new(
"blob_id5".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
compactor.ori_blob_mgr.add_blob(blob_ctx1);
compactor.ori_blob_mgr.add_blob(blob_ctx2);
compactor.ori_blob_mgr.add_blob(blob_ctx3);
compactor.ori_blob_mgr.add_blob(blob_ctx4);
compactor.ori_blob_mgr.add_blob(blob_ctx5);
compactor.states[0] = State::Invalid;
let tmp_dir = TempDir::new().unwrap();
let dir = tmp_dir.as_path().to_str().unwrap();
assert!(compactor.dump_new_blobs(&build_ctx, dir, true).is_err());
compactor.states = vec![
State::Delete,
State::ChunkDict,
State::Original(ChunkSet::new()),
State::Rebuild(ChunkSet::new()),
State::Delete,
];
assert!(compactor.dump_new_blobs(&build_ctx, dir, true).is_ok());
assert_eq!(compactor.ori_blob_mgr.len(), 3);
}
#[test]
fn test_blob_compactor_do_compact() {
let mut compactor = blob_compactor_load_and_dedup_chunks().unwrap();
let tmp_dir = TempDir::new().unwrap();
let build_ctx = BuildContext::new(
"build_ctx".to_string(),
false,
0,
compress::Algorithm::Lz4Block,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::None,
ConversionType::DirectoryToRafs,
PathBuf::from(tmp_dir.as_path()),
Default::default(),
None,
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut blob_ctx1 = BlobContext::new(
"blob_id1".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
blob_ctx1.compressed_blob_size = 2;
let mut blob_ctx2 = BlobContext::new(
"blob_id2".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
blob_ctx2.compressed_blob_size = 0;
let blob_ctx3 = BlobContext::new(
"blob_id3".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
let blob_ctx4 = BlobContext::new(
"blob_id4".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
let blob_ctx5 = BlobContext::new(
"blob_id5".to_owned(),
0,
build_ctx.blob_features,
build_ctx.compressor,
build_ctx.digester,
build_ctx.cipher,
Default::default(),
None,
false,
);
compactor.ori_blob_mgr.add_blob(blob_ctx1);
compactor.ori_blob_mgr.add_blob(blob_ctx2);
compactor.ori_blob_mgr.add_blob(blob_ctx3);
compactor.ori_blob_mgr.add_blob(blob_ctx4);
compactor.ori_blob_mgr.add_blob(blob_ctx5);
let mut chunk_set1 = ChunkSet::new();
chunk_set1.total_size = 4;
let mut chunk_set2 = ChunkSet::new();
chunk_set2.total_size = 6;
let mut chunk_set3 = ChunkSet::new();
chunk_set3.total_size = 5;
compactor.states = vec![
State::Original(chunk_set1),
State::Original(chunk_set2),
State::Rebuild(chunk_set3),
State::ChunkDict,
State::Invalid,
];
let cfg = Config {
min_used_ratio: 50,
compact_blob_size: 10,
max_compact_size: 8,
layers_to_compact: 0,
blobs_dir: "blobs_dir".to_string(),
};
assert!(compactor.do_compact(&cfg).is_ok());
assert!(!compactor.states.last().unwrap().is_invalid());
}
}

View File

@ -3,10 +3,9 @@
// SPDX-License-Identifier: Apache-2.0
use std::borrow::Cow;
use std::io::Write;
use std::slice;
use anyhow::{Context, Result};
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{toc, BlobMetaChunkArray};
@ -16,9 +15,10 @@ use sha2::digest::Digest;
use super::layout::BlobLayout;
use super::node::Node;
use crate::{
ArtifactWriter, BlobContext, BlobManager, BuildContext, ConversionType, Feature, Tree,
};
use crate::core::context::Artifact;
use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature};
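/// Expected length of a blob id: a hex-encoded SHA-256 digest is 64 characters.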
const VALID_BLOB_ID_LENGTH: usize = 64;
/// Generator for RAFS data blob.
pub(crate) struct Blob {}
@ -27,17 +27,15 @@ impl Blob {
/// Dump blob file and generate chunks
pub(crate) fn dump(
ctx: &BuildContext,
tree: &Tree,
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
match ctx.conversion_type {
ConversionType::DirectoryToRafs => {
let mut chunk_data_buf = vec![0u8; RAFS_MAX_CHUNK_SIZE as usize];
let (inodes, prefetch_entries) =
BlobLayout::layout_blob_simple(&ctx.prefetch, tree)?;
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&ctx.prefetch)?;
for (idx, node) in inodes.iter().enumerate() {
let mut node = node.lock().unwrap();
let mut node = node.borrow_mut();
let size = node
.dump_node_data(ctx, blob_mgr, blob_writer, &mut chunk_data_buf)
.context("failed to dump blob chunks")?;
@ -98,23 +96,23 @@ impl Blob {
Ok(())
}
fn finalize_blob_data(
pub fn finalize_blob_data(
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump buffered batch chunk data if exists.
if let Some(ref batch) = ctx.blob_batch_generator {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let mut batch = batch.lock().unwrap();
if !batch.chunk_data_buf_is_empty() {
let (pre_compressed_offset, compressed_size, _) = Node::write_chunk_data(
let (_, compressed_size, _) = Node::write_chunk_data(
&ctx,
blob_ctx,
blob_writer,
batch.chunk_data_buf(),
)?;
batch.add_context(pre_compressed_offset, compressed_size);
batch.add_context(compressed_size);
batch.clear_chunk_data_buf();
}
}
@ -124,6 +122,9 @@ impl Blob {
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
{
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.external {
return Ok(());
}
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BLOB_RAW,
@ -145,6 +146,20 @@ impl Blob {
}
}
// Check blobs to make sure all external blob ids are valid.
if blob_mgr.external {
for (index, blob_ctx) in blob_mgr.get_blobs().iter().enumerate() {
if blob_ctx.blob_id.len() != VALID_BLOB_ID_LENGTH {
bail!(
"invalid blob id:{}, length:{}, index:{}",
blob_ctx.blob_id,
blob_ctx.blob_id.len(),
index
);
}
}
}
Ok(())
}
@ -159,7 +174,7 @@ impl Blob {
pub(crate) fn dump_meta_data(
ctx: &BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump blob meta for v6 when it has chunks or bootstrap is to be inlined.
if !blob_ctx.blob_meta_info_enabled || blob_ctx.uncompressed_blob_size == 0 {
@ -194,7 +209,6 @@ impl Blob {
} else if ctx.blob_tar_reader.is_some() {
header.set_separate_blob(true);
};
let mut compressor = Self::get_compression_algorithm_for_meta(ctx);
let (compressed_data, compressed) = compress::compress(ci_data, compressor)
.with_context(|| "failed to compress blob chunk info array".to_string())?;
@ -223,6 +237,9 @@ impl Blob {
}
blob_ctx.blob_meta_header = header;
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.write_blob_meta(ci_data, &header)?;
}
let encrypted_header =
crypt::encrypt_with_context(header.as_bytes(), cipher_obj, cipher_ctx, encrypt)?;
let header_size = encrypted_header.len();

View File

@ -30,15 +30,14 @@ impl Bootstrap {
bootstrap_ctx: &mut BootstrapContext,
) -> Result<()> {
// Special handling of the root inode
let mut root_node = self.tree.lock_node();
let mut root_node = self.tree.borrow_mut_node();
assert!(root_node.is_dir());
let index = bootstrap_ctx.generate_next_ino();
// 0 is reserved and 1 also matches RAFS_V5_ROOT_INODE.
assert_eq!(index, RAFS_V5_ROOT_INODE);
root_node.index = index;
root_node.inode.set_ino(index);
ctx.prefetch
.insert_if_need(&self.tree.node, root_node.deref());
ctx.prefetch.insert(&self.tree.node, root_node.deref());
bootstrap_ctx.inode_map.insert(
(
root_node.layer_idx,
@ -51,7 +50,7 @@ impl Bootstrap {
Self::build_rafs(ctx, bootstrap_ctx, &mut self.tree)?;
if ctx.fs_version.is_v6() {
let root_offset = self.tree.node.lock().unwrap().v6_offset;
let root_offset = self.tree.node.borrow().v6_offset;
Self::v6_update_dirents(&self.tree, root_offset);
}
@ -76,7 +75,9 @@ impl Bootstrap {
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
let name = digest.to_string();
bootstrap_ctx.writer.finalize(Some(name.clone()))?;
*bootstrap_storage = Some(ArtifactStorage::SingleFile(p.join(name)));
let mut path = p.0.join(name);
path.set_extension(&p.1);
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
Ok(())
} else {
bootstrap_ctx.writer.finalize(Some(String::default()))
@ -91,7 +92,7 @@ impl Bootstrap {
tree: &mut Tree,
) -> Result<()> {
let parent_node = tree.node.clone();
let mut parent_node = parent_node.lock().unwrap();
let mut parent_node = parent_node.borrow_mut();
let parent_ino = parent_node.inode.ino();
let block_size = ctx.v6_block_size();
@ -114,7 +115,7 @@ impl Bootstrap {
let mut dirs: Vec<&mut Tree> = Vec::new();
for child in tree.children.iter_mut() {
let child_node = child.node.clone();
let mut child_node = child_node.lock().unwrap();
let mut child_node = child_node.borrow_mut();
let index = bootstrap_ctx.generate_next_ino();
child_node.index = index;
if ctx.fs_version.is_v5() {
@ -135,11 +136,11 @@ impl Bootstrap {
let nlink = indexes.len() as u32 + 1;
// Update nlink for previous hardlink inodes
for n in indexes.iter() {
n.lock().unwrap().inode.set_nlink(nlink);
n.borrow_mut().inode.set_nlink(nlink);
}
let (first_ino, first_offset) = {
let first_node = indexes[0].lock().unwrap();
let first_node = indexes[0].borrow_mut();
(first_node.inode.ino(), first_node.v6_offset)
};
// set offset for rafs v6 hardlinks
@ -160,7 +161,7 @@ impl Bootstrap {
if !child_node.is_dir() && ctx.fs_version.is_v6() {
child_node.v6_set_offset(bootstrap_ctx, v6_hardlink_offset, block_size)?;
}
ctx.prefetch.insert_if_need(&child.node, child_node.deref());
ctx.prefetch.insert(&child.node, child_node.deref());
if child_node.is_dir() {
dirs.push(child);
}

View File

@ -19,7 +19,7 @@ use nydus_utils::digest::{self, RafsDigest};
use crate::Tree;
#[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)]
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32);
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32, pub Option<u32>);
/// Trait to manage chunk cache for chunk deduplication.
pub trait ChunkDict: Sync + Send + 'static {

View File

@ -10,13 +10,16 @@ use std::collections::{HashMap, VecDeque};
use std::convert::TryFrom;
use std::fs::{remove_file, rename, File, OpenOptions};
use std::io::{BufWriter, Cursor, Read, Seek, Write};
use std::mem::size_of;
use std::os::unix::fs::FileTypeExt;
use std::path::{Display, Path, PathBuf};
use std::result::Result::Ok;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use std::{fmt, fs};
use anyhow::{anyhow, Context, Error, Result};
use nydus_utils::crc32;
use nydus_utils::crypt::{self, Cipher, CipherContext};
use sha2::{Digest, Sha256};
use tar::{EntryType, Header};
@ -40,9 +43,10 @@ use nydus_storage::meta::{
BlobMetaChunkArray, BlobMetaChunkInfo, ZranContextGenerator,
};
use nydus_utils::digest::DigestData;
use nydus_utils::{compress, digest, div_round_up, round_down, BufReaderInfo};
use nydus_utils::{compress, digest, div_round_up, round_down, try_round_up_4k, BufReaderInfo};
use super::node::ChunkSource;
use crate::attributes::Attributes;
use crate::core::tree::TreeNode;
use crate::{ChunkDict, Feature, Features, HashChunkDict, Prefetch, PrefetchPolicy, WhiteoutSpec};
@ -137,7 +141,7 @@ pub enum ArtifactStorage {
// Won't rename user's specification
SingleFile(PathBuf),
// Will rename it from tmp file as user didn't specify a name.
FileDir(PathBuf),
FileDir((PathBuf, String)),
}
impl ArtifactStorage {
@ -145,7 +149,16 @@ impl ArtifactStorage {
pub fn display(&self) -> Display {
match self {
ArtifactStorage::SingleFile(p) => p.display(),
ArtifactStorage::FileDir(p) => p.display(),
ArtifactStorage::FileDir(p) => p.0.display(),
}
}
pub fn add_suffix(&mut self, suffix: &str) {
match self {
ArtifactStorage::SingleFile(p) => {
p.set_extension(suffix);
}
ArtifactStorage::FileDir(p) => p.1 = String::from(suffix),
}
}
}
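// Illustrative only: the two storage flavors. `SingleFile` keeps the exact path the
// user specified, while `FileDir` holds a directory plus an optional extension that is
// appended to the generated file name when the writer is finalized.
fn example_artifact_storage() -> ArtifactStorage {
    let mut storage =
        ArtifactStorage::FileDir((PathBuf::from("/tmp/nydus-artifacts"), String::new()));
    storage.add_suffix("boot"); // finalized files get a `.boot` extension
    storage
}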
@ -193,7 +206,13 @@ impl Write for ArtifactMemoryWriter {
}
}
struct ArtifactFileWriter(ArtifactWriter);
struct ArtifactFileWriter(pub ArtifactWriter);
impl ArtifactFileWriter {
pub fn finalize(&mut self, name: Option<String>) -> Result<()> {
self.0.finalize(name)
}
}
impl RafsIoWrite for ArtifactFileWriter {
fn as_any(&self) -> &dyn Any {
@ -215,6 +234,12 @@ impl RafsIoWrite for ArtifactFileWriter {
}
}
impl ArtifactFileWriter {
pub fn set_len(&mut self, s: u64) -> std::io::Result<()> {
self.0.file.get_mut().set_len(s)
}
}
impl Seek for ArtifactFileWriter {
fn seek(&mut self, pos: std::io::SeekFrom) -> std::io::Result<u64> {
self.0.file.seek(pos)
@ -231,6 +256,37 @@ impl Write for ArtifactFileWriter {
}
}
pub trait Artifact: Write {
fn pos(&self) -> Result<u64>;
fn finalize(&mut self, name: Option<String>) -> Result<()>;
}
#[derive(Default)]
pub struct NoopArtifactWriter {
pos: usize,
}
impl Write for NoopArtifactWriter {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
self.pos += buf.len();
Ok(buf.len())
}
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
}
impl Artifact for NoopArtifactWriter {
fn pos(&self) -> Result<u64> {
Ok(self.pos as u64)
}
fn finalize(&mut self, _name: Option<String>) -> Result<()> {
Ok(())
}
}
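// Illustrative only: since blob dumping is now written against `&mut dyn Artifact`,
// the writer can be swapped; `NoopArtifactWriter` merely tracks the write position,
// which is useful when blob data itself does not need to be persisted.
fn example_noop_artifact() -> Result<u64> {
    let mut noop = NoopArtifactWriter::default();
    let writer: &mut dyn Artifact = &mut noop;
    writer.write_all(b"chunk data is counted but discarded")?;
    writer.finalize(None)?;
    writer.pos()
}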
/// ArtifactWriter provides a writer to allow writing bootstrap
/// or blob data to a single file or in a directory.
pub struct ArtifactWriter {
@ -291,8 +347,8 @@ impl ArtifactWriter {
ArtifactStorage::FileDir(ref p) => {
// It would be better to use open(2) with O_TMPFILE, but for compatibility's sake we delay this.
// TODO: Blob dir existence?
let tmp = TempFile::new_in(p)
.with_context(|| format!("failed to create temp file in {}", p.display()))?;
let tmp = TempFile::new_in(&p.0)
.with_context(|| format!("failed to create temp file in {}", p.0.display()))?;
let tmp2 = tmp.as_file().try_clone()?;
let reader = OpenOptions::new()
.read(true)
@ -308,41 +364,26 @@ impl ArtifactWriter {
}
}
}
}
impl Artifact for ArtifactWriter {
/// Get the current write position.
pub fn pos(&self) -> Result<u64> {
fn pos(&self) -> Result<u64> {
Ok(self.pos as u64)
}
// The `inline-bootstrap` option merges the blob and bootstrap into one
// file. We need some header to index the location of the blob and bootstrap,
// write_tar_header uses tar header that arranges the data as follows:
// data | tar_header | data | tar_header
// This is a tar-like structure, except that we put the tar header after the
// data. The advantage is that we do not need to determine the size of the data
// first, so that we can write the blob data by stream without seek to improve
// the performance of the blob dump by using fifo.
fn write_tar_header(&mut self, name: &str, size: u64) -> Result<Header> {
let mut header = Header::new_gnu();
header.set_path(Path::new(name))?;
header.set_entry_type(EntryType::Regular);
header.set_size(size);
// The checksum must be set to ensure that the tar reader implementation
// in golang can correctly parse the header.
header.set_cksum();
self.write_all(header.as_bytes())?;
Ok(header)
}
/// Finalize the metadata/data blob.
///
/// When `name` is None, it means that the blob is empty and should be removed.
pub fn finalize(&mut self, name: Option<String>) -> Result<()> {
fn finalize(&mut self, name: Option<String>) -> Result<()> {
self.file.flush()?;
if let Some(n) = name {
if let ArtifactStorage::FileDir(s) = &self.storage {
let path = Path::new(s).join(n);
let mut path = Path::new(&s.0).join(n);
if !s.1.is_empty() {
path.set_extension(&s.1);
}
if !path.exists() {
if let Some(tmp_file) = &self.tmp_file {
rename(tmp_file.as_path(), &path).with_context(|| {
@ -367,6 +408,73 @@ impl ArtifactWriter {
}
}
pub struct BlobCacheGenerator {
blob_data: Mutex<ArtifactFileWriter>,
blob_meta: Mutex<ArtifactFileWriter>,
}
impl BlobCacheGenerator {
pub fn new(storage: ArtifactStorage) -> Result<Self> {
Ok(BlobCacheGenerator {
blob_data: Mutex::new(ArtifactFileWriter(ArtifactWriter::new(storage.clone())?)),
blob_meta: Mutex::new(ArtifactFileWriter(ArtifactWriter::new(storage)?)),
})
}
pub fn write_blob_meta(
&self,
data: &[u8],
header: &BlobCompressionContextHeader,
) -> Result<()> {
let mut guard = self.blob_meta.lock().unwrap();
let aligned_uncompressed_size = try_round_up_4k(data.len() as u64).ok_or(anyhow!(
format!("invalid input {} for try_round_up_4k", data.len())
))?;
guard.set_len(
aligned_uncompressed_size + size_of::<BlobCompressionContextHeader>() as u64,
)?;
guard
.write_all(data)
.context("failed to write blob meta data")?;
guard.seek(std::io::SeekFrom::Start(aligned_uncompressed_size))?;
guard
.write_all(header.as_bytes())
.context("failed to write blob meta header")?;
Ok(())
}
pub fn write_blob_data(
&self,
chunk_data: &[u8],
chunk_info: &ChunkWrapper,
aligned_d_size: u32,
) -> Result<()> {
let mut guard = self.blob_data.lock().unwrap();
let curr_pos = guard.seek(std::io::SeekFrom::End(0))?;
if curr_pos < chunk_info.uncompressed_offset() + aligned_d_size as u64 {
guard.set_len(chunk_info.uncompressed_offset() + aligned_d_size as u64)?;
}
guard.seek(std::io::SeekFrom::Start(chunk_info.uncompressed_offset()))?;
guard
.write_all(&chunk_data)
.context("failed to write blob cache")?;
Ok(())
}
pub fn finalize(&self, name: &str) -> Result<()> {
let blob_data_name = format!("{}.blob.data", name);
let mut guard = self.blob_data.lock().unwrap();
guard.finalize(Some(blob_data_name))?;
drop(guard);
let blob_meta_name = format!("{}.blob.meta", name);
let mut guard = self.blob_meta.lock().unwrap();
guard.finalize(Some(blob_meta_name))
}
}
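// Illustrative only: a sketch of how the blob cache generator might be wired up,
// assuming the cache files are emitted into a build output directory.
fn example_blob_cache(dir: PathBuf) -> Result<()> {
    let storage = ArtifactStorage::FileDir((dir, String::new()));
    let cache = BlobCacheGenerator::new(storage)?;
    // During the build, write_blob_data()/write_blob_meta() are invoked per chunk and
    // per blob respectively; finalize() then renames the temp files.
    cache.finalize("example-blob-id") // yields example-blob-id.blob.data / .blob.meta
}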
#[derive(Clone)]
/// BlobContext is used to hold the blob information of a layer during build.
pub struct BlobContext {
/// Blob id (user specified or sha256(blob)).
@ -417,6 +525,9 @@ pub struct BlobContext {
/// Cipher to encrypt the RAFS blobs.
pub cipher_object: Arc<Cipher>,
pub cipher_ctx: Option<CipherContext>,
/// Whether the blob is from an external storage backend.
pub external: bool,
}
impl BlobContext {
@ -431,6 +542,7 @@ impl BlobContext {
cipher: crypt::Algorithm,
cipher_object: Arc<Cipher>,
cipher_ctx: Option<CipherContext>,
external: bool,
) -> Self {
let blob_meta_info = if features.contains(BlobFeatures::CHUNK_INFO_V2) {
BlobMetaChunkArray::new_v2()
@ -467,6 +579,8 @@ impl BlobContext {
entry_list: toc::TocEntryList::new(),
cipher_object,
cipher_ctx,
external,
};
blob_ctx
@ -505,6 +619,12 @@ impl BlobContext {
blob_ctx
.blob_meta_header
.set_encrypted(features.contains(BlobFeatures::ENCRYPTED));
blob_ctx
.blob_meta_header
.set_is_chunkdict_generated(features.contains(BlobFeatures::IS_CHUNKDICT_GENERATED));
blob_ctx
.blob_meta_header
.set_external(features.contains(BlobFeatures::EXTERNAL));
blob_ctx
}
@ -604,6 +724,7 @@ impl BlobContext {
cipher,
cipher_object,
cipher_ctx,
false,
);
blob_ctx.blob_prefetch_size = blob.prefetch_size();
blob_ctx.chunk_count = blob.chunk_count();
@ -687,6 +808,10 @@ impl BlobContext {
info.set_uncompressed_offset(chunk.uncompressed_offset());
self.blob_meta_info.add_v2_info(info);
} else {
let mut data: u64 = 0;
if chunk.has_crc32() {
data = chunk.crc32() as u64;
}
self.blob_meta_info.add_v2(
chunk.compressed_offset(),
chunk.compressed_size(),
@ -694,8 +819,9 @@ impl BlobContext {
chunk.uncompressed_size(),
chunk.is_compressed(),
chunk.is_encrypted(),
chunk.has_crc32(),
chunk.is_batch(),
0,
data,
);
}
self.blob_chunk_digest.push(chunk.id().data);
@ -722,7 +848,7 @@ impl BlobContext {
}
/// Get blob id if the blob has some chunks.
pub fn blob_id(&mut self) -> Option<String> {
pub fn blob_id(&self) -> Option<String> {
if self.uncompressed_blob_size > 0 {
Some(self.blob_id.to_string())
} else {
@ -731,7 +857,7 @@ impl BlobContext {
}
/// Helper to write data to blob and update blob hash.
pub fn write_data(&mut self, blob_writer: &mut ArtifactWriter, data: &[u8]) -> Result<()> {
pub fn write_data(&mut self, blob_writer: &mut dyn Artifact, data: &[u8]) -> Result<()> {
blob_writer.write_all(data)?;
self.blob_hash.update(data);
Ok(())
@ -740,11 +866,28 @@ impl BlobContext {
/// Helper to write a tar header to blob and update blob hash.
pub fn write_tar_header(
&mut self,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
name: &str,
size: u64,
) -> Result<Header> {
let header = blob_writer.write_tar_header(name, size)?;
// The `inline-bootstrap` option merges the blob and bootstrap into one
// file. We need some header to index the location of the blob and bootstrap,
// write_tar_header uses tar header that arranges the data as follows:
// data | tar_header | data | tar_header
// This is a tar-like structure, except that we put the tar header after the
// data. The advantage is that we do not need to determine the size of the data
// first, so that we can write the blob data by stream without seek to improve
// the performance of the blob dump by using fifo.
let mut header = Header::new_gnu();
header.set_path(Path::new(name))?;
header.set_entry_type(EntryType::Regular);
header.set_size(size);
// The checksum must be set to ensure that the tar reader implementation
// in golang can correctly parse the header.
header.set_cksum();
blob_writer.write_all(header.as_bytes())?;
self.blob_hash.update(header.as_bytes());
Ok(header)
}
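    // Illustrative layout with inline bootstrap/TOC enabled (sizes are examples; entry
    // names correspond to constants such as `toc::TOC_ENTRY_BLOB_RAW`):
    //
    //   [ blob data ..... ][ 512 B tar header ][ bootstrap ..... ][ 512 B tar header ]
    //
    // A reader walks the file backwards: it parses the trailing tar header, locates the
    // payload the header describes, then repeats for the previous entry.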
@ -773,20 +916,28 @@ pub struct BlobManager {
/// Used for chunk data de-duplication between layers (with `--parent-bootstrap`)
/// or within layer (with `--inline-bootstrap`).
pub(crate) layered_chunk_dict: HashChunkDict,
/// Whether the managed blobs are from an external storage backend.
pub external: bool,
}
impl BlobManager {
/// Create a new instance of [BlobManager].
pub fn new(digester: digest::Algorithm) -> Self {
pub fn new(digester: digest::Algorithm, external: bool) -> Self {
Self {
blobs: Vec::new(),
current_blob_index: None,
global_chunk_dict: Arc::new(()),
layered_chunk_dict: HashChunkDict::new(digester),
external,
}
}
fn new_blob_ctx(ctx: &BuildContext) -> Result<BlobContext> {
/// Set current blob index
pub fn set_current_blob_index(&mut self, index: usize) {
self.current_blob_index = Some(index as u32)
}
pub fn new_blob_ctx(&self, ctx: &BuildContext) -> Result<BlobContext> {
let (cipher_object, cipher_ctx) = match ctx.cipher {
crypt::Algorithm::None => (Default::default(), None),
crypt::Algorithm::Aes128Xts => {
@ -794,7 +945,7 @@ impl BlobManager {
let iv = crypt::Cipher::generate_random_iv()?;
let cipher_ctx = CipherContext::new(key, iv, false, ctx.cipher)?;
(
ctx.cipher.new_cipher().ok().unwrap_or(Default::default()),
ctx.cipher.new_cipher().ok().unwrap_or_default(),
Some(cipher_ctx),
)
}
@ -805,15 +956,22 @@ impl BlobManager {
)))
}
};
let mut blob_features = ctx.blob_features;
let mut compressor = ctx.compressor;
if self.external {
blob_features.insert(BlobFeatures::EXTERNAL);
compressor = compress::Algorithm::None;
}
let mut blob_ctx = BlobContext::new(
ctx.blob_id.clone(),
ctx.blob_offset,
ctx.blob_features,
ctx.compressor,
blob_features,
compressor,
ctx.digester,
ctx.cipher,
Arc::new(cipher_object),
cipher_ctx,
self.external,
);
blob_ctx.set_chunk_size(ctx.chunk_size);
blob_ctx.set_meta_info_enabled(
@ -829,7 +987,7 @@ impl BlobManager {
ctx: &BuildContext,
) -> Result<(u32, &mut BlobContext)> {
if self.current_blob_index.is_none() {
let blob_ctx = Self::new_blob_ctx(ctx)?;
let blob_ctx = self.new_blob_ctx(ctx)?;
self.current_blob_index = Some(self.alloc_index()?);
self.add_blob(blob_ctx);
}
@ -837,6 +995,21 @@ impl BlobManager {
Ok(self.get_current_blob().unwrap())
}
pub fn get_or_create_blob_by_idx(
&mut self,
ctx: &BuildContext,
blob_idx: u32,
) -> Result<(u32, &mut BlobContext)> {
let blob_idx = blob_idx as usize;
if blob_idx >= self.blobs.len() {
for _ in self.blobs.len()..=blob_idx {
let blob_ctx = self.new_blob_ctx(ctx)?;
self.add_blob(blob_ctx);
}
}
Ok((blob_idx as u32, &mut self.blobs[blob_idx as usize]))
}
/// Get the current blob object.
pub fn get_current_blob(&mut self) -> Option<(u32, &mut BlobContext)> {
if let Some(idx) = self.current_blob_index {
@ -846,6 +1019,33 @@ impl BlobManager {
}
}
/// Get or create a blob for the chunkdict; this is used for chunk deduplication.
pub fn get_or_cerate_blob_for_chunkdict(
&mut self,
ctx: &BuildContext,
id: &str,
) -> Result<(u32, &mut BlobContext)> {
let blob_mgr = Self::new(ctx.digester, false);
if self.get_blob_idx_by_id(id).is_none() {
let blob_ctx = blob_mgr.new_blob_ctx(ctx)?;
self.current_blob_index = Some(self.alloc_index()?);
self.add_blob(blob_ctx);
} else {
self.current_blob_index = self.get_blob_idx_by_id(id);
}
let (_, blob_ctx) = self.get_current_blob().unwrap();
if blob_ctx.blob_id.is_empty() {
blob_ctx.blob_id = id.to_string();
}
// Safe to unwrap because the blob context has been added.
Ok(self.get_current_blob().unwrap())
}
/// Determine if the given blob has been created.
pub fn has_blob(&self, blob_id: &str) -> bool {
self.get_blob_idx_by_id(blob_id).is_some()
}
/// Set the global chunk dictionary for chunk deduplication.
pub fn set_chunk_dict(&mut self, dict: Arc<dyn ChunkDict>) {
self.global_chunk_dict = dict
@ -988,6 +1188,7 @@ impl BlobManager {
compressed_blob_size,
blob_features,
flags,
build_ctx.is_chunkdict_generated,
);
}
RafsBlobTable::V6(table) => {
@ -1007,6 +1208,7 @@ impl BlobManager {
ctx.blob_toc_digest,
ctx.blob_meta_size,
ctx.blob_toc_size,
build_ctx.is_chunkdict_generated,
ctx.blob_meta_header,
ctx.cipher_object.clone(),
ctx.cipher_ctx.clone(),
@ -1113,6 +1315,7 @@ impl BootstrapContext {
}
/// BootstrapManager is used to hold the parent bootstrap reader and create new bootstrap context.
#[derive(Clone)]
pub struct BootstrapManager {
pub(crate) f_parent_path: Option<PathBuf>,
pub(crate) bootstrap_storage: Option<ArtifactStorage>,
@ -1149,6 +1352,7 @@ pub struct BuildContext {
pub digester: digest::Algorithm,
/// Blob encryption algorithm flag.
pub cipher: crypt::Algorithm,
pub crc32_algorithm: crc32::Algorithm,
/// Save host uid gid in each inode.
pub explicit_uidgid: bool,
/// whiteout spec: overlayfs or oci
@ -1174,6 +1378,7 @@ pub struct BuildContext {
/// Storage writing blob to single file or a directory.
pub blob_storage: Option<ArtifactStorage>,
pub external_blob_storage: Option<ArtifactStorage>,
pub blob_zran_generator: Option<Mutex<ZranContextGenerator<File>>>,
pub blob_batch_generator: Option<Mutex<BatchContextGenerator>>,
pub blob_tar_reader: Option<BufReaderInfo<File>>,
@ -1182,6 +1387,13 @@ pub struct BuildContext {
pub features: Features,
pub configuration: Arc<ConfigV2>,
/// Generate the blob cache and blob meta
pub blob_cache_generator: Option<BlobCacheGenerator>,
/// Whether this build generates a chunkdict.
pub is_chunkdict_generated: bool,
/// Nydus attributes for different build behavior.
pub attributes: Attributes,
}
impl BuildContext {
@ -1198,9 +1410,11 @@ impl BuildContext {
source_path: PathBuf,
prefetch: Prefetch,
blob_storage: Option<ArtifactStorage>,
external_blob_storage: Option<ArtifactStorage>,
blob_inline_meta: bool,
features: Features,
encrypt: bool,
attributes: Attributes,
) -> Self {
// It's a flag for images built with new nydus-image 2.2 and newer.
let mut blob_features = BlobFeatures::CAP_TAR_TOC;
@ -1222,6 +1436,7 @@ impl BuildContext {
crypt::Algorithm::None
};
let crc32_algorithm = crc32::Algorithm::Crc32Iscsi;
BuildContext {
blob_id,
aligned_chunk,
@ -1229,6 +1444,7 @@ impl BuildContext {
compressor,
digester,
cipher,
crc32_algorithm,
explicit_uidgid,
whiteout_spec,
@ -1241,6 +1457,7 @@ impl BuildContext {
prefetch,
blob_storage,
external_blob_storage,
blob_zran_generator: None,
blob_batch_generator: None,
blob_tar_reader: None,
@ -1250,6 +1467,10 @@ impl BuildContext {
features,
configuration: Arc::new(ConfigV2::default()),
blob_cache_generator: None,
is_chunkdict_generated: false,
attributes,
}
}
@ -1268,6 +1489,10 @@ impl BuildContext {
pub fn set_configuration(&mut self, config: Arc<ConfigV2>) {
self.configuration = config;
}
pub fn set_is_chunkdict(&mut self, is_chunkdict: bool) {
self.is_chunkdict_generated = is_chunkdict;
}
}
impl Default for BuildContext {
@ -1279,6 +1504,7 @@ impl Default for BuildContext {
compressor: compress::Algorithm::default(),
digester: digest::Algorithm::default(),
cipher: crypt::Algorithm::None,
crc32_algorithm: crc32::Algorithm::default(),
explicit_uidgid: true,
whiteout_spec: WhiteoutSpec::default(),
@ -1291,6 +1517,7 @@ impl Default for BuildContext {
prefetch: Prefetch::default(),
blob_storage: None,
external_blob_storage: None,
blob_zran_generator: None,
blob_batch_generator: None,
blob_tar_reader: None,
@ -1299,6 +1526,10 @@ impl Default for BuildContext {
blob_inline_meta: false,
features: Features::new(),
configuration: Arc::new(ConfigV2::default()),
blob_cache_generator: None,
is_chunkdict_generated: false,
attributes: Attributes::default(),
}
}
}
@ -1310,8 +1541,12 @@ pub struct BuildOutput {
pub blobs: Vec<String>,
/// The size of output blob in this build.
pub blob_size: Option<u64>,
/// External blob ids in the blob table of external bootstrap.
pub external_blobs: Vec<String>,
/// File path for the metadata blob.
pub bootstrap_path: Option<String>,
/// File path for the external metadata blob.
pub external_bootstrap_path: Option<String>,
}
impl fmt::Display for BuildOutput {
@ -1326,7 +1561,17 @@ impl fmt::Display for BuildOutput {
"data blob size: 0x{:x}",
self.blob_size.unwrap_or_default()
)?;
write!(f, "data blobs: {:?}", self.blobs)?;
if self.external_blobs.is_empty() {
write!(f, "data blobs: {:?}", self.blobs)?;
} else {
writeln!(f, "data blobs: {:?}", self.blobs)?;
writeln!(
f,
"external meta blob path: {}",
self.external_bootstrap_path.as_deref().unwrap_or("<none>")
)?;
write!(f, "external data blobs: {:?}", self.external_blobs)?;
}
Ok(())
}
}
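// Illustrative only: with one external blob, the summary rendered by this Display impl
// could end with lines roughly like the following (values are placeholders):
//
//   data blob size: 0x1000
//   data blobs: ["<blob-id>"]
//   external meta blob path: /path/to/external.bootstrap
//   external data blobs: ["<external-blob-id>"]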
@ -1335,20 +1580,98 @@ impl BuildOutput {
/// Create a new instance of [BuildOutput].
pub fn new(
blob_mgr: &BlobManager,
external_blob_mgr: Option<&BlobManager>,
bootstrap_storage: &Option<ArtifactStorage>,
external_bootstrap_storage: &Option<ArtifactStorage>,
) -> Result<BuildOutput> {
let blobs = blob_mgr.get_blob_ids();
let blob_size = blob_mgr.get_last_blob().map(|b| b.compressed_blob_size);
let bootstrap_path = if let Some(ArtifactStorage::SingleFile(p)) = bootstrap_storage {
Some(p.display().to_string())
} else {
None
};
let bootstrap_path = bootstrap_storage
.as_ref()
.map(|stor| stor.display().to_string());
let external_bootstrap_path = external_bootstrap_storage
.as_ref()
.map(|stor| stor.display().to_string());
let external_blobs = external_blob_mgr
.map(|mgr| mgr.get_blob_ids())
.unwrap_or_default();
Ok(Self {
blobs,
external_blobs,
blob_size,
bootstrap_path,
external_bootstrap_path,
})
}
}
#[cfg(test)]
mod tests {
use std::sync::atomic::AtomicBool;
use nydus_api::{BackendConfigV2, ConfigV2Internal, LocalFsConfig};
use super::*;
#[test]
fn test_blob_context_from() {
let mut blob = BlobInfo::new(
1,
"blob_id".to_string(),
16,
8,
4,
2,
BlobFeatures::INLINED_FS_META | BlobFeatures::SEPARATE | BlobFeatures::HAS_TOC,
);
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path = PathBuf::from(root_dir);
source_path.push("../tests/texture/blobs/be7d77eeb719f70884758d1aa800ed0fb09d701aaec469964e9d54325f0d5fef");
assert!(blob
.set_blob_id_from_meta_path(source_path.as_path())
.is_ok());
blob.set_blob_meta_size(2);
blob.set_blob_toc_size(2);
blob.set_blob_meta_digest([32u8; 32]);
blob.set_blob_toc_digest([64u8; 32]);
blob.set_blob_meta_info(1, 2, 4, 8);
let mut ctx = BuildContext::default();
ctx.configuration.internal.set_blob_accessible(true);
let config = ConfigV2 {
version: 2,
backend: Some(BackendConfigV2 {
backend_type: "localfs".to_owned(),
localdisk: None,
localfs: Some(LocalFsConfig {
blob_file: source_path.to_str().unwrap().to_owned(),
dir: "/tmp".to_owned(),
alt_dirs: vec!["/var/nydus/cache".to_owned()],
}),
oss: None,
s3: None,
registry: None,
http_proxy: None,
}),
external_backends: Vec::new(),
id: "id".to_owned(),
cache: None,
rafs: None,
overlay: None,
internal: ConfigV2Internal {
blob_accessible: Arc::new(AtomicBool::new(true)),
},
};
ctx.set_configuration(config.into());
let chunk_source = ChunkSource::Dict;
let blob_ctx = BlobContext::from(&ctx, &blob, chunk_source);
assert!(blob_ctx.is_ok());
let blob_ctx = blob_ctx.unwrap();
assert_eq!(blob_ctx.uncompressed_blob_size, 16);
assert!(blob_ctx.blob_meta_info_enabled);
}
}

View File

@ -6,37 +6,26 @@ use anyhow::Result;
use std::ops::Deref;
use super::node::Node;
use crate::{Overlay, Prefetch, Tree, TreeNode};
use crate::{Overlay, Prefetch, TreeNode};
#[derive(Clone)]
pub struct BlobLayout {}
impl BlobLayout {
pub fn layout_blob_simple(prefetch: &Prefetch, tree: &Tree) -> Result<(Vec<TreeNode>, usize)> {
let mut inodes = Vec::with_capacity(10000);
pub fn layout_blob_simple(prefetch: &Prefetch) -> Result<(Vec<TreeNode>, usize)> {
let (pre, non_pre) = prefetch.get_file_nodes();
let mut inodes: Vec<TreeNode> = pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let mut non_prefetch_inodes: Vec<TreeNode> = non_pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
// Put all prefetch inodes at the head
// NOTE: Don't try to sort readahead files by size; keeping files that belong to the
// same directory adjacent in the blob file, together with the BFS-style collection of
// descendant inodes, gives a higher chance of merging requests.
// Later, we might write chunks of data one by one according to inode number order.
let prefetches = prefetch.get_file_nodes();
for n in prefetches {
let node = n.lock().unwrap();
if Self::should_dump_node(node.deref()) {
inodes.push(n.clone());
}
}
let prefetch_entries = inodes.len();
tree.walk_bfs(true, &mut |n| -> Result<()> {
let node = n.lock_node();
// Ignore lower layer node when dump blob
if !prefetch.contains(node.deref()) && Self::should_dump_node(node.deref()) {
inodes.push(n.node.clone());
}
Ok(())
})?;
inodes.append(&mut non_prefetch_inodes);
Ok((inodes, prefetch_entries))
}
@ -46,3 +35,28 @@ impl BlobLayout {
node.overlay == Overlay::UpperAddition || node.overlay == Overlay::UpperModification
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{core::node::NodeInfo, Tree};
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
#[test]
fn test_layout_blob_simple() {
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let mut node1 = Node::new(inode.clone(), NodeInfo::default(), 1);
node1.overlay = Overlay::UpperAddition;
let tree = Tree::new(node1);
let mut prefetch = Prefetch::default();
prefetch.insert(&tree.node, tree.node.borrow().deref());
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&prefetch).unwrap();
assert_eq!(inodes.len(), 1);
assert_eq!(prefetch_entries, 0);
}
}
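The rewritten layout_blob_simple above no longer walks the tree itself; it takes the two node lists maintained by Prefetch and concatenates them, prefetch files first. Below is a minimal standalone sketch of that ordering (plain integers instead of TreeNode, and a hypothetical keep() closure standing in for should_dump_node); it is an illustration, not the nydus API.

// Standalone sketch: prefetch entries are filtered, counted, and placed
// ahead of the remaining (non-prefetch) entries.
fn layout_simple(pre: Vec<u32>, non_pre: Vec<u32>, keep: impl Fn(&u32) -> bool) -> (Vec<u32>, usize) {
    let mut inodes: Vec<u32> = pre.into_iter().filter(|x| keep(x)).collect();
    let mut rest: Vec<u32> = non_pre.into_iter().filter(|x| keep(x)).collect();
    // The prefetch entry count is taken before the non-prefetch nodes are appended.
    let prefetch_entries = inodes.len();
    inodes.append(&mut rest);
    (inodes, prefetch_entries)
}

fn main() {
    let (order, n) = layout_simple(vec![3, 1], vec![2, 4], |x| *x != 4);
    assert_eq!(order, vec![3, 1, 2]);
    assert_eq!(n, 2);
}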

View File

@ -6,7 +6,7 @@
use std::ffi::{OsStr, OsString};
use std::fmt::{self, Display, Formatter, Result as FmtResult};
use std::fs::{self, File};
use std::io::{Read, Write};
use std::io::Read;
use std::ops::Deref;
#[cfg(target_os = "linux")]
use std::os::linux::fs::MetadataExt;
@ -25,16 +25,17 @@ use nydus_rafs::metadata::{Inode, RafsVersion};
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{BlobChunkInfoV2Ondisk, BlobMetaChunkInfo};
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{compress, crypt};
use nydus_utils::{compress, crc32, crypt};
use nydus_utils::{div_round_up, event_tracer, root_tracer, try_round_up_4k, ByteSize};
use parse_size::parse_size;
use sha2::digest::Digest;
use crate::{
ArtifactWriter, BlobContext, BlobManager, BuildContext, ChunkDict, ConversionType, Overlay,
};
use crate::{BlobContext, BlobManager, BuildContext, ChunkDict, ConversionType, Overlay};
use super::context::Artifact;
/// Filesystem root path for Unix OSs.
const ROOT_PATH_NAME: &[u8] = &[b'/'];
const ROOT_PATH_NAME: &[u8] = b"/";
/// Source of chunk data: chunk dictionary, parent filesystem or builder.
#[derive(Clone, Hash, PartialEq, Eq)]
@ -219,11 +220,11 @@ impl Node {
self: &mut Node,
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
chunk_data_buf: &mut [u8],
) -> Result<u64> {
let mut reader = if self.is_reg() {
let file = File::open(&self.path())
let file = File::open(self.path())
.with_context(|| format!("failed to open node file {:?}", self.path()))?;
Some(file)
} else {
@ -243,7 +244,7 @@ impl Node {
&mut self,
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
reader: Option<&mut R>,
data_buf: &mut [u8],
) -> Result<u64> {
@ -275,6 +276,88 @@ impl Node {
None
};
if blob_mgr.external {
let external_values = ctx.attributes.get_values(self.target()).unwrap();
let external_blob_index = external_values
.get("blob_index")
.and_then(|v| v.parse::<u32>().ok())
.ok_or_else(|| anyhow!("failed to parse blob_index"))?;
let external_blob_id = external_values
.get("blob_id")
.ok_or_else(|| anyhow!("failed to parse blob_id"))?;
let external_chunk_size = external_values
.get("chunk_size")
.and_then(|v| parse_size(v).ok())
.ok_or_else(|| anyhow!("failed to parse chunk_size"))?;
let mut external_compressed_offset = external_values
.get("chunk_0_compressed_offset")
.and_then(|v| v.parse::<u64>().ok())
.ok_or_else(|| anyhow!("failed to parse chunk_0_compressed_offset"))?;
let external_compressed_size = external_values
.get("compressed_size")
.and_then(|v| v.parse::<u64>().ok())
.ok_or_else(|| anyhow!("failed to parse compressed_size"))?;
let (_, external_blob_ctx) =
blob_mgr.get_or_create_blob_by_idx(ctx, external_blob_index)?;
external_blob_ctx.blob_id = external_blob_id.to_string();
external_blob_ctx.compressed_blob_size = external_compressed_size;
external_blob_ctx.uncompressed_blob_size = external_compressed_size;
let chunk_count = self
.chunk_count(external_chunk_size as u64)
.with_context(|| {
format!("failed to get chunk count for {}", self.path().display())
})?;
self.inode.set_child_count(chunk_count);
info!(
"target {:?}, file_size {}, blob_index {}, blob_id {}, chunk_size {}, chunk_count {}",
self.target(),
self.inode.size(),
external_blob_index,
external_blob_id,
external_chunk_size,
chunk_count
);
for i in 0..self.inode.child_count() {
let mut chunk = self.inode.create_chunk();
let file_offset = i as u64 * external_chunk_size as u64;
let compressed_size = if i == self.inode.child_count() - 1 {
self.inode.size() - (external_chunk_size * i as u64)
} else {
external_chunk_size
} as u32;
chunk.set_blob_index(external_blob_index);
chunk.set_index(external_blob_ctx.alloc_chunk_index()?);
chunk.set_compressed_offset(external_compressed_offset);
chunk.set_compressed_size(compressed_size);
chunk.set_uncompressed_offset(external_compressed_offset);
chunk.set_uncompressed_size(compressed_size);
chunk.set_compressed(false);
chunk.set_file_offset(file_offset);
external_compressed_offset += compressed_size as u64;
external_blob_ctx.chunk_size = external_chunk_size as u32;
if ctx.crc32_algorithm != crc32::Algorithm::None {
self.set_external_chunk_crc32(ctx, &mut chunk, i)?
}
if let Some(h) = inode_hasher.as_mut() {
h.digest_update(chunk.id().as_ref());
}
self.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk),
});
}
if let Some(h) = inode_hasher {
self.inode.set_digest(h.digest_finalize());
}
return Ok(0);
}
// `child_count` of regular file is reused as `chunk_count`.
for i in 0..self.inode.child_count() {
let chunk_size = ctx.chunk_size;
@ -286,13 +369,14 @@ impl Node {
};
let chunk_data = &mut data_buf[0..uncompressed_size as usize];
let (mut chunk, mut chunk_info) = self.read_file_chunk(ctx, reader, chunk_data)?;
let (mut chunk, mut chunk_info) =
self.read_file_chunk(ctx, reader, chunk_data, blob_mgr.external)?;
if let Some(h) = inode_hasher.as_mut() {
h.digest_update(chunk.id().as_ref());
}
// No need to perform chunk deduplication for tar-tarfs case.
if ctx.conversion_type != ConversionType::TarToTarfs {
// No need to perform chunk deduplication for tar-tarfs/external blob case.
if ctx.conversion_type != ConversionType::TarToTarfs && !blob_mgr.external {
chunk = match self.deduplicate_chunk(
ctx,
blob_mgr,
@ -310,17 +394,23 @@ impl Node {
chunk.set_blob_index(blob_index);
chunk.set_index(chunk_index);
chunk.set_file_offset(file_offset);
let mut dumped_size = chunk.compressed_size();
if ctx.conversion_type == ConversionType::TarToTarfs {
chunk.set_uncompressed_offset(chunk.compressed_offset());
chunk.set_uncompressed_size(chunk.compressed_size());
} else if let Some(info) =
self.dump_file_chunk(ctx, blob_ctx, blob_writer, chunk_data, &mut chunk)?
{
chunk_info = Some(info);
} else {
let (info, d_size) =
self.dump_file_chunk(ctx, blob_ctx, blob_writer, chunk_data, &mut chunk)?;
if info.is_some() {
chunk_info = info;
}
if let Some(d_size) = d_size {
dumped_size = d_size;
}
}
let chunk = Arc::new(chunk);
blob_size += chunk.compressed_size() as u64;
blob_size += dumped_size as u64;
if ctx.conversion_type != ConversionType::TarToTarfs {
blob_ctx.add_chunk_meta_info(&chunk, chunk_info)?;
blob_mgr
@ -341,20 +431,43 @@ impl Node {
Ok(blob_size)
}
fn set_external_chunk_crc32(
&self,
ctx: &BuildContext,
chunk: &mut ChunkWrapper,
i: u32,
) -> Result<()> {
if let Some(crcs) = ctx.attributes.get_crcs(self.target()) {
if (i as usize) >= crcs.len() {
return Err(anyhow!(
"invalid crc index {} for file {}",
i,
self.target().display()
));
}
chunk.set_has_crc32(true);
chunk.set_crc32(crcs[i as usize]);
}
Ok(())
}
fn read_file_chunk<R: Read>(
&self,
ctx: &BuildContext,
reader: &mut R,
buf: &mut [u8],
external: bool,
) -> Result<(ChunkWrapper, Option<BlobChunkInfoV2Ondisk>)> {
let mut chunk = self.inode.create_chunk();
let mut chunk_info = None;
if let Some(ref zran) = ctx.blob_zran_generator {
let mut zran = zran.lock().unwrap();
zran.start_chunk(ctx.chunk_size as u64)?;
reader
.read_exact(buf)
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
if !external {
reader
.read_exact(buf)
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
}
let info = zran.finish_chunk()?;
chunk.set_compressed_offset(info.compressed_offset());
chunk.set_compressed_size(info.compressed_size());
@ -366,21 +479,27 @@ impl Node {
chunk.set_compressed_offset(pos);
chunk.set_compressed_size(buf.len() as u32);
chunk.set_compressed(false);
reader
.read_exact(buf)
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
} else {
if !external {
reader
.read_exact(buf)
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
}
} else if !external {
reader
.read_exact(buf)
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
}
// For tar-tarfs case, no need to compute chunk id.
if ctx.conversion_type != ConversionType::TarToTarfs {
if ctx.conversion_type != ConversionType::TarToTarfs && !external {
chunk.set_id(RafsDigest::from_buf(buf, ctx.digester));
if ctx.crc32_algorithm != crc32::Algorithm::None {
chunk.set_has_crc32(true);
chunk.set_crc32(crc32::Crc32::new(ctx.crc32_algorithm).from_buf(buf));
}
}
if ctx.cipher != crypt::Algorithm::None {
if ctx.cipher != crypt::Algorithm::None && !external {
chunk.set_encrypted(true);
}
@ -388,15 +507,18 @@ impl Node {
}
/// Dump a chunk from u8 slice into the data blob.
/// Return `BlobChunkInfoV2Ondisk` when the chunk is added into a batch chunk.
/// Return `BlobChunkInfoV2Ondisk` only when the chunk is added into a batch chunk.
/// Return the dumped size only when `BlobFeatures::SEPARATE` is not set.
/// The dumped size can be zero if the chunk data is still cached in the batch generator,
/// and it may include previously cached batch chunk data that gets flushed along the way.
fn dump_file_chunk(
&self,
ctx: &BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
chunk_data: &[u8],
chunk: &mut ChunkWrapper,
) -> Result<Option<BlobChunkInfoV2Ondisk>> {
) -> Result<(Option<BlobChunkInfoV2Ondisk>, Option<u32>)> {
let d_size = chunk_data.len() as u32;
let aligned_d_size = if ctx.aligned_chunk {
// Safe to unwrap because `chunk_size` is much less than u32::MAX.
@ -412,34 +534,47 @@ impl Node {
let mut chunk_info = None;
let encrypted = blob_ctx.blob_cipher != crypt::Algorithm::None;
let mut dumped_size = None;
if self.inode.child_count() == 1
if ctx.blob_batch_generator.is_some()
&& self.inode.child_count() == 1
&& d_size < ctx.batch_size / 2
&& ctx.blob_batch_generator.is_some()
{
// This chunk will be added into a batch chunk.
let mut batch = ctx.blob_batch_generator.as_ref().unwrap().lock().unwrap();
if batch.chunk_data_buf_len() as u32 + d_size < ctx.batch_size {
// Add into current batch chunk directly.
chunk_info = Some(batch.generate_chunk_info(pre_d_offset, d_size, encrypted)?);
chunk_info = Some(batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
pre_d_offset,
d_size,
encrypted,
)?);
batch.append_chunk_data_buf(chunk_data);
} else {
// Dump current batch chunk if exists, and then add into a new batch chunk.
if !batch.chunk_data_buf_is_empty() {
// Dump current batch chunk.
let (pre_c_offset, c_size, _) =
let (_, c_size, _) =
Self::write_chunk_data(ctx, blob_ctx, blob_writer, batch.chunk_data_buf())?;
batch.add_context(pre_c_offset, c_size);
dumped_size = Some(c_size);
batch.add_context(c_size);
batch.clear_chunk_data_buf();
}
// Add into a new batch chunk.
chunk_info = Some(batch.generate_chunk_info(pre_d_offset, d_size, encrypted)?);
chunk_info = Some(batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
pre_d_offset,
d_size,
encrypted,
)?);
batch.append_chunk_data_buf(chunk_data);
}
} else if !ctx.blob_features.contains(BlobFeatures::SEPARATE) {
// For other case which needs to write chunk data to data blobs.
// For other cases that need chunk data written to the data blob; in other words,
// `tar-ref`, `targz-ref`, `estargz-ref`, and `estargzindex-ref` are excluded.
// Interrupt and dump buffered batch chunks.
// TODO: cancel the interruption.
@ -447,9 +582,10 @@ impl Node {
let mut batch = batch.lock().unwrap();
if !batch.chunk_data_buf_is_empty() {
// Dump current batch chunk.
let (pre_c_offset, c_size, _) =
let (_, c_size, _) =
Self::write_chunk_data(ctx, blob_ctx, blob_writer, batch.chunk_data_buf())?;
batch.add_context(pre_c_offset, c_size);
dumped_size = Some(c_size);
batch.add_context(c_size);
batch.clear_chunk_data_buf();
}
}
@ -457,23 +593,27 @@ impl Node {
let (pre_c_offset, c_size, is_compressed) =
Self::write_chunk_data(ctx, blob_ctx, blob_writer, chunk_data)
.with_context(|| format!("failed to write chunk data {:?}", self.path()))?;
dumped_size = Some(dumped_size.unwrap_or(0) + c_size);
chunk.set_compressed_offset(pre_c_offset);
chunk.set_compressed_size(c_size);
chunk.set_compressed(is_compressed);
}
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.write_blob_data(chunk_data, chunk, aligned_d_size)?;
}
event_tracer!("blob_uncompressed_size", +d_size);
Ok(chunk_info)
Ok((chunk_info, dumped_size))
}
pub fn write_chunk_data(
ctx: &BuildContext,
_ctx: &BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
chunk_data: &[u8],
) -> Result<(u64, u32, bool)> {
let (compressed, is_compressed) = compress::compress(chunk_data, ctx.compressor)
let (compressed, is_compressed) = compress::compress(chunk_data, blob_ctx.blob_compressor)
.with_context(|| "failed to compress node file".to_string())?;
let encrypted = crypt::encrypt_with_context(
&compressed,
@ -483,10 +623,14 @@ impl Node {
)?;
let compressed_size = encrypted.len() as u32;
let pre_compressed_offset = blob_ctx.current_compressed_offset;
blob_writer
.write_all(&encrypted)
.context("failed to write blob")?;
blob_ctx.blob_hash.update(&encrypted);
if !blob_ctx.external {
// For an external blob, both the compressor and the encrypter should
// be none, and we don't write any data into the blob file.
blob_writer
.write_all(&encrypted)
.context("failed to write blob")?;
blob_ctx.blob_hash.update(&encrypted);
}
blob_ctx.current_compressed_offset += compressed_size as u64;
blob_ctx.compressed_blob_size += compressed_size as u64;
@ -561,6 +705,7 @@ impl Node {
// build node object from a filesystem object.
impl Node {
#[allow(clippy::too_many_arguments)]
/// Create a new instance of [Node] from a filesystem object.
pub fn from_fs_object(
version: RafsVersion,
@ -568,6 +713,7 @@ impl Node {
path: PathBuf,
overlay: Overlay,
chunk_size: u32,
file_size: u64,
explicit_uidgid: bool,
v6_force_extended_inode: bool,
) -> Result<Node> {
@ -600,7 +746,7 @@ impl Node {
v6_dirents: Vec::new(),
};
node.build_inode(chunk_size)
node.build_inode(chunk_size, file_size)
.context("failed to build Node from fs object")?;
if version.is_v6() {
node.v6_set_inode_compact();
@ -640,7 +786,7 @@ impl Node {
Ok(())
}
fn build_inode_stat(&mut self) -> Result<()> {
fn build_inode_stat(&mut self, file_size: u64) -> Result<()> {
let meta = self
.meta()
.with_context(|| format!("failed to get metadata of {}", self.path().display()))?;
@ -675,7 +821,13 @@ impl Node {
// directory entries, so let's ignore the value provided by the source filesystem and
// calculate it later ourselves.
if !self.is_dir() {
self.inode.set_size(meta.st_size());
// If the passed-in file size is non-zero while the metadata size is 0, the file is an
// external dummy file, so set the inode size to `file_size`.
if file_size != 0 && meta.st_size() == 0 {
self.inode.set_size(file_size);
} else {
self.inode.set_size(meta.st_size());
}
self.v5_set_inode_blocks();
}
self.info = Arc::new(info);
@ -683,7 +835,7 @@ impl Node {
Ok(())
}
fn build_inode(&mut self, chunk_size: u32) -> Result<()> {
fn build_inode(&mut self, chunk_size: u32, file_size: u64) -> Result<()> {
let size = self.name().byte_size();
if size > u16::MAX as usize {
bail!("file name length 0x{:x} is too big", size,);
@ -693,7 +845,7 @@ impl Node {
// NOTE: Always retrieve xattr before attr so that we can know the size of xattr pairs.
self.build_inode_xattr()
.with_context(|| format!("failed to get xattr for {}", self.path().display()))?;
self.build_inode_stat()
self.build_inode_stat(file_size)
.with_context(|| format!("failed to build inode {}", self.path().display()))?;
if self.is_reg() {
@ -865,3 +1017,259 @@ impl Node {
self.info = Arc::new(info);
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashMap, io::BufReader};
use nydus_utils::{digest, BufReaderInfo};
use vmm_sys_util::tempfile::TempFile;
use crate::{attributes::Attributes, ArtifactWriter, BlobCacheGenerator, HashChunkDict};
use super::*;
#[test]
fn test_node_chunk() {
let chunk_wrapper1 = ChunkWrapper::new(RafsVersion::V5);
let mut chunk = NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk_wrapper1),
};
println!("NodeChunk: {}", chunk);
matches!(chunk.inner.deref().clone(), ChunkWrapper::V5(_));
let chunk_wrapper2 = ChunkWrapper::new(RafsVersion::V6);
chunk.copy_from(&chunk_wrapper2);
matches!(chunk.inner.deref().clone(), ChunkWrapper::V6(_));
chunk.set_index(0x10);
assert_eq!(chunk.inner.index(), 0x10);
chunk.set_blob_index(0x20);
assert_eq!(chunk.inner.blob_index(), 0x20);
chunk.set_compressed_size(0x30);
assert_eq!(chunk.inner.compressed_size(), 0x30);
chunk.set_file_offset(0x40);
assert_eq!(chunk.inner.file_offset(), 0x40);
}
#[test]
fn test_node_dump_node_data() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path = PathBuf::from(root_dir);
source_path.push("../tests/texture/blobs/be7d77eeb719f70884758d1aa800ed0fb09d701aaec469964e9d54325f0d5fef");
let mut inode = InodeWrapper::new(RafsVersion::V5);
inode.set_child_count(2);
inode.set_size(20);
let info = NodeInfo {
explicit_uidgid: true,
src_ino: 1,
src_dev: u64::MAX,
rdev: u64::MAX,
path: source_path.clone(),
source: PathBuf::from("/"),
target: source_path.clone(),
target_vec: vec![OsString::from(source_path)],
symlink: Some(OsString::from("symlink")),
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: false,
};
let mut node = Node::new(inode, info, 1);
let mut ctx = BuildContext::default();
ctx.set_chunk_size(2);
ctx.conversion_type = ConversionType::TarToRef;
ctx.cipher = crypt::Algorithm::Aes128Xts;
let tmp_file1 = TempFile::new().unwrap();
std::fs::write(
tmp_file1.as_path(),
"This is a test!\n".repeat(32).as_bytes(),
)
.unwrap();
let buf_reader = BufReader::new(tmp_file1.into_file());
ctx.blob_tar_reader = Some(BufReaderInfo::from_buf_reader(buf_reader));
let tmp_file2 = TempFile::new().unwrap();
ctx.blob_cache_generator = Some(
BlobCacheGenerator::new(crate::ArtifactStorage::SingleFile(PathBuf::from(
tmp_file2.as_path(),
)))
.unwrap(),
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut chunk_dict = HashChunkDict::new(digest::Algorithm::Sha256);
let mut chunk_wrapper = ChunkWrapper::new(RafsVersion::V5);
chunk_wrapper.set_id(RafsDigest {
data: [
209, 217, 144, 116, 135, 113, 3, 121, 133, 92, 96, 25, 219, 145, 151, 219, 119, 47,
96, 147, 90, 51, 78, 44, 193, 149, 6, 102, 13, 173, 138, 191,
],
});
chunk_wrapper.set_uncompressed_size(2);
chunk_dict.add_chunk(Arc::new(chunk_wrapper), digest::Algorithm::Sha256);
blob_mgr.set_chunk_dict(Arc::new(chunk_dict));
let tmp_file3 = TempFile::new().unwrap();
let mut blob_writer = ArtifactWriter::new(crate::ArtifactStorage::SingleFile(
PathBuf::from(tmp_file3.as_path()),
))
.unwrap();
let mut chunk_data_buf = [1u8; 32];
node.inode.set_mode(0o755 | libc::S_IFDIR as u32);
let data_size =
node.dump_node_data(&ctx, &mut blob_mgr, &mut blob_writer, &mut chunk_data_buf);
assert!(data_size.is_ok());
assert_eq!(data_size.unwrap(), 0);
node.inode.set_mode(0o755 | libc::S_IFLNK as u32);
let data_size =
node.dump_node_data(&ctx, &mut blob_mgr, &mut blob_writer, &mut chunk_data_buf);
assert!(data_size.is_ok());
assert_eq!(data_size.unwrap(), 0);
node.inode.set_mode(0o755 | libc::S_IFBLK as u32);
let data_size =
node.dump_node_data(&ctx, &mut blob_mgr, &mut blob_writer, &mut chunk_data_buf);
assert!(data_size.is_ok());
assert_eq!(data_size.unwrap(), 0);
node.inode.set_mode(0o755 | libc::S_IFREG as u32);
let data_size =
node.dump_node_data(&ctx, &mut blob_mgr, &mut blob_writer, &mut chunk_data_buf);
assert!(data_size.is_ok());
assert_eq!(data_size.unwrap(), 18);
}
#[test]
fn test_node() {
let inode = InodeWrapper::new(RafsVersion::V5);
let info = NodeInfo {
explicit_uidgid: true,
src_ino: 1,
src_dev: u64::MAX,
rdev: u64::MAX,
path: PathBuf::new(),
source: PathBuf::new(),
target: PathBuf::new(),
target_vec: vec![OsString::new()],
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: false,
};
let mut inode1 = inode.clone();
inode1.set_size(1 << 60);
inode1.set_mode(0o755 | libc::S_IFREG as u32);
let node = Node::new(inode1, info.clone(), 1);
assert!(node.chunk_count(2).is_err());
let mut inode2 = inode.clone();
inode2.set_mode(0o755 | libc::S_IFCHR as u32);
let node = Node::new(inode2, info.clone(), 1);
assert!(node.chunk_count(2).is_ok());
assert_eq!(node.chunk_count(2).unwrap(), 0);
let mut inode3 = inode.clone();
inode3.set_mode(0o755 | libc::S_IFLNK as u32);
let node = Node::new(inode3, info.clone(), 1);
assert_eq!(node.file_type(), "symlink");
let mut inode4 = inode.clone();
inode4.set_mode(0o755 | libc::S_IFDIR as u32);
let node = Node::new(inode4, info.clone(), 1);
assert_eq!(node.file_type(), "dir");
let mut inode5 = inode.clone();
inode5.set_mode(0o755 | libc::S_IFREG as u32);
let node = Node::new(inode5, info.clone(), 1);
assert_eq!(node.file_type(), "file");
let mut info1 = info.clone();
info1.target_vec = vec![OsString::from("1"), OsString::from("2")];
let node = Node::new(inode.clone(), info1, 1);
assert_eq!(node.name(), OsString::from("2").as_os_str());
let mut info2 = info.clone();
info2.target_vec = vec![];
info2.path = PathBuf::from("/");
info2.source = PathBuf::from("/");
let node = Node::new(inode.clone(), info2, 1);
assert_eq!(node.name(), OsStr::from_bytes(ROOT_PATH_NAME));
let mut info3 = info.clone();
info3.target_vec = vec![];
info3.path = PathBuf::from("/1");
info3.source = PathBuf::from("/11");
let node = Node::new(inode.clone(), info3, 1);
assert_eq!(node.name(), OsStr::new("1"));
let target = PathBuf::from("/root/child");
assert_eq!(
Node::generate_target_vec(&target),
vec![
OsString::from("/"),
OsString::from("root"),
OsString::from("child")
]
);
let mut node = Node::new(inode, info, 1);
node.set_symlink(OsString::from("symlink"));
assert_eq!(node.info.deref().symlink, Some(OsString::from("symlink")));
let mut xatter = RafsXAttrs::new();
assert!(xatter
.add(OsString::from("user.key"), [1u8; 16].to_vec())
.is_ok());
assert!(xatter
.add(
OsString::from("system.posix_acl_default.key"),
[2u8; 8].to_vec()
)
.is_ok());
node.set_xattr(xatter);
node.inode.set_has_xattr(true);
node.remove_xattr(OsStr::new("user.key"));
assert!(node.inode.has_xattr());
node.remove_xattr(OsStr::new("system.posix_acl_default.key"));
assert!(!node.inode.has_xattr());
}
#[test]
fn test_set_external_chunk_crc32() {
let mut ctx = BuildContext {
crc32_algorithm: crc32::Algorithm::Crc32Iscsi,
attributes: Attributes {
crcs: HashMap::new(),
..Default::default()
},
..Default::default()
};
let target = PathBuf::from("/test_file");
ctx.attributes
.crcs
.insert(target.clone(), vec![0x12345678, 0x87654321]);
let node = Node::new(
InodeWrapper::new(RafsVersion::V5),
NodeInfo {
path: target.clone(),
target: target.clone(),
..Default::default()
},
1,
);
let mut chunk = node.inode.create_chunk();
print!("target: {}", node.target().display());
let result = node.set_external_chunk_crc32(&ctx, &mut chunk, 1);
assert!(result.is_ok());
assert_eq!(chunk.crc32(), 0x87654321);
assert!(chunk.has_crc32());
// test invalid crc index
let result = node.set_external_chunk_crc32(&ctx, &mut chunk, 2);
assert!(result.is_err());
let err = result.unwrap_err().to_string();
assert!(err.contains("invalid crc index 2 for file /test_file"));
}
}
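The external-blob branch above derives each chunk's size and offset purely from the attributes (chunk_size, file size, first compressed offset) rather than from file data, with the last chunk holding the remainder. A standalone sketch of that arithmetic, using a hypothetical helper name rather than the builder's own code:

// Standalone sketch, not the nydus API: slice a `size`-byte file into
// `chunk_size` pieces; the final chunk carries the remainder.
fn external_chunk_sizes(size: u64, chunk_size: u64) -> Vec<u64> {
    let count = size.div_ceil(chunk_size);
    (0..count)
        .map(|i| if i == count - 1 { size - chunk_size * i } else { chunk_size })
        .collect()
}

fn main() {
    let mib = 1024 * 1024;
    // A 10 MiB file with 4 MiB chunks: 4 MiB, 4 MiB, then a 2 MiB tail.
    assert_eq!(external_chunk_sizes(10 * mib, 4 * mib), vec![4 * mib, 4 * mib, 2 * mib]);
    // Compressed/uncompressed offsets simply accumulate these sizes,
    // starting at chunk_0_compressed_offset.
}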

View File

@ -71,6 +71,16 @@ pub enum WhiteoutSpec {
None,
}
impl fmt::Display for WhiteoutSpec {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
WhiteoutSpec::Oci => write!(f, "oci"),
WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
WhiteoutSpec::None => write!(f, "none"),
}
}
}
impl Default for WhiteoutSpec {
fn default() -> Self {
Self::Oci
@ -209,3 +219,143 @@ impl Node {
None
}
}
#[cfg(test)]
mod tests {
use nydus_rafs::metadata::{inode::InodeWrapper, layout::v5::RafsV5Inode};
use crate::core::node::NodeInfo;
use super::*;
#[test]
fn test_white_spec_from_str() {
let spec = WhiteoutSpec::default();
assert!(matches!(spec, WhiteoutSpec::Oci));
assert!(WhiteoutSpec::from_str("oci").is_ok());
assert!(WhiteoutSpec::from_str("overlayfs").is_ok());
assert!(WhiteoutSpec::from_str("none").is_ok());
assert!(WhiteoutSpec::from_str("foo").is_err());
}
#[test]
fn test_white_type_removal_check() {
let t1 = WhiteoutType::OciOpaque;
let t2 = WhiteoutType::OciRemoval;
let t3 = WhiteoutType::OverlayFsOpaque;
let t4 = WhiteoutType::OverlayFsRemoval;
assert!(!t1.is_removal());
assert!(t2.is_removal());
assert!(!t3.is_removal());
assert!(t4.is_removal());
}
#[test]
fn test_overlay_low_layer_check() {
let t1 = Overlay::Lower;
let t2 = Overlay::UpperAddition;
let t3 = Overlay::UpperModification;
assert!(t1.is_lower_layer());
assert!(!t2.is_lower_layer());
assert!(!t3.is_lower_layer());
}
#[test]
fn test_node() {
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, NodeInfo::default(), 0);
assert!(!node.is_overlayfs_whiteout(WhiteoutSpec::None));
assert!(node.is_overlayfs_whiteout(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsRemoval
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info: NodeInfo = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsOpaque
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let inode = InodeWrapper::V5(RafsV5Inode::default());
let info = NodeInfo::default();
let mut node = Node::new(inode, info, 0);
assert_eq!(node.whiteout_type(WhiteoutSpec::None), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Oci), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
node.overlay = Overlay::Lower;
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
let name = OCISPEC_WHITEOUT_PREFIX.to_string() + "foo";
info.target_vec.push(name.clone().into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciRemoval
);
assert_eq!(node.origin_name(WhiteoutType::OciRemoval).unwrap(), "foo");
assert_eq!(node.origin_name(WhiteoutType::OciOpaque), None);
assert_eq!(
node.origin_name(WhiteoutType::OverlayFsRemoval).unwrap(),
OsStr::new(&name)
);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
info.target_vec.push(OCISPEC_WHITEOUT_OPAQUE.into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciOpaque
);
}
}
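The Display impl added above pairs with the FromStr parsing exercised in test_white_spec_from_str. A standalone round-trip sketch follows; the FromStr body shown here is an assumption for illustration only, since the real parser lives elsewhere in the crate.

use std::fmt;
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum WhiteoutSpec { Oci, Overlayfs, None }

impl fmt::Display for WhiteoutSpec {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            WhiteoutSpec::Oci => write!(f, "oci"),
            WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
            WhiteoutSpec::None => write!(f, "none"),
        }
    }
}

impl FromStr for WhiteoutSpec {
    type Err = String;
    // Hypothetical parser, mirroring the strings used by Display above.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "oci" => Ok(Self::Oci),
            "overlayfs" => Ok(Self::Overlayfs),
            "none" => Ok(Self::None),
            other => Err(format!("unknown whiteout spec: {}", other)),
        }
    }
}

fn main() {
    assert_eq!(WhiteoutSpec::from_str("oci").unwrap(), WhiteoutSpec::Oci);
    assert_eq!(WhiteoutSpec::Overlayfs.to_string(), "overlayfs");
    assert!(WhiteoutSpec::from_str("foo").is_err());
}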

View File

@ -3,7 +3,6 @@
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::BTreeMap;
use std::path::PathBuf;
use std::str::FromStr;
@ -73,7 +72,7 @@ fn get_patterns() -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
fn generate_patterns(input: Vec<String>) -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let mut patterns = IndexMap::new();
for (idx, file) in input.iter().enumerate() {
for file in &input {
let file_trimmed: PathBuf = file.trim().into();
// Sanity check for the list format.
if !file_trimmed.is_absolute() {
@ -84,13 +83,21 @@ fn generate_patterns(input: Vec<String>) -> Result<IndexMap<PathBuf, Option<Tree
continue;
}
let mut skip = false;
for prefix in input.iter().take(idx) {
if file_trimmed.starts_with(prefix) {
let mut current_path = file_trimmed.clone();
let mut skip = patterns.contains_key(&current_path);
while !skip && current_path.pop() {
if patterns.contains_key(&current_path) {
skip = true;
break;
}
}
if !skip {
if skip {
warn!(
"prefetch pattern {} is covered by previous pattern and thus omitted",
file
);
} else {
debug!(
"prefetch pattern: {}, trimmed file name {:?}",
file, file_trimmed
@ -114,8 +121,16 @@ pub struct Prefetch {
patterns: IndexMap<PathBuf, Option<TreeNode>>,
// File list to help optimize the layout of data blobs.
// Files from this list may be put at the head of data blob for better prefetch performance.
files: BTreeMap<PathBuf, TreeNode>,
// Files from this list may be put at the head of the data blob for better prefetch performance.
// The `usize` records the index of the matched prefetch pattern,
// which helps to sort the prefetch files in the final layout.
// Only regular files are stored here.
files_prefetch: Vec<(TreeNode, usize)>,
// All files not stored in `files_prefetch`, including regular files, dirs, symlinks, etc.,
// kept in the same order as the BFS traversal of the file tree.
files_non_prefetch: Vec<TreeNode>,
}
impl Prefetch {
@ -131,50 +146,63 @@ impl Prefetch {
policy,
disabled: false,
patterns,
files: BTreeMap::new(),
files_prefetch: Vec::with_capacity(10000),
files_non_prefetch: Vec::with_capacity(10000),
})
}
/// Insert node into the prefetch list if it matches prefetch rules.
pub fn insert_if_need(&mut self, obj: &TreeNode, node: &Node) {
/// Insert a node into the prefetch vector if it matches a prefetch rule,
/// recording the index of the matched pattern,
/// or into the non-prefetch vector otherwise.
pub fn insert(&mut self, obj: &TreeNode, node: &Node) {
// Newly created root inode of this rafs has zero size
if self.policy == PrefetchPolicy::None
|| self.disabled
|| (node.inode.is_reg() && node.inode.size() == 0)
{
self.files_non_prefetch.push(obj.clone());
return;
}
let path = node.target();
for (f, v) in self.patterns.iter_mut() {
// As path is canonicalized, it should be reliable.
if path == f {
if self.policy == PrefetchPolicy::Fs {
let mut path = node.target().clone();
let mut exact_match = true;
loop {
if let Some((idx, _, v)) = self.patterns.get_full_mut(&path) {
if exact_match {
*v = Some(obj.clone());
}
if node.is_reg() {
self.files.insert(path.clone(), obj.clone());
self.files_prefetch.push((obj.clone(), idx));
} else {
self.files_non_prefetch.push(obj.clone());
}
} else if path.starts_with(f) && node.is_reg() {
self.files.insert(path.clone(), obj.clone());
return;
}
// If no exact match, try to match parent dir until root.
if !path.pop() {
self.files_non_prefetch.push(obj.clone());
return;
}
exact_match = false;
}
}
/// Check whether the node is in the prefetch list.
pub fn contains(&self, node: &Node) -> bool {
self.files.contains_key(node.target())
/// Get the node vectors of files in the prefetch list and the non-prefetch list.
/// Prefetch files keep the order of their matched prefetch patterns;
/// non-prefetch files keep the order of the BFS traversal of the file tree.
pub fn get_file_nodes(&self) -> (Vec<TreeNode>, Vec<TreeNode>) {
let mut p_files = self.files_prefetch.clone();
p_files.sort_by_key(|k| k.1);
let p_files = p_files.into_iter().map(|(s, _)| s).collect();
(p_files, self.files_non_prefetch.clone())
}
/// Get node index array of files in the prefetch list.
pub fn get_file_nodes(&self) -> Vec<TreeNode> {
self.files.values().cloned().collect()
}
/// Get number of prefetch rules.
/// Get the number of valid prefetch rules, i.e. patterns that matched at least one node.
pub fn fs_prefetch_rule_count(&self) -> u32 {
if self.policy == PrefetchPolicy::Fs {
self.patterns.values().len() as u32
self.patterns.values().filter(|v| v.is_some()).count() as u32
} else {
0
}
@ -185,7 +213,7 @@ impl Prefetch {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV5PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.lock().unwrap();
let node = i.borrow_mut();
assert!(node.inode.ino() < u32::MAX as u64);
prefetch_table.add_entry(node.inode.ino() as u32);
}
@ -200,7 +228,7 @@ impl Prefetch {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV6PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.lock().unwrap();
let node = i.borrow_mut();
let ino = node.inode.ino();
debug_assert!(ino > 0);
let nid = calculate_nid(node.v6_offset, meta_addr);
@ -231,13 +259,18 @@ impl Prefetch {
/// Reset to initialization state.
pub fn clear(&mut self) {
self.disabled = false;
self.files.clear();
self.patterns.clear();
self.files_prefetch.clear();
self.files_non_prefetch.clear();
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::core::node::NodeInfo;
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
use std::cell::RefCell;
#[test]
fn test_generate_pattern() {
@ -273,4 +306,86 @@ mod tests {
PrefetchPolicy::from_str("").unwrap_err();
PrefetchPolicy::from_str("invalid").unwrap_err();
}
#[test]
fn test_prefetch() {
let input = vec![
"/a/b".to_string(),
"/f".to_string(),
"/h/i".to_string(),
"/k".to_string(),
];
let patterns = generate_patterns(input).unwrap();
let mut prefetch = Prefetch {
policy: PrefetchPolicy::Fs,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10),
files_non_prefetch: Vec::with_capacity(10),
};
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let info = NodeInfo::default();
let mut info1 = info.clone();
info1.target = PathBuf::from("/f");
let node1 = Node::new(inode.clone(), info1, 1);
let node1 = TreeNode::new(RefCell::from(node1));
prefetch.insert(&node1, &node1.borrow());
let inode2 = inode.clone();
let mut info2 = info.clone();
info2.target = PathBuf::from("/a/b");
let node2 = Node::new(inode2, info2, 1);
let node2 = TreeNode::new(RefCell::from(node2));
prefetch.insert(&node2, &node2.borrow());
let inode3 = inode.clone();
let mut info3 = info.clone();
info3.target = PathBuf::from("/h/i/j");
let node3 = Node::new(inode3, info3, 1);
let node3 = TreeNode::new(RefCell::from(node3));
prefetch.insert(&node3, &node3.borrow());
let inode4 = inode.clone();
let mut info4 = info.clone();
info4.target = PathBuf::from("/z");
let node4 = Node::new(inode4, info4, 1);
let node4 = TreeNode::new(RefCell::from(node4));
prefetch.insert(&node4, &node4.borrow());
let inode5 = inode.clone();
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_size(0);
let mut info5 = info;
info5.target = PathBuf::from("/a/b/d");
let node5 = Node::new(inode5, info5, 1);
let node5 = TreeNode::new(RefCell::from(node5));
prefetch.insert(&node5, &node5.borrow());
// node1, node2
assert_eq!(prefetch.fs_prefetch_rule_count(), 2);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 4);
assert_eq!(non_pre.len(), 1);
let pre_str: Vec<String> = pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(pre_str, vec!["/a/b", "/a/b/d", "/f", "/h/i/j"]);
let non_pre_str: Vec<String> = non_pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(non_pre_str, vec!["/z"]);
prefetch.clear();
assert_eq!(prefetch.fs_prefetch_rule_count(), 0);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 0);
assert_eq!(non_pre.len(), 0);
}
}
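The reworked generate_patterns above drops a pattern when the path itself or any of its ancestors is already registered, instead of only checking earlier list entries as string prefixes. A standalone sketch of that ancestor walk (a plain HashSet instead of the builder's IndexMap; not the nydus API):

use std::collections::HashSet;
use std::path::PathBuf;

// Keep a pattern only if neither it nor any parent directory was accepted before.
fn dedup_patterns(input: &[&str]) -> Vec<PathBuf> {
    let mut kept: HashSet<PathBuf> = HashSet::new();
    let mut out = Vec::new();
    for p in input {
        let path = PathBuf::from(p.trim());
        let mut cur = path.clone();
        let mut covered = kept.contains(&cur);
        while !covered && cur.pop() {
            covered = kept.contains(&cur);
        }
        if !covered {
            kept.insert(path.clone());
            out.push(path);
        }
    }
    out
}

fn main() {
    let kept = dedup_patterns(&["/a/b", "/a/b/c", "/d"]);
    // "/a/b/c" is covered by "/a/b" and is therefore omitted.
    assert_eq!(kept, vec![PathBuf::from("/a/b"), PathBuf::from("/d")]);
}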

View File

@ -16,10 +16,12 @@
//! lower tree (MetadataTree).
//! - Traverse the merged tree (OverlayTree) to dump bootstrap and data blobs.
use std::cell::{RefCell, RefMut};
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex, MutexGuard};
use std::rc::Rc;
use std::sync::Arc;
use anyhow::{bail, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
@ -34,7 +36,7 @@ use crate::core::overlay::OVERLAYFS_WHITEOUT_OPAQUE;
use crate::{BuildContext, ChunkDict};
/// Type alias for tree internal node.
pub type TreeNode = Arc<Mutex<Node>>;
pub type TreeNode = Rc<RefCell<Node>>;
/// An in-memory tree structure to maintain information and topology of filesystem nodes.
#[derive(Clone)]
@ -52,7 +54,7 @@ impl Tree {
pub fn new(node: Node) -> Self {
let name = node.name().as_bytes().to_vec();
Tree {
node: Arc::new(Mutex::new(node)),
node: Rc::new(RefCell::new(node)),
name,
children: Vec::new(),
}
@ -81,12 +83,12 @@ impl Tree {
/// Set `Node` associated with the tree node.
pub fn set_node(&mut self, node: Node) {
self.node = Arc::new(Mutex::new(node));
self.node.replace(node);
}
/// Get mutex guard to access the associated `Node` object.
pub fn lock_node(&self) -> MutexGuard<Node> {
self.node.lock().unwrap()
/// Get mutably borrowed value to access the associated `Node` object.
pub fn borrow_mut_node(&self) -> RefMut<'_, Node> {
self.node.as_ref().borrow_mut()
}
/// Walk all nodes in DFS mode.
@ -132,7 +134,7 @@ impl Tree {
let mut dirs = Vec::with_capacity(32);
for child in &self.children {
cb(child)?;
if child.lock_node().is_dir() {
if child.borrow_mut_node().is_dir() {
dirs.push(child);
}
}
@ -172,13 +174,37 @@ impl Tree {
Some(tree)
}
/// Get the mutable tree node corresponding to the path.
pub fn get_node_mut(&mut self, path: &Path) -> Option<&mut Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
let last_idx = target_vec.len() - 1;
for name in &target_vec[1..last_idx] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &mut tree.children[idx],
None => return None,
}
}
if let Some(last_name) = target_vec.last() {
match tree.get_child_idx(last_name.as_bytes()) {
Some(idx) => Some(&mut tree.children[idx]),
None => None,
}
} else {
Some(tree)
}
}
/// Merge the upper layer tree into the lower layer tree, applying whiteout rules.
pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> {
assert_eq!(self.name, "/".as_bytes());
assert_eq!(upper.name, "/".as_bytes());
// Handle the root node.
upper.lock_node().overlay = Overlay::UpperModification;
upper.borrow_mut_node().overlay = Overlay::UpperModification;
self.node = upper.node.clone();
self.merge_children(ctx, &upper)?;
lazy_drop(upper);
@ -190,7 +216,7 @@ impl Tree {
// Handle whiteout nodes in the first round, and handle other nodes in the second round.
let mut modified = Vec::with_capacity(upper.children.len());
for u in upper.children.iter() {
let mut u_node = u.lock_node();
let mut u_node = u.borrow_mut_node();
match u_node.whiteout_type(ctx.whiteout_spec) {
Some(WhiteoutType::OciRemoval) => {
if let Some(origin_name) = u_node.origin_name(WhiteoutType::OciRemoval) {
@ -220,7 +246,7 @@ impl Tree {
let mut dirs = Vec::new();
for u in modified {
let mut u_node = u.lock_node();
let mut u_node = u.borrow_mut_node();
if let Some(idx) = self.get_child_idx(&u.name) {
u_node.overlay = Overlay::UpperModification;
self.children[idx].node = u.node.clone();
@ -299,7 +325,7 @@ impl<'a> MetadataTreeBuilder<'a> {
children.sort_unstable_by(|a, b| a.name.cmp(&b.name));
for child in children.iter_mut() {
let child_node = child.lock_node();
let child_node = child.borrow_mut_node();
if child_node.is_dir() {
let child_ino = child_node.inode.ino();
drop(child_node);
@ -397,13 +423,14 @@ mod tests {
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
assert_eq!(tree.name, tmpfile.as_path().file_name().unwrap().as_bytes());
let node1 = tree.lock_node();
let node1 = tree.borrow_mut_node();
drop(node1);
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
@ -413,12 +440,13 @@ mod tests {
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
tree.set_node(node);
let node2 = tree.lock_node();
let node2 = tree.borrow_mut_node();
assert_eq!(node2.name(), tmpfile.as_path().file_name().unwrap());
}
@ -432,6 +460,7 @@ mod tests {
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
@ -445,6 +474,7 @@ mod tests {
tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
@ -459,6 +489,7 @@ mod tests {
tmpfile3.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
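The TreeNode alias above switches from Arc&lt;Mutex&lt;Node&gt;&gt; to Rc&lt;RefCell&lt;Node&gt;&gt;, so former lock_node() callers become borrow_mut_node(). A standalone sketch of the usage difference, with a plain String standing in for Node:

use std::cell::RefCell;
use std::rc::Rc;

// Standalone sketch: shared, single-threaded ownership with runtime-checked
// borrows replaces the previous Arc<Mutex<_>> locking.
type TreeNode = Rc<RefCell<String>>;

fn main() {
    let node: TreeNode = Rc::new(RefCell::new(String::from("/")));
    let alias = node.clone();            // cheap, non-atomic refcount bump
    alias.borrow_mut().push_str("root"); // was: alias.lock().unwrap()
    assert_eq!(node.borrow().as_str(), "/root");
}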

View File

@ -92,7 +92,7 @@ impl Node {
let mut d_size = 0u64;
for child in children.iter() {
d_size += child.lock_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
d_size += child.borrow_mut_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
}
if d_size == 0 {
self.inode.set_size(4096);
@ -124,13 +124,13 @@ impl Node {
impl Bootstrap {
/// Calculate inode digest for directory.
fn v5_digest_node(&self, ctx: &mut BuildContext, tree: &Tree) {
let mut node = tree.lock_node();
let mut node = tree.borrow_mut_node();
// We have set digest for non-directory inode in the previous dump_blob workflow.
if node.is_dir() {
let mut inode_hasher = RafsDigest::hasher(ctx.digester);
for child in tree.children.iter() {
let child = child.lock_node();
let child = child.borrow_mut_node();
inode_hasher.digest_update(child.inode.digest().as_ref());
}
node.inode.set_digest(inode_hasher.digest_finalize());
@ -200,7 +200,7 @@ impl Bootstrap {
let mut has_xattr = false;
self.tree.walk_dfs_pre(&mut |t| {
let node = t.lock_node();
let node = t.borrow_mut_node();
inode_table.set(node.index, inode_offset)?;
// Add inode size
inode_offset += node.inode.inode_size() as u32;
@ -253,7 +253,7 @@ impl Bootstrap {
timing_tracer!(
{
self.tree.walk_dfs_pre(&mut |t| {
t.lock_node()
t.borrow_mut_node()
.dump_bootstrap_v5(ctx, bootstrap_ctx.writer.as_mut())
.context("failed to dump bootstrap")
})

View File

@ -21,7 +21,7 @@ use nydus_rafs::metadata::layout::v6::{
};
use nydus_rafs::metadata::RafsStore;
use nydus_rafs::RafsIoWrite;
use nydus_storage::device::BlobFeatures;
use nydus_storage::device::{BlobFeatures, BlobInfo};
use nydus_utils::{root_tracer, round_down, round_up, timing_tracer};
use super::chunk_dict::DigestWithBlobIndex;
@ -41,6 +41,7 @@ impl Node {
orig_meta_addr: u64,
meta_addr: u64,
chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>,
blobs: &[Arc<BlobInfo>],
) -> Result<()> {
let xattr_inline_count = self.info.xattrs.count_v6();
ensure!(
@ -70,7 +71,7 @@ impl Node {
if self.is_dir() {
self.v6_dump_dir(ctx, f_bootstrap, meta_addr, meta_offset, &mut inode)?;
} else if self.is_reg() {
self.v6_dump_file(ctx, f_bootstrap, chunk_cache, &mut inode)?;
self.v6_dump_file(ctx, f_bootstrap, chunk_cache, &mut inode, &blobs)?;
} else if self.is_symlink() {
self.v6_dump_symlink(ctx, f_bootstrap, &mut inode)?;
} else {
@ -86,17 +87,12 @@ impl Node {
/// Update whether compact mode can be used for this inode or not.
pub fn v6_set_inode_compact(&mut self) {
if self.info.v6_force_extended_inode
self.v6_compact_inode = !(self.info.v6_force_extended_inode
|| self.inode.uid() > u16::MAX as u32
|| self.inode.gid() > u16::MAX as u32
|| self.inode.nlink() > u16::MAX as u32
|| self.inode.size() > u32::MAX as u64
|| self.path().extension() == Some(OsStr::new("pyc"))
{
self.v6_compact_inode = false;
} else {
self.v6_compact_inode = true;
}
|| self.path().extension() == Some(OsStr::new("pyc")));
}
/// Layout the normal inode (except directory inode) into the meta blob.
@ -182,10 +178,9 @@ impl Node {
} else {
// Avoid sorting again: "." and ".." are already at the head because
// `tree.children` has already been sorted.
d_size = (".".as_bytes().len()
+ size_of::<RafsV6Dirent>()
+ "..".as_bytes().len()
+ size_of::<RafsV6Dirent>()) as u64;
d_size =
(".".len() + size_of::<RafsV6Dirent>() + "..".len() + size_of::<RafsV6Dirent>())
as u64;
for child in tree.children.iter() {
let len = child.name().len() + size_of::<RafsV6Dirent>();
// erofs disk format requires dirent to be aligned to block size.
@ -458,6 +453,7 @@ impl Node {
f_bootstrap: &mut dyn RafsIoWrite,
chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>,
inode: &mut Box<dyn RafsV6OndiskInode>,
blobs: &[Arc<BlobInfo>],
) -> Result<()> {
let mut is_continuous = true;
let mut prev = None;
@ -479,8 +475,15 @@ impl Node {
v6_chunk.set_block_addr(blk_addr);
chunks.extend(v6_chunk.as_ref());
let external =
blobs[chunk.inner.blob_index() as usize].has_feature(BlobFeatures::EXTERNAL);
let chunk_index = if external {
Some(chunk.inner.index())
} else {
None
};
chunk_cache.insert(
DigestWithBlobIndex(*chunk.inner.id(), chunk.inner.blob_index() + 1),
DigestWithBlobIndex(*chunk.inner.id(), chunk.inner.blob_index() + 1, chunk_index),
chunk.inner.clone(),
);
if let Some((prev_idx, prev_pos)) = prev {
@ -581,7 +584,7 @@ impl BuildContext {
impl Bootstrap {
pub(crate) fn v6_update_dirents(parent: &Tree, parent_offset: u64) {
let mut node = parent.lock_node();
let mut node = parent.borrow_mut_node();
let node_offset = node.v6_offset;
if !node.is_dir() {
return;
@ -601,7 +604,7 @@ impl Bootstrap {
let mut dirs: Vec<&Tree> = Vec::new();
for child in parent.children.iter() {
let child_node = child.lock_node();
let child_node = child.borrow_mut_node();
let entry = (
child_node.v6_offset,
OsStr::from_bytes(child.name()).to_owned(),
@ -675,7 +678,7 @@ impl Bootstrap {
// When using nid 0 as the root nid,
// the root directory will not be shown by glibc's getdents/readdir,
// because on some OSes ino == 0 means the corresponding file has been deleted.
let root_node_offset = self.tree.lock_node().v6_offset;
let root_node_offset = self.tree.borrow_mut_node().v6_offset;
let orig_meta_addr = root_node_offset - EROFS_BLOCK_SIZE_4096;
let meta_addr = if blob_table_size > 0 {
align_offset(
@ -709,12 +712,13 @@ impl Bootstrap {
timing_tracer!(
{
self.tree.walk_bfs(true, &mut |n| {
n.lock_node().dump_bootstrap_v6(
n.borrow_mut_node().dump_bootstrap_v6(
ctx,
bootstrap_ctx.writer.as_mut(),
orig_meta_addr,
meta_addr,
&mut chunk_cache,
&blobs,
)
})
},
@ -916,6 +920,7 @@ mod tests {
pa_aa.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
false,
false,
)
@ -943,6 +948,7 @@ mod tests {
pa.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
false,
false,
)
@ -1039,6 +1045,7 @@ mod tests {
pa_reg.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
false,
false,
)
@ -1052,6 +1059,7 @@ mod tests {
pa_pyc.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
false,
false,
)
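DigestWithBlobIndex now carries an optional per-blob chunk index, populated only for chunks that live in an EXTERNAL blob. Below is a standalone sketch of the effect on the BTreeMap chunk cache, using plain tuples instead of the real key type; the motivation stated in the comments is an assumption, inferred from the fact that external chunks skip digest computation earlier in this change.

use std::collections::BTreeMap;

fn main() {
    // Key sketch: (digest, blob_index + 1, optional chunk index for external chunks).
    let mut chunk_cache: BTreeMap<([u8; 2], u32, Option<u32>), &str> = BTreeMap::new();
    chunk_cache.insert(([0; 2], 2, Some(1)), "external chunk #1");
    chunk_cache.insert(([0; 2], 2, Some(0)), "external chunk #0");
    chunk_cache.insert(([7; 2], 1, None), "regular chunk");
    // Keys sort lexicographically, so external chunks that share a (possibly
    // uncomputed) digest still iterate in a stable, per-blob chunk order.
    let keys: Vec<_> = chunk_cache.keys().cloned().collect();
    assert_eq!(keys[0], ([0; 2], 2, Some(0)));
    assert_eq!(keys[1], ([0; 2], 2, Some(1)));
}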

View File

@ -8,9 +8,12 @@ use std::fs::DirEntry;
use anyhow::{anyhow, Context, Result};
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};
use crate::core::context::{Artifact, NoopArtifactWriter};
use crate::core::prefetch;
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapContext, BootstrapManager, BuildContext, BuildOutput,
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
};
use super::core::node::Node;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
@ -27,14 +30,14 @@ impl FilesystemTreeBuilder {
fn load_children(
&self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
parent: &TreeNode,
layer_idx: u16,
) -> Result<Vec<Tree>> {
let mut result = Vec::new();
let parent = parent.lock().unwrap();
) -> Result<(Vec<Tree>, Vec<Tree>)> {
let mut trees = Vec::new();
let mut external_trees = Vec::new();
let parent = parent.borrow();
if !parent.is_dir() {
return Ok(result);
return Ok((trees.clone(), external_trees));
}
let children = fs::read_dir(parent.path())
@ -44,12 +47,26 @@ impl FilesystemTreeBuilder {
event_tracer!("load_from_directory", +children.len());
for child in children {
let path = child.path();
let target = Node::generate_target(&path, &ctx.source_path);
let mut file_size: u64 = 0;
if ctx.attributes.is_external(&target) {
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
file_size = value.parse::<u64>().ok().ok_or_else(|| {
anyhow!(
"failed to parse file_size for external file {}",
&target.display()
)
})?;
}
}
let mut child = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
file_size,
parent.info.explicit_uidgid,
true,
)
@ -58,24 +75,41 @@ impl FilesystemTreeBuilder {
// As per the OCI spec, a whiteout file should not be present in the final image
// or filesystem; it only exists in layers.
if !bootstrap_ctx.layered
if layer_idx == 0
&& child.whiteout_type(ctx.whiteout_spec).is_some()
&& !child.is_overlayfs_opaque(ctx.whiteout_spec)
{
continue;
}
let mut child = Tree::new(child);
child.children = self.load_children(ctx, bootstrap_ctx, &child.node, layer_idx)?;
let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
let (child_children, external_children) =
self.load_children(ctx, &child.node, layer_idx)?;
child.children = child_children;
external_child.children = external_children;
child
.lock_node()
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
result.push(child);
external_child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_child.children);
if ctx.attributes.is_external(&target) {
external_trees.push(external_child);
} else {
// TODO: need to implement type=ignore for nydus attributes;
// ignore the tree as a workaround.
trees.push(child.clone());
if ctx.attributes.is_prefix_external(target) {
external_trees.push(external_child);
}
};
}
result.sort_unstable_by(|a, b| a.name().cmp(b.name()));
trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
Ok(result)
Ok((trees, external_trees))
}
}
@ -88,59 +122,46 @@ impl DirectoryBuilder {
}
/// Build node tree from a filesystem directory
fn build_tree(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
layer_idx: u16,
) -> Result<Tree> {
fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
let node = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
ctx.source_path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
0,
ctx.explicit_uidgid,
true,
)?;
let mut tree = Tree::new(node);
let mut tree = Tree::new(node.clone());
let mut external_tree = Tree::new(node);
let tree_builder = FilesystemTreeBuilder::new();
tree.children = timing_tracer!(
{ tree_builder.load_children(ctx, bootstrap_ctx, &tree.node, layer_idx) },
let (tree_children, external_tree_children) = timing_tracer!(
{ tree_builder.load_children(ctx, &tree.node, layer_idx) },
"load_from_directory"
)?;
tree.lock_node()
tree.children = tree_children;
external_tree.children = external_tree_children;
tree.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &tree.children);
external_tree
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_tree.children);
Ok(tree)
Ok((tree, external_tree))
}
}
impl Builder for DirectoryBuilder {
fn build(
fn one_build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
blob_writer: &mut Box<dyn Artifact>,
tree: Tree,
) -> Result<BuildOutput> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
let mut blob_writer = if let Some(blob_stor) = ctx.blob_storage.clone() {
ArtifactWriter::new(blob_stor)?
} else {
return Err(anyhow!(
"target blob path should always be valid for directory builder"
));
};
// Scan source directory to build upper layer tree.
let tree = timing_tracer!(
{ self.build_tree(ctx, &mut bootstrap_ctx, layer_idx) },
"build_tree"
)?;
// Build bootstrap
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
@ -148,13 +169,13 @@ impl Builder for DirectoryBuilder {
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, &bootstrap.tree, blob_mgr, &mut blob_writer,) },
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, &mut blob_writer)?;
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
@ -167,14 +188,14 @@ impl Builder for DirectoryBuilder {
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
&mut blob_writer,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
@ -183,7 +204,7 @@ impl Builder for DirectoryBuilder {
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
&mut blob_writer,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
@ -192,6 +213,55 @@ impl Builder for DirectoryBuilder {
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
impl Builder for DirectoryBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
// Scan source directory to build upper layer tree.
let (tree, external_tree) =
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build for tree
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
// Build for external tree
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
stor.add_suffix("external")
}
let mut external_blob_writer: Box<dyn Artifact> =
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let external_output = self.one_build(
ctx,
&mut external_bootstrap_mgr,
&mut external_blob_mgr,
&mut external_blob_writer,
external_tree,
)?;
output.external_bootstrap_path = external_output.bootstrap_path;
output.external_blobs = external_output.blobs;
Ok(output)
}
}
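The new build() above runs one_build twice, once for the regular tree and once for the external tree, then folds the external artifacts back into the primary output. A standalone sketch of that flow with simplified stand-in types (hypothetical field names, not the nydus BuildOutput):

#[derive(Default, Debug)]
struct SketchOutput {
    bootstrap_path: Option<String>,
    blobs: Vec<String>,
    external_bootstrap_path: Option<String>,
    external_blobs: Vec<String>,
}

fn one_build(tree: &str) -> SketchOutput {
    SketchOutput {
        bootstrap_path: Some(format!("{tree}.boot")),
        blobs: vec![format!("{tree}.blob")],
        ..Default::default()
    }
}

fn main() {
    let mut output = one_build("tree");
    let external = one_build("tree.external"); // external bootstrap gets an "external" suffix
    output.external_bootstrap_path = external.bootstrap_path;
    output.external_blobs = external.blobs;
    println!("{:?}", output);
}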

View File

@ -7,6 +7,7 @@
#[macro_use]
extern crate log;
use crate::core::context::Artifact;
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
@ -22,12 +23,16 @@ use sha2::Digest;
use self::core::node::{Node, NodeInfo};
pub use self::chunkdict_generator::ChunkdictBlobInfo;
pub use self::chunkdict_generator::ChunkdictChunkInfo;
pub use self::chunkdict_generator::Generator;
pub use self::compact::BlobCompactor;
pub use self::compact::Config as CompactConfig;
pub use self::core::bootstrap::Bootstrap;
pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict};
pub use self::core::context::{
ArtifactStorage, ArtifactWriter, BlobContext, BlobManager, BootstrapContext, BootstrapManager,
BuildContext, BuildOutput, ConversionType,
ArtifactStorage, ArtifactWriter, BlobCacheGenerator, BlobContext, BlobManager,
BootstrapContext, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
pub use self::core::feature::{Feature, Features};
pub use self::core::node::{ChunkSource, NodeChunk};
@ -36,13 +41,18 @@ pub use self::core::prefetch::{Prefetch, PrefetchPolicy};
pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode};
pub use self::directory::DirectoryBuilder;
pub use self::merge::Merger;
pub use self::optimize_prefetch::update_ctx_from_bootstrap;
pub use self::optimize_prefetch::OptimizePrefetch;
pub use self::stargz::StargzBuilder;
pub use self::tarball::TarballBuilder;
pub mod attributes;
mod chunkdict_generator;
mod compact;
mod core;
mod directory;
mod merge;
mod optimize_prefetch;
mod stargz;
mod tarball;
@ -82,7 +92,7 @@ fn dump_bootstrap(
bootstrap_ctx: &mut BootstrapContext,
bootstrap: &mut Bootstrap,
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Make sure blob id is updated according to blob hash if not specified by user.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
@ -111,9 +121,14 @@ fn dump_bootstrap(
if ctx.blob_inline_meta {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
// Ensure the blob object is created in case of no chunks generated for the blob.
let (_, blob_ctx) = blob_mgr
.get_or_create_current_blob(ctx)
.map_err(|_e| anyhow!("failed to get current blob object"))?;
let blob_ctx = if blob_mgr.external {
&mut blob_mgr.new_blob_ctx(ctx)?
} else {
let (_, blob_ctx) = blob_mgr
.get_or_create_current_blob(ctx)
.map_err(|_e| anyhow!("failed to get current blob object"))?;
blob_ctx
};
let bootstrap_offset = blob_writer.pos()?;
let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
let uncompressed_size = uncompressed_bootstrap.len();
@ -161,7 +176,7 @@ fn dump_bootstrap(
fn dump_toc(
ctx: &mut BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if ctx.features.is_enabled(Feature::BlobToc) {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
@ -181,7 +196,7 @@ fn dump_toc(
fn finalize_blob(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let is_tarfs = ctx.conversion_type == ConversionType::TarToTarfs;
@ -238,8 +253,11 @@ fn finalize_blob(
if !is_tarfs {
blob_writer.finalize(Some(blob_meta_id))?;
}
}
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.finalize(&blob_ctx.blob_id)?;
}
}
Ok(())
}
@ -348,3 +366,46 @@ impl TarBuilder {
|| path == Path::new("/.no.prefetch.landmark")
}
}
#[cfg(test)]
mod tests {
use vmm_sys_util::tempdir::TempDir;
use super::*;
#[test]
fn test_tar_builder_is_stargz_special_files() {
let builder = TarBuilder::new(true, 0, RafsVersion::V6);
let path = Path::new("/stargz.index.json");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.no.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/no.prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/tar.index.json");
assert!(!builder.is_stargz_special_files(&path));
}
#[test]
fn test_tar_builder_create_directory() {
let tmp_dir = TempDir::new().unwrap();
let target_paths = [OsString::from(tmp_dir.as_path())];
let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
let node = builder.create_directory(&target_paths);
assert!(node.is_ok());
let node = node.unwrap();
println!("Node: {}", node);
assert_eq!(node.file_type(), "dir");
assert_eq!(node.target(), tmp_dir.as_path());
assert_eq!(builder.next_ino, 1);
assert_eq!(builder.next_ino(), 2);
}
}
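The helpers changed above (`dump_bootstrap`, `dump_toc`, `finalize_blob`) now accept `&mut dyn Artifact` rather than the concrete `&mut ArtifactWriter`, so a caller holding a `Box<dyn Artifact>` can pass `blob_writer.as_mut()` and the same code path serves both the real and the no-op writer. A rough, self-contained sketch of that calling convention, with a hypothetical trait standing in for the real `Artifact`:

```rust
// Sketch only: a trait-object parameter instead of a concrete writer type.
use std::io;

trait Artifact {
    fn pos(&self) -> io::Result<u64>;
    fn write_all(&mut self, buf: &[u8]) -> io::Result<()>;
}

// Formerly something like: fn dump_toc(writer: &mut ArtifactWriter) -> ...
fn dump_toc(writer: &mut dyn Artifact) -> io::Result<()> {
    let _toc_offset = writer.pos()?; // record where the TOC starts
    writer.write_all(b"toc-bytes")
}

fn run(mut writer: Box<dyn Artifact>) -> io::Result<()> {
    // Callers with a Box<dyn Artifact> hand the trait object through as_mut().
    dump_toc(writer.as_mut())
}
```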

View File

@ -129,7 +129,7 @@ impl Merger {
}
let mut tree: Option<Tree> = None;
let mut blob_mgr = BlobManager::new(ctx.digester);
let mut blob_mgr = BlobManager::new(ctx.digester, false);
let mut blob_idx_map = HashMap::new();
let mut parent_layers = 0;
@ -257,7 +257,7 @@ impl Merger {
let upper = Tree::from_bootstrap(&rs, &mut ())?;
upper.walk_bfs(true, &mut |n| {
let mut node = n.lock_node();
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = blobs[origin_blob_index].as_ref();
@ -304,14 +304,137 @@ impl Merger {
ctx.chunk_size = chunk_size;
}
// After merging all trees, we need to re-calculate the blob index of
// referenced blobs, as the upper tree might have deleted some files
// or directories by opaques, and some blobs are dereferenced.
let mut used_blobs = HashMap::new(); // HashMap<blob_id, new_blob_index>
let mut used_blob_mgr = BlobManager::new(ctx.digester, false);
let origin_blobs = blob_mgr.get_blobs();
tree.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = origin_blobs[origin_blob_index].clone();
let origin_blob_id = blob_ctx.blob_id();
let new_blob_index = if let Some(new_blob_index) = used_blobs.get(&origin_blob_id) {
*new_blob_index
} else {
let new_blob_index = used_blob_mgr.len();
used_blobs.insert(origin_blob_id, new_blob_index);
used_blob_mgr.add_blob(blob_ctx);
new_blob_index
};
chunk.set_blob_index(new_blob_index as u32);
}
Ok(())
})?;
let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?;
let blob_table = used_blob_mgr.to_blob_table(ctx)?;
let mut bootstrap_storage = Some(target.clone());
bootstrap
.dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table)
.context(format!("dump bootstrap to {:?}", target.display()))?;
BuildOutput::new(&blob_mgr, &bootstrap_storage)
BuildOutput::new(&used_blob_mgr, None, &bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use nydus_utils::digest;
use vmm_sys_util::tempfile::TempFile;
use super::*;
#[test]
fn test_merger_get_string_from_list() {
let res = Merger::get_string_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "string2".to_owned()];
let original_ids = Some(original_ids);
let res = Merger::get_string_from_list(&original_ids, 0);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some("string1".to_owned()));
assert!(Merger::get_string_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_digest_from_list() {
let res = Merger::get_digest_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "12ab".repeat(16)];
let original_ids = Some(original_ids);
let res = Merger::get_digest_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(
res.unwrap(),
Some([
18u8, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171,
18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171
])
);
assert!(Merger::get_digest_from_list(&original_ids, 0).is_err());
assert!(Merger::get_digest_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_size_from_list() {
let res = Merger::get_size_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec![1u64, 2, 3, 4];
let original_ids = Some(original_ids);
let res = Merger::get_size_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some(2u64));
assert!(Merger::get_size_from_list(&original_ids, 4).is_err());
}
#[test]
fn test_merger_merge() {
let mut ctx = BuildContext::default();
ctx.configuration.internal.set_blob_accessible(false);
ctx.digester = digest::Algorithm::Sha256;
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path1 = PathBuf::from(root_dir);
source_path1.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let mut source_path2 = PathBuf::from(root_dir);
source_path2.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let tmp_file = TempFile::new().unwrap();
let target = ArtifactStorage::SingleFile(tmp_file.as_path().to_path_buf());
let blob_toc_digests = Some(vec![
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855".to_owned(),
"4cf0c409788fc1c149afbf4c81276b92427ae41e46412334ca495991b8526650".to_owned(),
]);
let build_output = Merger::merge(
&mut ctx,
None,
vec![source_path1, source_path2],
Some(vec!["a70f".repeat(16), "9bd3".repeat(16)]),
Some(vec!["blob_id".to_owned(), "blob_id2".to_owned()]),
Some(vec![16u64, 32u64]),
blob_toc_digests,
Some(vec![64u64, 128]),
target,
None,
Arc::new(ConfigV2::new("config_v2")),
);
assert!(build_output.is_ok());
let build_output = build_output.unwrap();
println!("BuildOutput: {}", build_output);
assert_eq!(build_output.blob_size, Some(16));
}
}
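The re-indexing pass added above walks the merged tree, keeps only the blobs whose chunks survived the overlay, and renumbers each chunk's blob index into a compacted table. Read in isolation it amounts to the sketch below; `BlobEntry` and `Chunk` are simplified stand-ins for the crate's `BlobContext` and `NodeChunk`, and the real code runs inside `walk_bfs` rather than over flat slices.

```rust
// Sketch only: drop dereferenced blobs and remap chunk blob indices.
use std::collections::HashMap;

#[derive(Clone)]
struct BlobEntry {
    blob_id: String,
}

struct Chunk {
    blob_index: u32,
}

/// Returns the compacted blob table; chunk indices are rewritten in place.
fn compact_blob_indices(origin: &[BlobEntry], chunks: &mut [Chunk]) -> Vec<BlobEntry> {
    let mut used: HashMap<String, u32> = HashMap::new(); // blob_id -> new index
    let mut compacted: Vec<BlobEntry> = Vec::new();
    for chunk in chunks.iter_mut() {
        let entry = &origin[chunk.blob_index as usize];
        let new_index = *used.entry(entry.blob_id.clone()).or_insert_with(|| {
            compacted.push(entry.clone());
            (compacted.len() - 1) as u32
        });
        chunk.blob_index = new_index;
    }
    compacted
}
```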

View File

@ -0,0 +1,302 @@
use crate::anyhow;
use crate::core::blob::Blob;
use crate::finalize_blob;
use crate::Artifact;
use crate::ArtifactWriter;
use crate::BlobContext;
use crate::BlobManager;
use crate::Bootstrap;
use crate::BootstrapManager;
use crate::BuildContext;
use crate::BuildOutput;
use crate::ChunkSource;
use crate::ConversionType;
use crate::NodeChunk;
use crate::Path;
use crate::PathBuf;
use crate::Tree;
use crate::TreeNode;
use anyhow::Context;
use anyhow::{Ok, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::layout::RafsBlobTable;
use nydus_rafs::metadata::RafsSuper;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobInfo;
use nydus_storage::meta::BatchContextGenerator;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress;
use sha2::Digest;
use std::fs::File;
use std::io::{Read, Seek, Write};
use std::mem::size_of;
use std::sync::Arc;
pub struct OptimizePrefetch {}
struct PrefetchBlobState {
blob_info: BlobInfo,
blob_ctx: BlobContext,
blob_writer: Box<dyn Artifact>,
}
impl PrefetchBlobState {
fn new(ctx: &BuildContext, blob_layer_num: u32, blobs_dir_path: &Path) -> Result<Self> {
let mut blob_info = BlobInfo::new(
blob_layer_num,
String::from("prefetch-blob"),
0,
0,
ctx.chunk_size,
u32::MAX,
ctx.blob_features,
);
blob_info.set_compressor(ctx.compressor);
blob_info.set_separated_with_prefetch_files_feature(true);
let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
blob_ctx.blob_meta_info_enabled = true;
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
blobs_dir_path.to_path_buf(),
String::new(),
)))
.map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
Ok(Self {
blob_info,
blob_ctx,
blob_writer,
})
}
}
impl OptimizePrefetch {
/// Generate a new bootstrap for prefetch.
pub fn generate_prefetch(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
blobs_dir_path: PathBuf,
prefetch_nodes: Vec<TreeNode>,
) -> Result<BuildOutput> {
// create a new blob for prefetch layer
let blob_layer_num = match blob_table {
RafsBlobTable::V5(table) => table.get_all().len(),
RafsBlobTable::V6(table) => table.get_all().len(),
};
let mut blob_state = PrefetchBlobState::new(&ctx, blob_layer_num as u32, &blobs_dir_path)?;
let mut batch = BatchContextGenerator::new(0)?;
for node in &prefetch_nodes {
Self::process_prefetch_node(
tree,
&node,
&mut blob_state,
&mut batch,
blob_table,
&blobs_dir_path,
)?;
}
let blob_mgr = Self::dump_blob(ctx, blob_table, &mut blob_state)?;
debug!("prefetch blob id: {}", ctx.blob_id);
Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
fn build_dump_bootstrap(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
) -> Result<()> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree.clone())?;
// Build bootstrap
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table_withprefetch = match blob_table {
RafsBlobTable::V5(table) => RafsBlobTable::V5(table.clone()),
RafsBlobTable::V6(table) => RafsBlobTable::V6(table.clone()),
};
bootstrap.dump(
ctx,
&mut bootstrap_mgr.bootstrap_storage,
&mut bootstrap_ctx,
&blob_table_withprefetch,
)?;
Ok(())
}
fn dump_blob(
ctx: &mut BuildContext,
blob_table: &mut RafsBlobTable,
blob_state: &mut PrefetchBlobState,
) -> Result<BlobManager> {
match blob_table {
RafsBlobTable::V5(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
RafsBlobTable::V6(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
}
let mut blob_mgr = BlobManager::new(ctx.digester, false);
blob_mgr.add_blob(blob_state.blob_ctx.clone());
blob_mgr.set_current_blob_index(0);
Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(&ctx, blob_ctx, blob_state.blob_writer.as_mut()).unwrap();
};
ctx.blob_id = String::from("");
blob_mgr.get_current_blob().unwrap().1.blob_id = String::from("");
finalize_blob(ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
ctx.blob_id = blob_mgr
.get_current_blob()
.ok_or(anyhow!("failed to get current blob"))?
.1
.blob_id
.clone();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
// Verify and update prefetch blob
assert!(
entries
.iter()
.filter(|blob| blob.blob_id() == "prefetch-blob")
.count()
== 1,
"Expected exactly one prefetch-blob"
);
// Rewrite prefetch blob id
match blob_table {
RafsBlobTable::V5(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
RafsBlobTable::V6(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
}
Ok(blob_mgr)
}
fn process_prefetch_node(
tree: &mut Tree,
node: &TreeNode,
prefetch_state: &mut PrefetchBlobState,
batch: &mut BatchContextGenerator,
blob_table: &RafsBlobTable,
blobs_dir_path: &Path,
) -> Result<()> {
let tree_node = tree
.get_node_mut(&node.borrow().path())
.ok_or(anyhow!("failed to get node"))?
.node
.as_ref();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
let blob_id = tree_node
.borrow()
.chunks
.first()
.and_then(|chunk| entries.get(chunk.inner.blob_index() as usize).cloned())
.map(|entry| entry.blob_id())
.ok_or(anyhow!("failed to get blob id"))?;
let mut blob_file = Arc::new(File::open(blobs_dir_path.join(blob_id))?);
tree_node.borrow_mut().layer_idx = prefetch_state.blob_info.blob_index() as u16;
let mut child = tree_node.borrow_mut();
let chunks: &mut Vec<NodeChunk> = child.chunks.as_mut();
let blob_ctx = &mut prefetch_state.blob_ctx;
let blob_info = &mut prefetch_state.blob_info;
let encrypted = blob_ctx.blob_compressor != compress::Algorithm::None;
for chunk in chunks {
let inner = Arc::make_mut(&mut chunk.inner);
let mut buf = vec![0u8; inner.compressed_size() as usize];
blob_file.seek(std::io::SeekFrom::Start(inner.compressed_offset()))?;
blob_file.read_exact(&mut buf)?;
prefetch_state.blob_writer.write_all(&buf)?;
let info = batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
blob_ctx.current_uncompressed_offset,
inner.uncompressed_size(),
encrypted,
)?;
inner.set_blob_index(blob_info.blob_index());
if blob_ctx.chunk_count == u32::MAX {
blob_ctx.chunk_count = 0;
}
inner.set_index(blob_ctx.chunk_count);
blob_ctx.chunk_count += 1;
inner.set_compressed_offset(blob_ctx.current_compressed_offset);
inner.set_uncompressed_offset(blob_ctx.current_uncompressed_offset);
let aligned_d_size: u64 = nydus_utils::try_round_up_4k(inner.uncompressed_size())
.ok_or_else(|| anyhow!("invalid size"))?;
blob_ctx.compressed_blob_size += inner.compressed_size() as u64;
blob_ctx.uncompressed_blob_size += aligned_d_size;
blob_ctx.current_compressed_offset += inner.compressed_size() as u64;
blob_ctx.current_uncompressed_offset += aligned_d_size;
blob_ctx.add_chunk_meta_info(&inner, Some(info))?;
blob_ctx.blob_hash.update(&buf);
blob_info.set_meta_ci_compressed_size(
(blob_info.meta_ci_compressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
blob_info.set_meta_ci_uncompressed_size(
(blob_info.meta_ci_uncompressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
}
Ok(())
}
}
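`process_prefetch_node` above copies each chunk's compressed bytes into the new prefetch blob and rebases the chunk metadata onto that blob's running offsets, keeping uncompressed offsets 4K-aligned (via `try_round_up_4k`). A simplified sketch of just the offset bookkeeping; the field names here are illustrative, not the crate's types.

```rust
// Sketch only: move a chunk's metadata onto a freshly written blob.
struct Chunk {
    compressed_size: u64,
    uncompressed_size: u64,
    compressed_offset: u64,
    uncompressed_offset: u64,
    blob_index: u32,
}

#[derive(Default)]
struct NewBlob {
    index: u32,
    compressed_cursor: u64,
    uncompressed_cursor: u64,
}

fn round_up_4k(v: u64) -> u64 {
    (v + 0xfff) & !0xfff
}

fn rebase_chunk(blob: &mut NewBlob, chunk: &mut Chunk) {
    // The chunk now lives in the prefetch blob, at the current cursors.
    chunk.blob_index = blob.index;
    chunk.compressed_offset = blob.compressed_cursor;
    chunk.uncompressed_offset = blob.uncompressed_cursor;
    // Advance the cursors; uncompressed data stays 4K-aligned.
    blob.compressed_cursor += chunk.compressed_size;
    blob.uncompressed_cursor += round_up_4k(chunk.uncompressed_size);
}
```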
fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: String) {
entries
.iter_mut()
.filter(|blob| blob.blob_id() == blob_id)
.for_each(|blob| {
let mut info = (**blob).clone();
info.set_blob_id(new_blob_id.clone());
*blob = Arc::new(info);
});
}
pub fn update_ctx_from_bootstrap(
ctx: &mut BuildContext,
config: Arc<ConfigV2>,
bootstrap_path: &Path,
) -> Result<RafsSuper> {
let (sb, _) = RafsSuper::load_from_file(bootstrap_path, config, false)?;
ctx.blob_features = sb
.superblock
.get_blob_infos()
.first()
.ok_or_else(|| anyhow!("No blob info found in superblock"))?
.features();
let config = sb.meta.get_config();
if config.is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToRafs;
}
ctx.fs_version =
RafsVersion::try_from(sb.meta.version).context("Failed to get RAFS version")?;
ctx.compressor = config.compressor;
Ok(sb)
}
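`rewrite_blob_id` above uses a copy-on-write update: blob table entries are shared behind `Arc`, so a matching entry is cloned, patched, and its slot replaced with a fresh `Arc`, leaving any other holders of the old value untouched. A standalone sketch of the idiom with a placeholder `Info` type:

```rust
// Sketch only: clone-and-swap an Arc-shared entry instead of mutating it.
use std::sync::Arc;

#[derive(Clone)]
struct Info {
    id: String,
}

fn rewrite_id(entries: &mut [Arc<Info>], old: &str, new: &str) {
    for slot in entries.iter_mut().filter(|e| e.id == old) {
        let mut copy = (**slot).clone(); // deep-copy the shared value
        copy.id = new.to_string();
        *slot = Arc::new(copy); // swap in a fresh Arc; old readers are unaffected
    }
}
```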

View File

@ -29,6 +29,8 @@ use nydus_utils::digest::{self, DigestData, RafsDigest};
use nydus_utils::{lazy_drop, root_tracer, timing_tracer, try_round_up_4k, ByteSize};
use serde::{Deserialize, Serialize};
use crate::core::context::{Artifact, NoopArtifactWriter};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
@ -56,10 +58,10 @@ struct TocEntry {
/// - block: block device
/// - fifo: fifo
/// - chunk: a chunk of regular file data As described in the above section,
/// a regular file can be divided into several chunks. TOCEntry MUST be created for each chunk.
/// TOCEntry of the first chunk of that file MUST be typed as reg. TOCEntry of each chunk after
/// 2nd MUST be typed as chunk. chunk TOCEntry MUST set offset, chunkOffset and chunkSize
/// properties.
/// a regular file can be divided into several chunks. TOCEntry MUST be created for each chunk.
/// TOCEntry of the first chunk of that file MUST be typed as reg. TOCEntry of each chunk after
/// 2nd MUST be typed as chunk. chunk TOCEntry MUST set offset, chunkOffset and chunkSize
/// properties.
#[serde(rename = "type")]
pub toc_type: String,
@ -454,7 +456,7 @@ impl StargzBuilder {
uncompressed_offset: self.uncompressed_offset,
file_offset: entry.chunk_offset as u64,
index: 0,
reserved: 0,
crc32: 0,
});
let chunk = NodeChunk {
source: ChunkSource::Build,
@ -599,7 +601,7 @@ impl StargzBuilder {
}
}
let mut tmp_node = tmp_tree.lock_node();
let mut tmp_node = tmp_tree.borrow_mut_node();
if !tmp_node.is_reg() {
bail!(
"stargz: target {} for hardlink {} is not a regular file",
@ -786,7 +788,7 @@ impl StargzBuilder {
bootstrap
.tree
.walk_bfs(true, &mut |n| {
let mut node = n.lock_node();
let mut node = n.borrow_mut_node();
let node_path = node.path();
if let Some((size, ref mut chunks)) = self.file_chunk_map.get_mut(node_path) {
node.inode.set_size(*size);
@ -800,9 +802,9 @@ impl StargzBuilder {
for (k, v) in self.hardlink_map.iter() {
match bootstrap.tree.get_node(k) {
Some(n) => {
let mut node = n.lock_node();
let target = v.lock().unwrap();
Some(t) => {
let mut node = t.borrow_mut_node();
let target = v.borrow();
node.inode.set_size(target.inode.size());
node.inode.set_child_count(target.inode.child_count());
node.chunks = target.chunks.clone();
@ -836,10 +838,10 @@ impl Builder for StargzBuilder {
} else if ctx.digester != digest::Algorithm::Sha256 {
bail!("stargz: invalid digest algorithm {:?}", ctx.digester);
}
let mut blob_writer = if let Some(blob_stor) = ctx.blob_storage.clone() {
ArtifactWriter::new(blob_stor)?
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
return Err(anyhow!("missing configuration for target path"));
Box::<NoopArtifactWriter>::default()
};
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
@ -858,13 +860,13 @@ impl Builder for StargzBuilder {
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, &bootstrap.tree, blob_mgr, &mut blob_writer) },
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, &mut blob_writer)?;
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
@ -877,14 +879,14 @@ impl Builder for StargzBuilder {
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
&mut blob_writer,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
@ -893,7 +895,7 @@ impl Builder for StargzBuilder {
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
&mut blob_writer,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
@ -902,20 +904,21 @@ impl Builder for StargzBuilder {
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec};
use crate::{
attributes::Attributes, ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec,
};
#[ignore]
#[test]
fn test_build_stargz_toc() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let mut tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path =
PathBuf::from(root_dir).join("../tests/texture/stargz/estargz_sample.json");
@ -931,19 +934,126 @@ mod tests {
ConversionType::EStargzIndexToRef,
source_path,
prefetch,
Some(ArtifactStorage::FileDir(tmp_dir.clone())),
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
false,
Attributes::default(),
);
ctx.fs_version = RafsVersion::V6;
let mut bootstrap_mgr =
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
ctx.conversion_type = ConversionType::EStargzToRafs;
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = StargzBuilder::new(0x1000000, &ctx);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
let builder = builder.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr);
assert!(builder.is_ok());
let builder = builder.unwrap();
assert_eq!(
builder.blobs,
vec![String::from(
"bd4eff3fe6f5a352457c076d2133583e43db895b4af08d717b3fbcaeca89834e"
)]
);
assert_eq!(builder.blob_size, Some(4128));
tmp_dir.push("e60676aef5cc0d5caca9f4c8031f5b0c8392a0611d44c8e1bbc46dbf7fe7bfef");
assert_eq!(
builder.bootstrap_path.unwrap(),
tmp_dir.to_str().unwrap().to_string()
)
}
#[test]
fn test_toc_entry() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let mut entry = TocEntry {
name: source_path,
toc_type: "".to_string(),
size: 0x10,
link_name: PathBuf::from("link_name"),
mode: 0,
uid: 1,
gid: 1,
uname: "user_name".to_string(),
gname: "group_name".to_string(),
dev_major: 255,
dev_minor: 33,
xattrs: Default::default(),
digest: Default::default(),
offset: 0,
chunk_offset: 0,
chunk_size: 0,
chunk_digest: "sha256:".to_owned(),
inner_offset: 0,
};
entry.chunk_digest.extend(vec!['a'; 64].iter());
entry.toc_type = "dir".to_owned();
assert!(entry.is_dir());
assert!(entry.is_supported());
assert_eq!(entry.mode(), libc::S_IFDIR as u32);
assert_eq!(entry.rdev(), u32::MAX);
entry.toc_type = "req".to_owned();
assert!(!entry.is_reg());
entry.toc_type = "reg".to_owned();
assert!(entry.is_reg());
assert!(entry.is_supported());
assert_eq!(entry.mode(), libc::S_IFREG as u32);
assert_eq!(entry.size(), 0x10);
entry.toc_type = "symlink".to_owned();
assert!(entry.is_symlink());
assert!(entry.is_supported());
assert_eq!(entry.mode(), libc::S_IFLNK as u32);
assert_eq!(entry.symlink_link_path(), Path::new("link_name"));
assert!(entry.normalize().is_ok());
entry.toc_type = "hardlink".to_owned();
assert!(entry.is_supported());
assert!(entry.is_hardlink());
assert_eq!(entry.mode(), libc::S_IFREG as u32);
assert_eq!(entry.hardlink_link_path(), Path::new("link_name"));
assert!(entry.normalize().is_ok());
entry.toc_type = "chunk".to_owned();
assert!(entry.is_supported());
assert!(entry.is_chunk());
assert_eq!(entry.mode(), 0);
assert_eq!(entry.size(), 0);
assert!(entry.normalize().is_err());
entry.toc_type = "block".to_owned();
assert!(entry.is_special());
assert!(entry.is_blockdev());
assert_eq!(entry.mode(), libc::S_IFBLK as u32);
entry.toc_type = "char".to_owned();
assert!(entry.is_special());
assert!(entry.is_chardev());
assert_eq!(entry.mode(), libc::S_IFCHR as u32);
assert_ne!(entry.size(), 0x10);
entry.toc_type = "fifo".to_owned();
assert!(entry.is_fifo());
assert!(entry.is_special());
assert_eq!(entry.mode(), libc::S_IFIFO as u32);
assert_eq!(entry.rdev(), 65313);
assert_eq!(entry.name().unwrap().to_str(), Some("all-entry-type.tar"));
entry.name = PathBuf::from("/");
assert_eq!(entry.name().unwrap().to_str(), Some("/"));
assert_ne!(entry.path(), Path::new("all-entry-type.tar"));
assert_eq!(entry.block_id().unwrap().data, [0xaa as u8; 32]);
entry.name = PathBuf::from("");
assert!(entry.normalize().is_err());
}
}
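The accessor change from `lock_node()` to `borrow_mut_node()` here (and from `lock()` to `borrow()` on hardlink targets) suggests the tree nodes moved from a lock-based wrapper to `RefCell`-style interior mutability. A generic illustration of the borrow-based accessor, assuming an `Rc<RefCell<Node>>` wrapper; this is illustrative only, not the crate's real `TreeNode` definition.

```rust
// Sketch only: interior mutability with runtime borrow checking instead of a lock.
use std::cell::{RefCell, RefMut};
use std::rc::Rc;

#[derive(Default)]
struct Node {
    size: u64,
}

#[derive(Clone, Default)]
struct TreeNode(Rc<RefCell<Node>>);

impl TreeNode {
    // Counterpart of borrow_mut_node(): panics if the node is already
    // borrowed, rather than blocking the way Mutex::lock() would.
    fn borrow_mut_node(&self) -> RefMut<'_, Node> {
        self.0.borrow_mut()
    }
}

fn set_size(n: &TreeNode, size: u64) {
    n.borrow_mut_node().size = size;
}
```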

View File

@ -8,11 +8,11 @@
//!
//! The tarball data is arranged as a sequence of tar headers with associated file data interleaved.
//! - (tar header) (tar header) (file data) (tar header) (file data) (tar header)
//! And to support read tarball data from FIFO, we could only go over the tarball stream once.
//! So the workflow is as:
//! And to support read tarball data from FIFO, we could only go over the tarball stream once.
//! So the workflow is as:
//! - for each tar header from the stream
//! -- generate RAFS filesystem node from the tar header
//! -- optionally dump file data associated with the tar header into RAFS data blob
//! -- generate RAFS filesystem node from the tar header
//! -- optionally dump file data associated with the tar header into RAFS data blob
//! - arrange all generated RAFS nodes into a RAFS filesystem tree
//! - dump the RAFS filesystem tree into RAFS metadata blob
use std::ffi::{OsStr, OsString};
@ -39,6 +39,8 @@ use nydus_utils::compress::ZlibDecoder;
use nydus_utils::digest::RafsDigest;
use nydus_utils::{div_round_up, lazy_drop, root_tracer, timing_tracer, BufReaderInfo, ByteSize};
use crate::core::context::{Artifact, NoopArtifactWriter};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput, ConversionType,
@ -99,7 +101,7 @@ struct TarballTreeBuilder<'a> {
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut ArtifactWriter,
blob_writer: &'a mut dyn Artifact,
buf: Vec<u8>,
builder: TarBuilder,
}
@ -110,7 +112,7 @@ impl<'a> TarballTreeBuilder<'a> {
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut ArtifactWriter,
blob_writer: &'a mut dyn Artifact,
layer_idx: u16,
) -> Self {
let builder = TarBuilder::new(ctx.explicit_uidgid, layer_idx, ctx.fs_version);
@ -347,7 +349,7 @@ impl<'a> TarballTreeBuilder<'a> {
}
}
}
let mut tmp_node = tmp_tree.lock_node();
let mut tmp_node = tmp_tree.borrow_mut_node();
if !tmp_node.is_reg() {
bail!(
"tarball: target {} for hardlink {} is not a regular file",
@ -450,7 +452,7 @@ impl<'a> TarballTreeBuilder<'a> {
// Tar hardlink header has zero file size and no file data associated, so copy value from
// the associated regular file.
if let Some(t) = hardlink_target {
let n = t.lock_node();
let n = t.borrow_mut_node();
if n.inode.is_v5() {
node.inode.set_digest(n.inode.digest().to_owned());
}
@ -538,7 +540,7 @@ impl<'a> TarballTreeBuilder<'a> {
for c in &mut tree.children {
Self::set_v5_dir_size(c);
}
let mut node = tree.lock_node();
let mut node = tree.borrow_mut_node();
node.v5_set_dir_size(RafsVersion::V5, &tree.children);
}
@ -580,7 +582,7 @@ impl Builder for TarballBuilder {
) -> Result<BuildOutput> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
let mut blob_writer = match self.ty {
let mut blob_writer: Box<dyn Artifact> = match self.ty {
ConversionType::EStargzToRafs
| ConversionType::EStargzToRef
| ConversionType::TargzToRafs
@ -588,9 +590,9 @@ impl Builder for TarballBuilder {
| ConversionType::TarToRafs
| ConversionType::TarToTarfs => {
if let Some(blob_stor) = ctx.blob_storage.clone() {
ArtifactWriter::new(blob_stor)?
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
return Err(anyhow!("tarball: missing configuration for target path"));
Box::<NoopArtifactWriter>::default()
}
}
_ => {
@ -602,7 +604,7 @@ impl Builder for TarballBuilder {
};
let mut tree_builder =
TarballTreeBuilder::new(self.ty, ctx, blob_mgr, &mut blob_writer, layer_idx);
TarballTreeBuilder::new(self.ty, ctx, blob_mgr, blob_writer.as_mut(), layer_idx);
let tree = timing_tracer!({ tree_builder.build_tree() }, "build_tree")?;
// Build bootstrap
@ -613,13 +615,13 @@ impl Builder for TarballBuilder {
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, &bootstrap.tree, blob_mgr, &mut blob_writer) },
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, &mut blob_writer)?;
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
@ -632,14 +634,14 @@ impl Builder for TarballBuilder {
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
&mut blob_writer,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, &mut blob_writer)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
@ -648,7 +650,7 @@ impl Builder for TarballBuilder {
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
&mut blob_writer,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
@ -657,13 +659,14 @@ impl Builder for TarballBuilder {
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage)
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::attributes::Attributes;
use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
use nydus_utils::{compress, digest};
@ -685,14 +688,18 @@ mod tests {
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir(tmp_dir.clone())),
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut bootstrap_mgr =
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
@ -717,14 +724,18 @@ mod tests {
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir(tmp_dir.clone())),
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
true,
Attributes::default(),
);
let mut bootstrap_mgr =
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)

View File

@ -5,7 +5,7 @@ description = "C wrapper library for Nydus SDK"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[lib]
@ -15,10 +15,10 @@ crate-type = ["cdylib", "staticlib"]
[dependencies]
libc = "0.2.137"
log = "0.4.17"
fuse-backend-rs = "^0.10.3"
nydus-api = { version = "0.3", path = "../api" }
nydus-rafs = { version = "0.3.1", path = "../rafs" }
nydus-storage = { version = "0.6.3", path = "../storage" }
fuse-backend-rs = "^0.12.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage" }
[features]
baekend-s3 = ["nydus-storage/backend-s3"]

View File

@ -1 +0,0 @@
bin/

View File

@ -1,21 +0,0 @@
# https://golangci-lint.run/usage/configuration#config-file
linters:
enable:
- staticcheck
- unconvert
- gofmt
- goimports
- revive
- ineffassign
- vet
- unused
- misspell
disable:
- errcheck
run:
deadline: 4m
skip-dirs:
- misc

View File

@ -1,27 +0,0 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= https://goproxy.io
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
endif
.PHONY: all build release test clean
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -v -o bin/ctr-remote ./cmd/main.go
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/ctr-remote ./cmd/main.go
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
clean:
rm -f bin/*

View File

@ -1,67 +0,0 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"fmt"
"os"
"github.com/containerd/containerd/cmd/ctr/app"
"github.com/containerd/containerd/pkg/seed" //nolint:staticcheck // Global math/rand seed is deprecated, but still used by external dependencies
"github.com/dragonflyoss/image-service/contrib/ctr-remote/commands"
"github.com/urfave/cli"
)
func init() {
// From https://github.com/containerd/containerd/blob/f7f2be732159a411eae46b78bfdb479b133a823b/cmd/ctr/main.go
//nolint:staticcheck // Global math/rand seed is deprecated, but still used by external dependencies
seed.WithTimeAndRand()
}
func main() {
customCommands := []cli.Command{commands.RpullCommand}
app := app.New()
app.Description = "NOTE: Enhanced for nydus-snapshotter\n" + app.Description
for i := range app.Commands {
if app.Commands[i].Name == "images" {
sc := map[string]cli.Command{}
for _, subcmd := range customCommands {
sc[subcmd.Name] = subcmd
}
// First, replace duplicated subcommands
for j := range app.Commands[i].Subcommands {
for name, subcmd := range sc {
if name == app.Commands[i].Subcommands[j].Name {
app.Commands[i].Subcommands[j] = subcmd
delete(sc, name)
}
}
}
// Next, append all new sub commands
for _, subcmd := range sc {
app.Commands[i].Subcommands = append(app.Commands[i].Subcommands, subcmd)
}
break
}
}
if err := app.Run(os.Args); err != nil {
fmt.Fprintf(os.Stderr, "ctr-remote: %v\n", err)
os.Exit(1)
}
}

View File

@ -1,103 +0,0 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package commands
import (
"context"
"fmt"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cmd/ctr/commands"
"github.com/containerd/containerd/cmd/ctr/commands/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/log"
"github.com/containerd/nydus-snapshotter/pkg/label"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
const (
remoteSnapshotterName = "nydus"
)
// RpullCommand is a subcommand to pull an image from a registry levaraging nydus snapshotter
var RpullCommand = cli.Command{
Name: "rpull",
Usage: "pull an image from a registry leveraging nydus-snapshotter",
ArgsUsage: "[flags] <ref>",
Description: `Fetch and prepare an image for use in containerd leveraging nydus-snapshotter.
After pulling an image, it should be ready to use the same reference in a run command.`,
Flags: append(commands.RegistryFlags, commands.LabelFlag),
Action: func(context *cli.Context) error {
var (
ref = context.Args().First()
config = &rPullConfig{}
)
if ref == "" {
return fmt.Errorf("please provide an image reference to pull")
}
client, ctx, cancel, err := commands.NewClient(context)
if err != nil {
return err
}
defer cancel()
ctx, done, err := client.WithLease(ctx)
if err != nil {
return err
}
defer done(ctx)
fc, err := content.NewFetchConfig(ctx, context)
if err != nil {
return err
}
config.FetchConfig = fc
return pull(ctx, client, ref, config)
},
}
type rPullConfig struct {
*content.FetchConfig
}
func pull(ctx context.Context, client *containerd.Client, ref string, config *rPullConfig) error {
pCtx := ctx
h := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
if desc.MediaType != images.MediaTypeDockerSchema1Manifest {
fmt.Printf("fetching %v... %v\n", desc.Digest.String()[:15], desc.MediaType)
}
return nil, nil
})
log.G(pCtx).WithField("image", ref).Debug("fetching")
configLabels := commands.LabelArgs(config.Labels)
if _, err := client.Pull(pCtx, ref, []containerd.RemoteOpt{
containerd.WithPullLabels(configLabels),
containerd.WithResolver(config.Resolver),
containerd.WithImageHandler(h),
containerd.WithPullUnpack,
containerd.WithPullSnapshotter(remoteSnapshotterName),
containerd.WithImageHandlerWrapper(label.AppendLabelsHandlerWrapper(ref)),
}...); err != nil {
return err
}
return nil
}

View File

@ -1,75 +0,0 @@
module github.com/dragonflyoss/image-service/contrib/ctr-remote
go 1.18
require (
github.com/containerd/containerd v1.7.0
github.com/containerd/nydus-snapshotter v0.10.0
github.com/opencontainers/image-spec v1.1.0-rc3
github.com/urfave/cli v1.22.12
)
require (
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 // indirect
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20221215162035-5330a85ea652 // indirect
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.10.0-rc.7 // indirect
github.com/cilium/ebpf v0.10.0 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/cgroups/v3 v3.0.1 // indirect
github.com/containerd/console v1.0.3 // indirect
github.com/containerd/continuity v0.3.0 // indirect
github.com/containerd/fifo v1.1.0 // indirect
github.com/containerd/go-cni v1.1.9 // indirect
github.com/containerd/go-runc v1.0.0 // indirect
github.com/containerd/ttrpc v1.2.1 // indirect
github.com/containerd/typeurl/v2 v2.1.0 // indirect
github.com/containernetworking/cni v1.1.2 // indirect
github.com/containernetworking/plugins v1.2.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/intel/goresctrl v0.3.0 // indirect
github.com/klauspost/compress v1.16.3 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.6.2 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/signal v0.7.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sirupsen/logrus v1.9.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel v1.14.0 // indirect
go.opentelemetry.io/otel/trace v1.14.0 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.8.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.6.0 // indirect
golang.org/x/text v0.8.0 // indirect
golang.org/x/tools v0.7.0 // indirect
google.golang.org/genproto v0.0.0-20230330200707-38013875ee22 // indirect
google.golang.org/grpc v1.54.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/apimachinery v0.26.2 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)

View File

@ -1,356 +0,0 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 h1:EKPd1INOIyr5hWOWhvpmQpY6tKjeG0hT1s3AMC/9fic=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1/go.mod h1:VzwV+t+dZ9j/H867F1M2ziD+yLHtB46oM35FxxMJ4d0=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20221215162035-5330a85ea652 h1:+vTEFqeoeur6XSq06bs+roX3YiT49gUniJK7Zky7Xjg=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20221215162035-5330a85ea652/go.mod h1:OahwfttHWG6eJ0clwcfBAHoDI6X/LV/15hx/wlMZSrU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/Microsoft/go-winio v0.6.0 h1:slsWYD/zyx7lCXoZVlvQrj0hPTM1HI4+v1sIda2yDvg=
github.com/Microsoft/go-winio v0.6.0/go.mod h1:cTAf44im0RAYeL23bpB+fzCyDH2MJiz2BO69KH/soAE=
github.com/Microsoft/hcsshim v0.10.0-rc.7 h1:HBytQPxcv8Oy4244zbQbe6hnOnx544eL5QPUqhJldz8=
github.com/Microsoft/hcsshim v0.10.0-rc.7/go.mod h1:ILuwjA+kNW+MrN/w5un7n3mTqkwsFu4Bp05/okFUZlE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
github.com/cilium/ebpf v0.10.0 h1:nk5HPMeoBXtOzbkZBWym+ZWq1GIiHUsBFXxwewXAHLQ=
github.com/cilium/ebpf v0.10.0/go.mod h1:DPiVdY/kT534dgc9ERmvP8mWA+9gvwgKfRvk4nNWnoE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM=
github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
github.com/containerd/cgroups/v3 v3.0.1 h1:4hfGvu8rfGIwVIDd+nLzn/B9ZXx4BcCjzt5ToenJRaE=
github.com/containerd/cgroups/v3 v3.0.1/go.mod h1:/vtwk1VXrtoa5AaZLkypuOJgA/6DyPMZHJPGQNtlHnw=
github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
github.com/containerd/console v1.0.3 h1:lIr7SlA5PxZyMV30bDW0MGbiOPXwc63yRuCP0ARubLw=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/containerd v1.7.0 h1:G/ZQr3gMZs6ZT0qPUZ15znx5QSdQdASW11nXTLTM2Pg=
github.com/containerd/containerd v1.7.0/go.mod h1:QfR7Efgb/6X2BDpTPJRvPTYDE9rsF0FsXX9J8sIs/sc=
github.com/containerd/continuity v0.3.0 h1:nisirsYROK15TAMVukJOUyGJjz4BNQJBVsNvAXZJ/eg=
github.com/containerd/continuity v0.3.0/go.mod h1:wJEAIwKOm/pBZuBd0JmeTvnLquTB1Ag8espWhkykbPM=
github.com/containerd/fifo v1.1.0 h1:4I2mbh5stb1u6ycIABlBw9zgtlK8viPI9QkQNRQEEmY=
github.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=
github.com/containerd/go-cni v1.1.9 h1:ORi7P1dYzCwVM6XPN4n3CbkuOx/NZ2DOqy+SHRdo9rU=
github.com/containerd/go-cni v1.1.9/go.mod h1:XYrZJ1d5W6E2VOvjffL3IZq0Dz6bsVlERHbekNK90PM=
github.com/containerd/go-runc v1.0.0 h1:oU+lLv1ULm5taqgV/CJivypVODI4SUz1znWjv3nNYS0=
github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
github.com/containerd/nydus-snapshotter v0.10.0 h1:aCQoKmksOmZ2C34znlhOCOlYExiw4s/UPPzbIFKQc8U=
github.com/containerd/nydus-snapshotter v0.10.0/go.mod h1:xEsAzeM0gZEW6POBPOa+1X7EThYsEJNWnO/fhf2moYU=
github.com/containerd/ttrpc v1.2.1 h1:VWv/Rzx023TBLv4WQ+9WPXlBG/s3rsRjY3i9AJ2BJdE=
github.com/containerd/ttrpc v1.2.1/go.mod h1:sIT6l32Ph/H9cvnJsfXM5drIVzTr5A2flTf1G5tYZak=
github.com/containerd/typeurl/v2 v2.1.0 h1:yNAhJvbNEANt7ck48IlEGOxP7YAp6LLpGn5jZACDNIE=
github.com/containerd/typeurl/v2 v2.1.0/go.mod h1:IDp2JFvbwZ31H8dQbEIY7sDl2L3o3HZj1hsSQlywkQ0=
github.com/containernetworking/cni v1.1.2 h1:wtRGZVv7olUHMOqouPpn3cXJWpJgM6+EUl31EQbXALQ=
github.com/containernetworking/cni v1.1.2/go.mod h1:sDpYKmGVENF3s6uvMvGgldDWeG8dMxakj/u+i9ht9vw=
github.com/containernetworking/plugins v1.2.0 h1:SWgg3dQG1yzUo4d9iD8cwSVh1VqI+bP7mkPDoSfP9VU=
github.com/containernetworking/plugins v1.2.0/go.mod h1:/VjX4uHecW5vVimFa1wkG4s+r/s9qIfPdqlLF4TW8c4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/frankban/quicktest v1.14.4 h1:g2rn0vABPOOXmZUj+vbmUp0lPoXEMuhTpIluN0XL9UY=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/intel/goresctrl v0.3.0 h1:K2D3GOzihV7xSBedGxONSlaw/un1LZgWsc9IfqipN4c=
github.com/intel/goresctrl v0.3.0/go.mod h1:fdz3mD85cmP9sHD8JUlrNWAxvwM86CrbmVXltEKd7zk=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.16.3 h1:XuJt9zzcnaz6a16/OU53ZjWp/v7/42WcR5t2a0PcNQY=
github.com/klauspost/compress v1.16.3/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/mountinfo v0.6.2 h1:BzJjoreD5BMFNmD9Rus6gdd1pLuecOFPt8wC+Vygl78=
github.com/moby/sys/mountinfo v0.6.2/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
github.com/moby/sys/sequential v0.5.0 h1:OPvI35Lzn9K04PBbCLW0g4LcFAJgHsvXsRyewg5lXtc=
github.com/moby/sys/sequential v0.5.0/go.mod h1:tH2cOOs5V9MlPiXcQzRC+eEyab644PWKGRYaaV5ZZlo=
github.com/moby/sys/signal v0.7.0 h1:25RW3d5TnQEoKvRbEKUGay6DCQ46IxAVTT9CUMgmsSI=
github.com/moby/sys/signal v0.7.0/go.mod h1:GQ6ObYZfqacOwTtlXvcmh9A26dVRul/hbOZn88Kg8Tg=
github.com/moby/sys/symlink v0.2.0 h1:tk1rOM+Ljp0nFmfOIBtlV3rTDlWOwFRhjEeAhZB0nZc=
github.com/moby/sys/symlink v0.2.0/go.mod h1:7uZVF2dqJjG/NsClqul95CqKOBRQyYSNnJ6BMgR/gFs=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/ginkgo/v2 v2.4.0 h1:+Ig9nvqgS5OBSACXNk15PLdp0U9XPYROt9CFzVdFGIs=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.24.2 h1:J/tulyYK6JwBldPViHJReihxxZ+22FHs0piGjQAvoUE=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.0-rc3 h1:fzg1mXZFj8YdPeNkRXMg+zb88BFV0Ys52cJydRwBkb8=
github.com/opencontainers/image-spec v1.1.0-rc3/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8=
github.com/opencontainers/runc v1.1.5 h1:L44KXEpKmfWDcS02aeGm8QNTFXTo2D+8MYGDIJ/GDEs=
github.com/opencontainers/runc v1.1.5/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg=
github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.1.0-rc.1 h1:wHa9jroFfKGQqFHj0I1fMRKLl0pfj+ynAqBxo3v6u9w=
github.com/opencontainers/runtime-spec v1.1.0-rc.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU=
github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.12 h1:igJgVw1JdKH+trcLWLeLwZjU9fEfPesQ+9/e4MQ44S8=
github.com/urfave/cli v1.22.12/go.mod h1:sSBEIC79qR6OvcmsD4U3KABeOTxDqQtdDnaFuUN30b8=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/otel v1.14.0 h1:/79Huy8wbf5DnIPhemGB+zEPVwnN6fuQybr/SRXa6hM=
go.opentelemetry.io/otel v1.14.0/go.mod h1:o4buv+dJzx8rohcUeRmWUZhqupFvzWis188WlggnNeU=
go.opentelemetry.io/otel/trace v1.14.0 h1:wp2Mmvj41tDsyAJXiWDWpfNsOiIyd38fy85pyKcFq/M=
go.opentelemetry.io/otel/trace v1.14.0/go.mod h1:8avnQLK+CG77yNLUae4ea2JDQ6iT+gozhnZjy/rw9G8=
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.9.0 h1:KENHtAZL2y3NLMYZeHY9DW8HW8V+kQyJsY/V9JlKvCs=
golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68=
golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.7.0 h1:W4OVu8VVOaIO0yzWMNdepAulS7YfoS3Zabrm8DOXXU4=
golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20230330200707-38013875ee22 h1:n3ThVoQnHbCbnkhZZ1fx3+3fBAisViSwrpbtLV7vydY=
google.golang.org/genproto v0.0.0-20230330200707-38013875ee22/go.mod h1:UUQDJDOlWu4KYeJZffbWgBkS1YFobzKbLVfK69pe0Ak=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.54.0 h1:EhTqbhiYeixwWQtAEZAxmV9MGqcjEU2mFx52xCzNyag=
google.golang.org/grpc v1.54.0/go.mod h1:PUSEXI6iWghWaB6lXM4knEgpJNu2qUcKfDtNci3EC2g=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/apimachinery v0.26.2 h1:da1u3D5wfR5u2RpLhE/ZtZS2P7QvDgLZTi9wrNZl/tQ=
k8s.io/apimachinery v0.26.2/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=

View File

@ -0,0 +1,8 @@
package main
import "fmt"
// This is a dummy program to work around goreleaser being unable to pre-build the binary.
func main() {
fmt.Println("Hello, World!")
}

File diff suppressed because it is too large

View File

@ -1,19 +1,19 @@
[package]
name = "nydus-backend-proxy"
version = "0.1.0"
version = "0.2.0"
authors = ["The Nydus Developers"]
description = "A simple HTTP server to provide a fake container registry for nydusd"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
license = "Apache-2.0"
[dependencies]
rocket = "0.5.0-rc"
http-range = "0.1.3"
nix = ">=0.23.0"
clap = "2.33"
once_cell = "1.10.0"
rocket = "0.5.0"
http-range = "0.1.5"
nix = { version = "0.28", features = ["uio"] }
clap = "4.4"
once_cell = "1.19.0"
lazy_static = "1.4"
[workspace]

View File

@ -2,29 +2,22 @@
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate lazy_static;
#[macro_use(crate_authors, crate_version)]
extern crate clap;
use std::collections::HashMap;
use std::env;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::{fs, io};
use clap::{App, Arg};
use clap::*;
use http_range::HttpRange;
use lazy_static::lazy_static;
use nix::sys::uio;
use rocket::fs::{FileServer, NamedFile};
use rocket::futures::lock::{Mutex, MutexGuard};
use rocket::http::Status;
use rocket::request::{self, FromRequest, Outcome};
use rocket::response::{self, stream::ReaderStream, Responder};
use rocket::{Request, Response};
use rocket::*;
lazy_static! {
static ref BLOB_BACKEND: Mutex<BlobBackend> = Mutex::new(BlobBackend {
@ -165,12 +158,12 @@ impl<'r> Responder<'r, 'static> for RangeStream {
let mut read = 0u64;
let startpos = self.start as i64;
let size = self.len;
let raw_fd = self.file.as_raw_fd();
let file = self.file.clone();
Response::build()
.streamed_body(ReaderStream! {
while read < size {
match uio::pread(raw_fd, &mut buf, startpos + read as i64) {
match uio::pread(file.as_ref(), &mut buf, startpos + read as i64) {
Ok(mut n) => {
n = std::cmp::min(n, (size - read) as usize);
read += n as u64;
@ -268,20 +261,31 @@ async fn fetch(
#[rocket::main]
async fn main() {
let cmd = App::new("nydus-backend-proxy")
.author(crate_authors!())
.version(crate_version!())
let cmd = Command::new("nydus-backend-proxy")
.author(env!("CARGO_PKG_AUTHORS"))
.version(env!("CARGO_PKG_VERSION"))
.about("A simple HTTP server to provide a fake container registry for nydusd.")
.arg(
Arg::with_name("blobsdir")
.short("b")
Arg::new("blobsdir")
.short('b')
.long("blobsdir")
.takes_value(true)
.required(true)
.help("path to directory hosting nydus blob files"),
)
.help_template(
"\
{before-help}{name} {version}
{author-with-newline}{about-with-newline}
{usage-heading} {usage}
{all-args}{after-help}
",
)
.get_matches();
// Safe to unwrap() because `blobsdir` takes a value.
let path = cmd.value_of("blobsdir").unwrap();
let path = cmd
.get_one::<String>("blobsdir")
.expect("required argument");
init_blob_backend(Path::new(path)).await;

View File

@ -8,14 +8,14 @@ linters:
- goimports
- revive
- ineffassign
- vet
- govet
- unused
- misspell
disable:
- errcheck
run:
deadline: 4m
skip-dirs:
timeout: 5m
issues:
exclude-dirs:
- misc

View File

@ -2,7 +2,7 @@ GIT_COMMIT := $(shell git rev-parse --verify HEAD --short=7)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= https://goproxy.io
GOPROXY ?=
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
@ -13,15 +13,17 @@ endif
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus-overlayfs ./cmd/main.go
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
lint:
golangci-lint run
clean:
rm -f bin/*

View File

@ -8,7 +8,7 @@ import (
"syscall"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
cli "github.com/urfave/cli/v2"
"golang.org/x/sys/unix"
)

View File

@ -1,15 +1,15 @@
module github.com/dragonflyoss/image-service/contrib/nydus-overlayfs
module github.com/dragonflyoss/nydus/contrib/nydus-overlayfs
go 1.18
go 1.21
require (
github.com/pkg/errors v0.9.1
github.com/urfave/cli/v2 v2.3.0
golang.org/x/sys v0.1.0
github.com/urfave/cli/v2 v2.27.1
golang.org/x/sys v0.15.0
)
require (
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect
github.com/russross/blackfriday/v2 v2.0.1 // indirect
github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e // indirect
)

View File

@ -1,17 +1,10 @@
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/urfave/cli/v2 v2.3.0 h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=
github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e h1:+SOyEddqYF09QP7vr7CgJ1eti3pY9Fn3LHO1M1r/0sI=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=

View File

@ -1,150 +0,0 @@
# Nydus Functional Test
## Introduction
Nydus functional test, a.k.a. nydus-test, is built on top of [pytest](https://docs.pytest.org/en/stable/).
It consists of two parts:
* Specific test cases located at sub-directory functional-test
* Test framework located at sub-directory framework
## Prerequisites
Debian/Ubuntu
```bash
sudo apt update && sudo apt install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3-pip libpython3.7-dev libffi-dev
python3 -m pip install --upgrade pip
# Ensure you install the modules below as the root user
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
```
## Getting Started
### Configure framework
Nydus-test is controlled and configured by `anchor_conf.json`, which it looks for in its root directory before executing any tests.
```json
{
"workspace": "/path/to/where/nydus-test/stores/intermediates",
"nydus_project": "/path/to/image-service/repo",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "127.0.0.1:5000",
"registry_namespace": "nydus",
"registry_auth": "YourRegistryAuth",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/path/to/where/backend/simulator/stores/blobs"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd"
},
"logging_file": "stderr",
"target": "gnu"
}
```
### Compile Nydus components
Before running nydus-test, please compile nydus components.
`nydusd` and `nydus-image`
```bash
cd /path/to/image-service/repo
make release
```
`nydus-backend-proxy`
```bash
cd /path/to/image-service/repo
make -C contrib/nydus-backend-proxy
```
### Define target fs structure
```yaml
depth: 4
width: 6
layers:
- layer1:
- size: 10KB
type: regular
count: 5
- size: 4MB
type: regular
count: 30
- size: 128KB
type: regular
count: 100
- size: 90MB
type: regular
count: 1
- type: symlink
count: 100
```
### Generate your own original rootfs
The framework provides a tool to generate the rootfs that will be the test target.
```text
$ sudo python3 nydus_test_config.py --dist fs_structure.yaml
INFO [nydus_test_config - 49:put_files] - putting regular, count 5
INFO [nydus_test_config - 49:put_files] - putting regular, count 30
INFO [nydus_test_config - 49:put_files] - putting regular, count 100
INFO [nydus_test_config - 49:put_files] - putting regular, count 1
INFO [nydus_test_config - 49:put_files] - putting symlink, count 100
INFO [utils - 171:timer] - Generating test layer, Takes time 0.857 seconds
INFO [nydus_test_config - 49:put_files] - putting regular, count 5
INFO [nydus_test_config - 49:put_files] - putting regular, count 30
INFO [nydus_test_config - 49:put_files] - putting regular, count 100
INFO [nydus_test_config - 49:put_files] - putting regular, count 1
INFO [nydus_test_config - 49:put_files] - putting symlink, count 100
INFO [utils - 171:timer] - Generating test parent layer, Takes time 0.760 seconds
```
## Run test
Please run the tests as the root user.
### Run All Test Cases
The whole nydus functional test suite works on top of pytest.
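A minimal sketch of running everything, assuming all cases live under the `functional-test` directory used in the commands below, is:
```bash
pytest -sv functional-test
```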
### Run a Specific Test Case
```bash
pytest -sv functional-test/test_nydus.py::test_basic
```
### Run a Set of Test Cases
```bash
pytest -sv functional-test/test_nydus.py
```
### Stop Once a Case Fails
```bash
pytest -sv functional-test/test_nydus.py::test_basic --pdb
```
### Run case Step by Step
```bash
pytest -sv functional-test/test_nydus.py::test_basic --trace
```

View File

@ -1,220 +0,0 @@
import sys
import os
import re
import shutil
import logging
import pytest
import docker
sys.path.append(os.path.realpath("framework"))
from nydus_anchor import NydusAnchor
from rafs import RafsImage, RafsConf
from backend_proxy import BackendProxy
import utils
ANCHOR = NydusAnchor()
utils.logging_setup(ANCHOR.logging_file)
os.environ["RUST_BACKTRACE"] = "1"
from tools import artifact
@pytest.fixture()
def nydus_anchor(request):
# TODO: check if nydusd executable exists and have a proper version
# TODO: check if bootstrap exists
# TODO: check if blob cache file exists and try to clear it if it does
# TODO: check if blob file was put to oss
nyta = NydusAnchor()
nyta.check_prerequisites()
logging.info("*** Testing case %s ***", os.environ.get("PYTEST_CURRENT_TEST"))
yield nyta
nyta.clear_blobcache()
if hasattr(nyta, "scratch_dir"):
logging.info("Clean up scratch dir")
shutil.rmtree(nyta.scratch_dir)
if hasattr(nyta, "nydusd") and nyta.nydusd is not None:
nyta.nydusd.shutdown()
if hasattr(nyta, "overlayfs") and os.path.ismount(nyta.overlayfs):
nyta.umount_overlayfs()
# Check if nydusd crashed.
# TODO: Where the core file is placed is controlled by the kernel.
# Check `/proc/sys/kernel/core_pattern`
files = os.listdir()
for one in files:
assert re.match(r"^core\..*", one) is None
try:
shutil.rmtree(nyta.localfs_workdir)
except FileNotFoundError:
pass
try:
nyta.cleanup_dustbin()
except FileNotFoundError:
pass
# All nydusd should stop.
assert not NydusAnchor.capture_running_nydusd()
@pytest.fixture()
def nydus_image(nydus_anchor: NydusAnchor, request):
"""
Create images using the previous version of the nydus image tool.
This fixture provides the rafs image file; the test case is not responsible
for creating the image.
"""
image = RafsImage(
nydus_anchor, nydus_anchor.source_dir, "bootstrap", "blob", clear_from_oss=True
)
yield image
try:
image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_scratch_image(nydus_anchor: NydusAnchor):
"""No longer use source_dir but use scratch_dir,
Scratch image's creation is delayed until runtime of each case.
"""
nydus_anchor.prepare_scratch_dir()
# The scratch image is not made here since the specific case decides how to
# scratch this dir
image = RafsImage(
nydus_anchor,
nydus_anchor.scratch_dir,
"bootstrap_scratched",
"blob_scratched",
clear_from_oss=True,
)
yield image
if not image.created:
return
try:
image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_parent_image(nydus_anchor: NydusAnchor):
parent_image = RafsImage(
nydus_anchor, nydus_anchor.parent_rootfs, "bootstrap_parent", "blob_parent"
)
yield parent_image
try:
parent_image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_scratch_parent_image(nydus_anchor: NydusAnchor):
nydus_anchor.prepare_scratch_parent_dir()
parent_image = RafsImage(
nydus_anchor, nydus_anchor.scratch_parent_dir, "bs_parent", "blob_parent"
)
yield parent_image
try:
parent_image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture(scope="session", autouse=False)
def collect_report(request):
"""
To enable the code coverage report, set `autouse` to True.
"""
build_dir = ANCHOR.build_dir
from coverage_collect import collect_coverage
def CC():
collect_coverage(build_dir)
request.addfinalizer(CC)
@pytest.fixture
def rafs_conf(nydus_anchor):
"""Generate conf file via libconf(https://pypi.org/project/libconf/)"""
rc = RafsConf(nydus_anchor)
rc.dump_rafs_conf()
yield rc
@pytest.fixture(scope="session")
def nydusify_converter():
# Can't access a `function` scope fixture.
os.environ["GOTRACEBACK"] = "crash"
nydusify_source_dir = os.path.join(ANCHOR.nydus_project, "contrib/nydusify")
with utils.pushd(nydusify_source_dir):
ret, _ = utils.execute(["make", "release"])
assert ret
@pytest.fixture(scope="session")
def nydus_snapshotter():
# Can't access a `function` scope fixture.
snapshotter_source = os.path.join(ANCHOR.nydus_project, "contrib/nydus-snapshotter")
with utils.pushd(snapshotter_source):
ret, _ = utils.execute(["make"])
assert ret
@pytest.fixture()
def local_registry():
docker_client = docker.from_env()
registry_container = docker_client.containers.run(
"registry:latest", detach=True, network_mode="host", remove=True
)
yield registry_container
try:
registry_container.stop()
except docker.errors.APIError:
assert False, "fail in stopping container"
try:
ANCHOR.backend_proxy_blobs_dir
@pytest.fixture(scope="module", autouse=True)
def nydus_backend_proxy():
backend_proxy = BackendProxy(
ANCHOR,
ANCHOR.backend_proxy_blobs_dir,
bin=os.path.join(
ANCHOR.nydus_project,
"contrib",
"nydus-backend-proxy",
"target",
"release",
"nydus-backend-proxy",
),
)
backend_proxy.start()
yield
backend_proxy.stop()
except AttributeError:
pass

View File

@ -1,24 +0,0 @@
from os import PathLike
import utils
class BackendProxy:
def __init__(self, anchor, blobs_dir: PathLike, bin:PathLike):
self.__blobs_dir = blobs_dir
self.bin = bin
self.anchor = anchor
def start(self):
_, self.p = utils.run(
[self.bin, "-b", self.blobs_dir()],
wait=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
def stop(self):
self.p.terminate()
self.p.wait()
def blobs_dir(self):
return self.__blobs_dir

View File

@ -1,54 +0,0 @@
import time
import hmac
import hashlib
import base64
import urllib.parse
import requests
import json
import sys
import os
from string import Template
sys.path.append(os.path.realpath("framework"))
BOT_SECRET = os.getenv("BOT_SECRET")
BOT_ACCESS_TOKEN = os.getenv("BOT_ACCESS_TOKEN")
SEND_CONTENT_TEMPLATE = """**nydus-bot**
${content}"""
class Bot:
def __init__(self):
if BOT_SECRET is None or BOT_ACCESS_TOKEN is None:
raise ValueError
timestamp = str(round(time.time() * 1000))
secret_enc = BOT_SECRET.encode("utf-8")
string_to_sign = "{}\n{}".format(timestamp, BOT_SECRET)
string_to_sign_enc = string_to_sign.encode("utf-8")
hmac_code = hmac.new(
secret_enc, string_to_sign_enc, digestmod=hashlib.sha256
).digest()
sign = urllib.parse.quote_plus(base64.b64encode(hmac_code))
self.url = f"https://oapi.dingtalk.com/robot/send?access_token={BOT_ACCESS_TOKEN}&sign={sign}&timestamp={timestamp}"
def send(self, content: str):
c = Template(SEND_CONTENT_TEMPLATE).substitute(content=content)
d = {
"msgtype": "markdown",
"markdown": {"title": "Nydus-bot", "text": c},
}
ret = requests.post(
self.url, headers={"Content-Type": "application/json"}, data=json.dumps(d)
)
print(ret.__dict__)
if __name__ == "__main__":
bot = Bot()
bot.send(sys.argv[1])

View File

@ -1,5 +0,0 @@
import os
ANCHOR_PATH = os.path.join(
os.getenv("ANCHOR_PATH", default=os.getcwd()), "anchor_conf.json"
)

View File

@ -1,88 +0,0 @@
import tempfile
import subprocess
import toml
import os
from snapshotter import Snapshotter
import utils
class Containerd(utils.ArtifactProcess):
state_dir = "/run/nydus-test_containerd"
def __init__(self, anchor, snapshotter: Snapshotter) -> None:
self.anchor = anchor
self.containerd_bin = anchor.containerd_bin
self.snapshotter = snapshotter
def gen_config(self):
_, p = utils.run(
[self.containerd_bin, "config", "default"], stdout=subprocess.PIPE
)
out, _ = p.communicate()
config = toml.loads(out.decode())
config["state"] = self.state_dir
self.__address = config["grpc"]["address"] = os.path.join(
self.state_dir, "containerd.sock"
)
config["plugins"]["io.containerd.grpc.v1.cri"]["containerd"][
"snapshotter"
] = "nydus"
config["plugins"]["io.containerd.grpc.v1.cri"]["sandbox_image"] = "google/pause"
config["plugins"]["io.containerd.grpc.v1.cri"]["containerd"][
"disable_snapshot_annotations"
] = False
config["plugins"]["io.containerd.runtime.v1.linux"]["no_shim"] = True
self.__root = tempfile.TemporaryDirectory(
dir=self.anchor.workspace, suffix="root"
)
config["root"] = self.__root.name
config["proxy_plugins"] = {
"nydus": {
"type": "snapshot",
"address": self.snapshotter.sock(),
}
}
self.config = tempfile.NamedTemporaryFile(mode="w", suffix="config.toml")
self.config.write(toml.dumps(config))
self.config.flush()
return self
@property
def root(self):
return self.__root.name
def run(self):
_, self.p = utils.run(
[self.containerd_bin, "--config", self.config.name],
wait=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
@property
def address(self):
return self.__address
def remove_image_sync(self, repo):
cmd = [
"ctr",
"-n",
"k8s.io",
"-a",
self.__address,
"images",
"rm",
repo,
"--sync",
]
ret, out = utils.execute(cmd)
assert ret
def shutdown(self):
self.p.terminate()
self.p.wait()

View File

@ -1,32 +0,0 @@
import utils
import os
import sys
from argparse import ArgumentParser
def collect_coverage(source_dir, target_dir, report):
"""
Example:
grcov ./target/debug/ -s . -t lcov --llvm --branch --ignore-not-existing -o ./target/debug/coverage/
"""
cmd = f"framework/bin/grcov {target_dir} -s {source_dir} -t html --llvm --branch \
--ignore-not-existing -o {report}/coverage_report"
utils.execute(cmd, shell=True)
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--source", help="path to source code", type=str)
parser.add_argument("--target", help="path to build target directory", type=str)
args = parser.parse_args()
source = args.source
target = args.target
report = "."
os.environ["RUSTFLAGS"] = "-Zinstrument-coverage"
collect_coverage(source, target, report)

View File

@ -1,241 +0,0 @@
import yaml
import tempfile
from string import Template
import json
import time
import uuid
import utils
POD_CONF = """
metadata:
attempt: 1
name: nydus-sandbox
namespace: default
uid: ${uid}
log_directory: /tmp
linux:
security_context:
namespace_options:
network: 2
"""
# annotations:
# "io.containerd.osfeature": "nydus.remoteimage.v1"
CONTAINER_CONF = """
metadata:
name: ${container_name}
image:
image: ${image}
log_path: container.1.log
command: ["sh"]
"""
class Cri:
def __init__(self, runtime_endpoint, image_endpoint) -> None:
config = dict()
config["runtime-endpoint"] = f"unix://{runtime_endpoint}"
config["image-endpoint"] = f"unix://{image_endpoint}"
config["timeout"] = 10
config["debug"] = False
self._config = tempfile.NamedTemporaryFile(
mode="w+", suffix="crictl.config", delete=False
)
yaml.dump(config, self._config)
def run_container(
self,
image,
container_name,
):
container_config = tempfile.NamedTemporaryFile(
mode="w+", suffix="container.config.yaml", delete=True
)
pod_config = tempfile.NamedTemporaryFile(
mode="w+", suffix="pod.config.yaml", delete=True
)
print(pod_config.read())
_s = Template(CONTAINER_CONF).substitute(
image=image, container_name=container_name
)
container_config.write(_s)
container_config.flush()
pod_config.write(
Template(POD_CONF).substitute(
uid=uuid.uuid4(),
)
)
pod_config.flush()
ret, _ = utils.execute(
[
"crictl",
"--config",
self._config.name,
"run",
container_config.name,
pod_config.name,
],
print_err=True,
)
assert ret
def stop_rm_container(self, id):
cmd = [
"crictl",
"--config",
self._config.name,
"stop",
id,
]
ret, _ = utils.execute(cmd)
assert ret
cmd = [
"crictl",
"--config",
self._config.name,
"rm",
id,
]
ret, _ = utils.execute(cmd)
assert ret
def list_images(self):
cmd = [
"crictl",
"--config",
self._config.name,
"images",
"--output",
"json",
]
ret, out = utils.execute(cmd)
assert ret
images = json.loads(out)
return images["images"]
def remove_image(self, repo):
images = self.list_images()
for i in images:
# Example:
# {'id': 'sha256:cc6e5af55020252510374deecb0168fc7170b5621e03317cb7c4192949becb9a',
# 'repoTags': ['reg.docker.alibaba-inc.com/chge-nydus-test/busybox:latest_converted'], 'repoDigests': ['reg.docker.alibaba-inc.com/chge-nydus-test/busybox@sha256:07592f0848a6752de1b58f06b8194dbeaff1cb3314ab3225b6ab698abac1185d'], 'size': '998569', 'uid': None, 'username': ''}
if i["repoTags"][0] == repo:
id = i["id"]
cmd = [
"crictl",
"--config",
self._config.name,
"rmi",
id,
]
ret, _ = utils.execute(cmd)
assert ret
return True
assert False
return False
def check_container_status(self, name, timeout):
"""
{
"containers": [
{
"id": "4098985ed96655dbd43eef2d6502197598b72fe40cfec4cb77466aedf755807f",
"podSandboxId": "2ae536d3481130d8a47a05fb6ffeb303cb3d57b29e8744d3ffcbbc27377ece3d",
"metadata": {
"name": "nydus-container",
"attempt": 0
},
"image": {
"image": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql:latest_converted"
},
"imageRef": "sha256:68e06967547192d5eaf406a21ea39b3131f86e9dc8fb8b75e2437a1bde8d0aad",
"state": "CONTAINER_EXITED",
"createdAt": "1610018967168325132",
"labels": {
},
"annotations": {
}
}
]
}
---
{
"status": {
"id": "4098985ed96655dbd43eef2d6502197598b72fe40cfec4cb77466aedf755807f",
"metadata": {
"attempt": 0,
"name": "nydus-container"
},
"state": "CONTAINER_EXITED",
"createdAt": "2021-01-07T19:29:27.168325132+08:00",
"startedAt": "2021-01-07T19:29:28.172706527+08:00",
"finishedAt": "2021-01-07T19:29:32.882263863+08:00",
"exitCode": 0,
"image": {
"image": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql:latest_converted"
},
"imageRef": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql@sha256:ebadc23a8b2cbd468cb86ab5002dc85848e252de71cdc4002481f63a1d3c90be",
"reason": "Completed",
"message": "",
"labels": {},
"annotations": {},
"mounts": [],
"logPath": "/tmp/container.1.log"
},
"""
elapsed = 0
while elapsed <= timeout:
ps_cmd = [
"crictl",
"--config",
self._config.name,
"ps",
"-a",
"--output",
"json",
]
ret, out = utils.execute(
ps_cmd,
print_err=True,
)
assert ret
containers = json.loads(out)
for c in containers["containers"]:
# The container is found, no need to wait any longer
if c["metadata"]["name"] == name:
id = c["id"]
inspect_cmd = [
"crictl",
"--config",
self._config.name,
"inspect",
id,
]
ret, out = utils.execute(inspect_cmd)
assert ret
status = json.loads(out)
if status["status"]["exitCode"] == 0:
return id, True
else:
return None, False
time.sleep(1)
elapsed += 1
return None, False

View File

@ -1,56 +0,0 @@
from linux_command import LinuxCommand
import utils
import subprocess
class DdParam(LinuxCommand):
def __init__(self, command_name):
LinuxCommand.__init__(self, command_name)
self.param_name_prefix = ""
def bs(self, block_size):
return self.set_param("bs", block_size)
def input(self, input_file):
return self.set_param("if", input_file)
def output(self, output_file):
return self.set_param("of", output_file)
def count(self, count):
return self.set_param("count", count)
def iflag(self, iflag):
return self.set_param("iflag", iflag)
def skip(self, len):
return self.set_param("skip", len)
class DD:
"""
dd always tries to copy the entire file.
"""
def __init__(self):
self.dd_params = DdParam("dd")
def create_command(self):
return self.dd_params
def extend_command(self):
return self.dd_params
def __str__(self):
return str(self.dd_params)
def run(self):
ret, _ = utils.run(
str(self),
verbose=False,
wait=True,
shell=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT,
)
return ret

View File

@ -1,313 +0,0 @@
from utils import pushd
import os
from random import randint
import shutil
import logging
import random
import string
from fallocate import fallocate, FALLOC_FL_PUNCH_HOLE, FALLOC_FL_KEEP_SIZE
from utils import Size, Unit
import xattr
"""
Generate and distribute target files (regular, symlink, directory) and link files,
including files with holes (sparse files) and hardlinks.
1. Generate the directory tree structure first.
"""
CHINESE_TABLE = "搀掺蝉馋谗缠铲产阐颤昌猖场尝常长偿肠厂敞畅唱倡超抄钞朝嘲潮巢吵炒车扯撤掣彻澈郴臣辰尘晨忱沉\
愤粪丰封枫蜂峰锋风疯烽逢冯缝讽奉凤佛否夫敷肤孵扶拂辐幅氟符伏俘服浮涪福袱弗甫抚辅俯釜斧脯腑\
楔些歇蝎鞋协挟携邪斜胁谐写械卸蟹懈泄泻谢屑薪芯锌欣辛新忻心信衅星腥猩惺兴刑型形邢行醒幸杏性\
寅饮尹引隐印英樱婴鹰应缨莹萤营荧蝇迎赢盈影颖硬映哟拥佣臃痈庸雍踊蛹咏泳涌永恿勇用幽优悠忧尤\
庥庠庹庵庾庳赓廒廑廛廨廪膺忄忉忖忏怃忮怄忡忤忾怅怆忪忭忸怙怵怦怛怏怍怩怫怊怿怡恸恹恻恺恂恪"
def gb2312(length):
for i in range(0, length):
c = random.choice(CHINESE_TABLE)
yield c.encode("gb2312")
class Distributor:
def __init__(self, top_dir: str, levels: int, max_sub_directories: int):
self.top_dir = top_dir
self.levels = levels
self.max_sub_directories = max_sub_directories
# All files generated by this distributor, whether via `_put_single_file()`
# or `put_multiple_files()`, will be recorded in this list.
self.files = []
self.symlinks = []
self.dirs = []
self.hardlinks = {}
def _relative_path_to_top(self, path: str) -> str:
return os.path.relpath(path, start=self.top_dir)
def _generate_one_level(self, level, cur_dir):
dirs = []
with pushd(cur_dir):
# Each level has at least one child directory
for index in range(0, randint(1, self.max_sub_directories)):
d_name = f"DIR.{level}.{index}"
try:
d = os.mkdir(d_name)
except FileExistsError:
pass
dirs.append(d_name)
if level >= self.levels:
return
for d in dirs:
self._generate_one_level(level + 1, d)
# This is top level planted tree.
return dirs
def generate_tree(self):
"""DIR.LEVEL.INDEX"""
dirs = self._generate_one_level(0, self.top_dir)
self.planted_tree_root = dirs[:]
def _random_pos_dir(self):
level = randint(0, self.levels)
with pushd(os.path.join(self.top_dir, random.choice(self.planted_tree_root))):
while level:
files = os.listdir()
level -= 1
files = [f for f in files if os.path.isdir(f)]
if len(files) != 0:
next_level = files[randint(0, len(files) - 1)]
else:
break
os.chdir(next_level)
return os.getcwd()
def put_hardlinks(self, count):
def _create_new_source():
source_file = os.path.join(
self._random_pos_dir(), Distributor.generate_random_name(60)
)
fd = os.open(source_file, os.O_CREAT | os.O_RDWR)
os.write(fd, os.urandom(randint(0, 1024 * 1024 + 7)))
os.close(fd)
return source_file
source_file = _create_new_source()
self.hardlinks[source_file] = []
self.hardlink_aliases = []
for i in range(0, count):
if randint(0, 16) % 4 == 0:
source_file = _create_new_source()
self.hardlinks[source_file] = []
link = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_name(50, suffix="hardlink"),
)
logging.debug(link)
# TODO: `link` may be too long to link, so better to change directory first!
os.link(source_file, link)
self.hardlinks[source_file].append(self._relative_path_to_top(link))
self.hardlink_aliases.append(self._relative_path_to_top(link))
return self.hardlink_aliases[-count:]
def put_symlinks(self, count, chinese=False):
"""
Generate symlinks pointing to regular files or directories.
"""
def _create_new_source():
this_path = ""
if randint(0, 123) % 4 == 0:
self.put_directories(1)
this_path = self.dirs[-1]
del self.dirs[-1]
else:
_, this_path = self._put_single_file(
self._random_pos_dir(),
Size(randint(0, 100), Unit.KB),
chinese=chinese,
)
del self.files[-1]
return this_path
source_file = _create_new_source()
for i in range(0, count):
if randint(0, 12) % 3 == 0:
source_file = _create_new_source()
symlink = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_length_name(20, suffix="symlink"),
)
# XFS limits the symlink target path (stored within the symlink itself) to 1024 bytes.
if len(source_file) >= 1024:
continue
if randint(0, 12) % 5 == 0:
source_file = os.path.relpath(source_file, start=self.top_dir)
try:
os.symlink(source_file, symlink)
except FileExistsError as e:
# Sometimes, creating the symlink fails because a symlink with the same name already exists.
# This should rarely happen if `generate_random_length_name` is truly random.
logging.exception(e)
continue
if randint(0, 12) % 4 == 0:
try:
if os.path.isdir(source_file):
try:
os.rmdir(source_file)
except Exception:
pass
else:
os.unlink(source_file)
except FileNotFoundError:
pass
# Save symlink relative path so that we can tell which symlinks were put.
self.symlinks.append(self._relative_path_to_top(symlink))
return self.symlinks[-count:]
def put_directories(self, count):
for i in range(0, count):
dst_path = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_name(30, suffix="dir"),
)
# `dst_path` may have a very long name, so it is better to mkdir one component at a time
dst_relpath = os.path.relpath(dst_path, start=self.top_dir)
with pushd(self.top_dir):
for d in dst_relpath.split("/")[0:]:
try:
os.chdir(d)
except FileNotFoundError:
os.mkdir(d)
os.chdir(d)
self.dirs.append(os.path.relpath(dst_path, start=self.top_dir))
return self.dirs[-count:]
@staticmethod
def generate_random_name(length, suffix=None, chinese=False):
if chinese:
result_str = "".join([s.decode("gb2312") for s in gb2312(length)])
else:
letters = string.ascii_letters
result_str = "".join(random.choice(letters) for i in range(length))
if suffix is not None:
result_str += f".{suffix}"
return result_str
@staticmethod
def generate_random_length_name(max_length, suffix=None, chinese=False):
# Shrink max_length since the name has a suffix.
# Use max_length - 9 as the minimum length to reduce name conflicts.
len = randint((max_length - 9) // 2, max_length - 9)
return Distributor.generate_random_name(len, suffix, chinese)
def _put_single_file(
self,
parent_dir,
file_size: Size,
specified_name=None,
letters=False,
chinese=False,
name_len=32,
):
if specified_name is None:
name = Distributor.generate_random_length_name(
name_len, suffix="regular", chinese=chinese
)
else:
name = specified_name
this_path = os.path.join(parent_dir, name)
with pushd(parent_dir):
if chinese:
fd = os.open(name.encode("gb2312"), os.O_CREAT | os.O_RDWR)
else:
fd = os.open(name.encode("ascii"), os.O_CREAT | os.O_RDWR)
if file_size.B != 0:
left = file_size.B
logging.debug("Putting file %s", this_path)
while left:
length = Size(1, Unit.MB).B if Size(1, Unit.MB).B < left else left
if not letters:
left -= os.write(fd, os.urandom(length))
else:
picked_list = "".join(
random.choices(string.ascii_lowercase[1:4], k=length)
)
left -= os.write(fd, picked_list.encode())
os.close(fd)
self.files.append(self._relative_path_to_top(this_path))
return name, this_path
def put_single_file(self, file_size: Size, pos=None, name=None):
self._put_single_file(
self._random_pos_dir() if pos is None else pos,
file_size,
letters=True,
specified_name=name,
)
return self.files[-1]
def put_single_file_with_xattr(self, file_size: Size, kv, pos=None, name=None):
self._put_single_file(
self._random_pos_dir() if pos is None else pos,
file_size,
letters=True,
specified_name=name,
)
p = os.path.join(self.top_dir, self.files[-1])
xattr.setxattr(p, kv[0].encode(), kv[1].encode())
def put_multiple_files(self, count: int, max_size: Size):
for i in range(0, count):
cur_size = Size.from_B(randint(0, max_size.B))
self._put_single_file(self._random_pos_dir(), cur_size)
return self.files[-count:]
def put_multiple_chinese_files(self, count: int, max_size: Size):
for i in range(0, count):
cur_size = Size.from_B(randint(0, max_size.B))
self._put_single_file(self._random_pos_dir(), cur_size, chinese=True)
return self.files[-count:]
def put_multiple_empty_files(self, count):
for i in range(0, count):
self._put_single_file(self._random_pos_dir(), Size(0, Unit.Byte))
return self.files[-count:]
if __name__ == "__main__":
top_dir = "/mnt/gen_tree"
if os.path.exists(top_dir):
shutil.rmtree(top_dir)
try:
os.makedirs(top_dir, exist_ok=True)
except FileExistsError:
pass
dist = Distributor(top_dir, 2, 5)
dist.generate_tree()
print(dist._random_pos_dir())
dist.put_hardlinks(10)
Distributor.generate_random_name(2000, suffix="sym")
dist._put_single_file(top_dir, Size(100, Unit.MB))
dist.put_multiple_files(1000, Size(4, Unit.KB))

View File

@ -1,17 +0,0 @@
from utils import execute, logging_setup
class Erofs:
def __init__(self) -> None:
pass
def mount(self, fsid, mountpoint):
cmd = f"mount -t erofs -o fsid={fsid} none {mountpoint}"
self.mountpoint = mountpoint
r, _ = execute(cmd, shell=True)
assert r
def umount(self):
cmd = f"umount {self.mountpoint}"
r, _ = execute(cmd, shell=True)
assert r

View File

@ -1,111 +0,0 @@
import datetime
import utils
import json
import os
from types import SimpleNamespace as Namespace
from linux_command import LinuxCommand
class FioParam(LinuxCommand):
def __init__(self, fio, command_name):
LinuxCommand.__init__(self, command_name)
self.fio = fio
self.command_name = command_name
def block_size(self, size):
return self.set_param("blocksize", size)
def direct(self, value: bool = True):
return self.set_param("direct", value)
def size(self, size):
return self.set_param("size", size)
def io_mode(self, io_mode):
return self.set_param("io_mode", io_mode)
def ioengine(self, engine):
return self.set_param("ioengine", engine)
def filename(self, filename):
return self.set_param("filename", filename)
def read_write(self, readwrite):
return self.set_param("readwrite", readwrite)
def iodepth(self, iodepth):
return self.set_param("iodepth", iodepth)
def numjobs(self, jobs):
self.set_flags("group_reporting")
return self.set_param("numjobs", jobs)
class Fio:
def __init__(self):
self.jobs = []
self.base_cmd_params = FioParam(self, "fio")
self.global_cmd_params = FioParam(self, "fio")
def create_command(self, *pattern):
self.global_cmd_params.set_flags("group_reporting")
p = "_".join(pattern)
try:
os.mkdir("benchmark_reports")
except FileExistsError:
pass
self.fio_report_file = os.path.join(
"benchmark_reports",
f'fio_run_{p}_{datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%s")}',
)
self.base_cmd_params.set_param("output-format", "json").set_param(
"output", self.fio_report_file
)
return self.global_cmd_params
def expand_command(self):
return self.global_cmd_params
def __str__(self):
fio_prams = FioParam(self, "fio")
fio_prams.command_param_dict.update(self.base_cmd_params.command_param_dict)
fio_prams.command_param_dict.update(self.global_cmd_params.command_param_dict)
fio_prams.command_flags.extend(self.global_cmd_params.command_flags)
fio_prams.set_param("name", "fio")
command = str(fio_prams)
return command
def run(self):
ret, _ = utils.run(
str(self),
wait=True,
shell=True,
)
assert ret == 0
def get_result(self, title_line, *keys):
with open(self.fio_report_file) as f:
data = json.load(f, object_hook=lambda d: Namespace(**d))
if hasattr(data, "jobs"):
jobs = data.jobs
assert len(jobs) == 1
job = jobs[0]
print("")
result = f"""
{title_line}
block size: {getattr(data, 'global options').bs}
direct: {getattr(data, 'global options').direct}
ioengine: {getattr(data, 'global options').ioengine}
runtime: {job.read.runtime}
iops: {job.read.iops}
bw(KB/S): {job.read.bw}
latency/ms: min:{job.read.lat_ns.min/1e6}, max: {job.read.lat_ns.max/1e6}, mean: {job.read.lat_ns.mean/1e6}
"""
print(result)
return result

View File

@ -1,45 +0,0 @@
class LinuxCommand:
def __init__(self, command_name):
self.command_name = command_name
self.command_param_dict = {}
self.command_flags = []
self.command_name = command_name
self.param_name_prefix = "--"
self.param_separator = " "
self.param_value_prefix = " "
self.param_value_list_separator = ","
self.subcommand = None
def set_subcommand(self, subcommand):
self.subcommand = subcommand
return self
def set_param(self, key, val):
self.command_param_dict[key] = val
return self
def set_flags(self, *new_flag):
for f in new_flag:
self.command_flags.append(f)
return self
def remove_param(self, key):
try:
del self.command_param_dict[key]
except KeyError:
pass
def __str__(self):
if self.subcommand is not None:
command = self.command_name + " " + self.subcommand
else:
command = self.command_name
for key, value in self.command_param_dict.items():
command += (
f"{self.param_separator}{self.param_name_prefix}"
f"{key}{self.param_value_prefix}{value}"
)
for flag in self.command_flags:
command += f"{self.param_separator}{self.param_name_prefix}{flag}"
return command

View File

@ -1,338 +0,0 @@
import os
import shutil
from inspect import stack, getframeinfo
from containerd import Containerd
from snapshotter import Snapshotter
import utils
from stat import *
import time
import logging
import sys
import signal
import tempfile
import json
import platform
NYDUSD_BIN = "nydusd"
NYDUS_IMG_BIN = "nydus-image"
from conf import ANCHOR_PATH
class NydusAnchor:
"""
Test environment setup, like,
- location of test target executable
- path to directory for data verification by comparing digest.
- wrapper for the test IO engine.
"""
def __init__(self, path=None):
"""
:rootfs: An alias for bootstrap file.
:verify_dir: Source directory from which to create this test image.
"""
self.machine = platform.machine()
if path is None:
path = ANCHOR_PATH
try:
with open(path, "r") as f:
kwargs = json.load(f)
except FileNotFoundError:
logging.error("Please define your own anchor file! [anchor_conf.json]")
sys.exit(1)
self.workspace = kwargs.pop("workspace", ".")
# Path to be searched for nydus binaries
self.nydus_project = kwargs.pop("nydus_project")
# In case we want to build an image on top of an existing image.
# Create an image from this parent rootfs first.
# TODO: Better to use a different file system so as to have the same inode numbers.
registry_conf = kwargs.pop("registry")
self.registry_url = registry_conf["registry_url"]
self.registry_auth = registry_conf["registry_auth"]
self.registry_namespace = registry_conf["registry_namespace"]
try:
self.backend_proxy_url = registry_conf["backend_proxy_url"]
self.backend_proxy_blobs_dir = registry_conf["backend_proxy_blobs_dir"]
os.makedirs(self.backend_proxy_blobs_dir, exist_ok=True)
except KeyError:
pass
artifacts = kwargs.pop("artifacts")
self.containerd_bin = artifacts["containerd"]
try:
self.ossutil_bin = artifacts["ossutil_bin"]
except KeyError:
self.ossutil_bin = (
"framework/bin/ossutil64.x86"
if self.machine != "aarch64"
else "framework/bin/ossutil64.aarch64"
)
nydus_runtime_conf = kwargs.pop("nydus_runtime_conf")
self.log_level = nydus_runtime_conf["log_level"]
profile = nydus_runtime_conf["profile"]
self.fs_version = kwargs.pop("fs_version", 6)
try:
oss_conf = kwargs.pop("oss")
self.oss_ak_id = oss_conf["ak_id"]
self.oss_ak_secret = oss_conf["ak_secret"]
self.oss_bucket = oss_conf["bucket"]
self.oss_endpoint = oss_conf["endpoint"]
except KeyError:
pass
self.logging_file_path = kwargs.pop("logging_file")
self.logging_file = self.decide_logging_file()
self.dustbin = []
self.tmp_dirs = []
self.localfs_workdir = os.path.join(self.workspace, "localfs_workdir")
self.nydusify_work_dir = os.path.join(self.workspace, "nydusify_work_dir")
# Where to mount this rafs
self.mountpoint = os.path.join(self.workspace, "rafs_mnt")
# From which directory to build rafs image
self.blobcache_dir = os.path.join(self.workspace, "blobcache_dir")
self.overlayfs = os.path.join(self.workspace, "overlayfs_mnt")
self.source_dir = os.path.join(self.workspace, "gen_rootfs")
self.parent_rootfs = os.path.join(self.workspace, "parent_rootfs")
self.fscache_dir = os.path.join(self.workspace, "fscache")
os.makedirs(self.fscache_dir, exist_ok=True)
link_target = kwargs.pop("target")
if link_target == "gnu":
self.binary_release_dir = os.path.join(
self.nydus_project, "target/release"
)
elif link_target == "musl":
arch = platform.machine()
self.binary_release_dir = os.path.join(
self.nydus_project,
f"target/{arch}-unknown-linux-musl",
"release",
)
self.build_dir = os.path.join(self.nydus_project, "target/debug")
self.binary_debug_dir = os.path.join(self.nydus_project, "target/debug")
if profile == "release":
self.binary_dir = self.binary_release_dir
elif profile == "debug":
self.binary_dir = self.binary_debug_dir
else:
sys.exit(f"Unknown nydus runtime profile: {profile}")
self.nydusd_bin = os.path.join(self.binary_dir, NYDUSD_BIN)
self.image_bin = os.path.join(self.binary_dir, NYDUS_IMG_BIN)
self.nydusify_bin = os.path.join(
self.nydus_project, "contrib", "nydusify", "cmd", "nydusify"
)
self.snapshotter_bin = kwargs.pop(
"snapshotter",
os.path.join(
self.nydus_project,
"contrib",
"nydus-snapshotter",
"bin",
"containerd-nydus-grpc",
),
)
self.images_array = kwargs.pop("images")["images_array"]
try:
shutil.rmtree(self.blobcache_dir)
except FileNotFoundError:
pass
os.makedirs(self.blobcache_dir)
os.makedirs(self.mountpoint, exist_ok=True)
os.makedirs(self.overlayfs, exist_ok=True)
def put_dustbin(self, path):
self.dustbin.append(path)
def cleanup_dustbin(self):
for p in self.dustbin:
if isinstance(p, utils.ArtifactProcess):
p.shutdown()
else:
os.unlink(p)
def check_prerequisites(self):
assert os.path.exists(self.source_dir), "Verification directory does not exist!"
assert os.path.exists(self.blobcache_dir), "Blobcache directory does not exist!"
assert (
len(os.listdir(self.blobcache_dir)) == 0
), "Blobcache directory not empty!"
assert not os.path.ismount(self.mountpoint), "Mount point was already mounted"
def clear_blobcache(self):
try:
if len(os.listdir(self.blobcache_dir)) == 0:
return
# In some cases, the blob cache dir is temporarily mounted.
if os.path.ismount(self.blobcache_dir):
utils.execute(["umount", self.blobcache_dir])
shutil.rmtree(self.blobcache_dir)
logging.info("Cleared cache %s", self.blobcache_dir)
os.mkdir(self.blobcache_dir)
except Exception as exc:
print(exc)
def prepare_scratch_dir(self):
self.scratch_dir = os.path.join(
self.workspace,
os.path.basename(os.path.normpath(self.source_dir)) + "_scratch",
)
# We don't delete the scratch dir because it helps to analyze problems.
# But if another round of test trip begins, no need to keep it anymore.
if os.path.exists(self.scratch_dir):
shutil.rmtree(self.scratch_dir)
shutil.copytree(self.source_dir, self.scratch_dir, symlinks=True)
def prepare_scratch_parent_dir(self):
self.scratch_parent_dir = os.path.join(
self.workspace,
os.path.basename(os.path.normpath(self.parent_rootfs)) + "_scratch",
)
# We don't delete the scratch dir because it helps to analyze problems.
# But if another round of test trip begins, no need to keep it anymore.
if os.path.exists(self.scratch_parent_dir):
shutil.rmtree(self.scratch_parent_dir)
shutil.copytree(self.parent_rootfs, self.scratch_parent_dir, symlinks=True)
@staticmethod
def check_nydusd_health():
pid_list = utils.get_pid(NYDUSD_BIN)
if len(pid_list) == 1:
return True
else:
logging.error("Captured nydusd process %s", pid_list)
return False
@staticmethod
def capture_running_nydusd():
pid_list = utils.get_pid(NYDUSD_BIN)
if len(pid_list) != 0:
logging.info("Captured nydusd process %s", pid_list)
# Kill remaining nydusd processes so as not to affect the following cases.
# utils.kill_all_processes(NYDUSD_BIN, signal.SIGINT)
time.sleep(2)
return True
else:
return False
def mount_overlayfs(self, layers, base=os.getcwd()):
"""
We usually use overlayfs to act as a verifying dir. Some cases may scratch
the original source dir.
:source_dir: A directory acts on a layer of overlayfs, from which to build the image
:layers: tail item from layers is the bottom layer.
Cited:
```
Multiple lower layers
---------------------
Multiple lower layers can now be given using the the colon (":") as a
separator character between the directory names. For example:
mount -t overlay overlay -o lowerdir=/lower1:/lower2:/lower3 /merged
As the example shows, "upperdir=" and "workdir=" may be omitted. In
that case the overlay will be read-only.
The specified lower directories will be stacked beginning from the
rightmost one and going left. In the above example lower1 will be the
top, lower2 the middle and lower3 the bottom layer.
```
"""
handled_layers = [l.replace(":", "\\:") for l in layers]
if len(handled_layers) == 1:
self.sticky_lower_dir = tempfile.TemporaryDirectory(dir=self.workspace)
handled_layers.append(self.sticky_lower_dir.name)
layers_set = ":".join(handled_layers)
with utils.pushd(base):
cmd = [
"mount",
"-t",
"overlay",
"-o",
f"lowerdir={layers_set}",
"rafs_ci_overlay",
self.overlayfs,
]
ret, _ = utils.execute(cmd)
assert ret
def umount_overlayfs(self):
cmd = ["umount", self.overlayfs]
ret, _ = utils.execute(cmd)
assert ret
def decide_logging_file(self):
try:
p = os.environ["LOG_FILE"]
return open(p, "w+")
except KeyError:
if self.logging_file_path == "stdin":
return sys.stdin
elif self.logging_file_path == "stderr":
return sys.stderr
else:
return open(self.logging_file_path, "w+")
def check_fuse_conn(func):
last_conn_id = 0
print("last conn id %d" % last_conn_id)
def wrapped():
nonlocal last_conn_id
conn_id = func()
if last_conn_id != 0:
assert last_conn_id == conn_id
else:
# Remember the first observed connection id for later comparisons.
last_conn_id = conn_id
return conn_id
return wrapped
# @check_fuse_conn
def inspect_sys_fuse():
sys_fuse_path = "/sys/fs/fuse/connections"
try:
conns = os.listdir(sys_fuse_path)
frameinfo = getframeinfo(stack()[1][0])
logging.info(
"%d | %d fuse connections: %s" % (frameinfo.lineno, len(conns), conns)
)
conn_id = int(conns[0])
return conn_id
except Exception as exc:
logging.exception(exc)


@ -1,351 +0,0 @@
import logging
import subprocess
import tempfile
import utils
from nydus_anchor import NydusAnchor
import os
import json
import posixpath
from linux_command import LinuxCommand
import shutil
import tarfile
import re
class NydusifyParam(LinuxCommand):
def __init__(self, command_name):
super().__init__(command_name)
self.param_name_prefix = "--"
def source(self, source):
return self.set_param("source", source)
def target(self, target):
return self.set_param("target", target)
def nydus_image(self, nydus_image):
return self.set_param("nydus-image", nydus_image)
def work_dir(self, work_dir):
return self.set_param("work-dir", work_dir)
def fs_version(self, fs_version):
return self.set_param("fs-version", str(fs_version))
class Nydusify(LinuxCommand):
def __init__(self, anchor: NydusAnchor):
self.image_builder = anchor.image_bin
self.nydusify_bin = anchor.nydusify_bin
self.registry_url = anchor.registry_url
self.work_dir = anchor.nydusify_work_dir
self.anchor = anchor
# self.generate_auth_config(self.registry_url, anchor.registry_auth)
# os.environ["DOCKER_CONFIG"] = self.__temp_auths_config_dir.name
super().__init__(self.image_builder)
self.cmd = NydusifyParam(self.nydusify_bin)
self.cmd.nydus_image(self.image_builder).work_dir(self.work_dir)
def convert(self, source, suffix="_converted", target_ref=None, fs_version=5):
"""
A reference to image looks like registry/namespace/repo:tag
Before conversion begins, split the reference into those parts.
"""
# Notice: localhost:5000/busybox:latest
self.__repo = posixpath.basename(source).split(":")[0]
self.__converted_image = posixpath.basename(source) + (
suffix if suffix is not None else ""
)
self.__source = source
self.cmd.set_subcommand("convert")
if target_ref is None:
target_ref = posixpath.join(
self.anchor.registry_url,
self.anchor.registry_namespace,
self.__converted_image,
)
self.cmd.source(source).target(target_ref).fs_version(fs_version)
self.target_ref = target_ref
cmd = str(self.cmd)
with utils.timer(
f"### Rafs V{fs_version} Image conversion time including Pull and Push ###"
):
_, p = utils.run(
cmd,
False,
shell=True,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
p.wait()
assert p.returncode == 0
def check(self, source, suffix="_converted", target_ref=None, fs_version=5):
"""
A reference to image looks like registry/namespace/repo:tag
Before conversion begins, split the reference into those parts.
"""
# Notice: localhost:5000/busybox:latest
self.__repo = posixpath.basename(source).split(":")[0]
self.__converted_image = posixpath.basename(source) + (
suffix if suffix is not None else ""
)
self.__source = source
self.cmd.set_subcommand("check")
self.cmd.set_param("nydusd", self.anchor.nydusd_bin)
self.cmd.set_param("nydus-image", self.anchor.image_bin)
if target_ref is None:
target_ref = posixpath.join(
self.anchor.registry_url,
self.anchor.registry_namespace,
self.__converted_image,
)
self.cmd.source(source).target(target_ref).fs_version(fs_version)
self.target_ref = target_ref
cmd = str(self.cmd)
with utils.timer("### Image Check Duration ###"):
_, p = utils.run(
cmd,
False,
shell=True,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
p.wait()
assert p.returncode == 0
def docker_v2(self):
self.cmd.set_flags("docker-v2-format")
return self
def force_push(self):
self.cmd.set_flags("backend-force-push")
return self
def platform(self, p):
self.cmd.set_param("platform", p)
return self
def chunk_dict(self, chunk_dict_arg):
self.cmd.set_param("chunk-dict", chunk_dict_arg)
return self
def with_new_work_dir(self, work_dir):
self.work_dir = work_dir
self.cmd.set_param("work-dir", work_dir)
return self
def enable_multiplatfrom(self, enable: bool):
if enable:
self.cmd.set_flags("multi-platform")
return self
def build_cache_ref(self, ref):
self.cmd.set_param("build-cache", ref)
return self
def backend_type(self, type, oss_object_prefix=None, filed=False):
config = {
"endpoint": self.anchor.oss_endpoint,
"access_key_id": self.anchor.oss_ak_id,
"access_key_secret": self.anchor.oss_ak_secret,
"bucket_name": self.anchor.oss_bucket,
}
if oss_object_prefix is not None:
config["object_prefix"] = oss_object_prefix
self.cmd.set_param("backend-type", type)
if filed:
with open("oss_conf.json", "w") as f:
json.dump(config, f)
self.cmd.set_param("backend-config-file", "oss_conf.json")
else:
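# Double-encode the config so the JSON string survives shell quoting when the command line is run with shell=True.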
self.cmd.set_param("backend-config", json.dumps(json.dumps(config)))
return self
def nydus_image_output(self):
with utils.pushd(os.path.join(self.work_dir, "bootstraps")):
outputs = [o for o in os.listdir() if re.match(r".*json$", o) is not None]
outputs.sort(key=lambda x: int(x.split("-")[0]))
with open(outputs[0], "r") as f:
return json.load(f)
@property
def original_repo(self):
return self.__repo
@property
def converted_repo(self):
return posixpath.join(self.anchor.registry_namespace, self.__repo)
@property
def converted_image(self):
return posixpath.join(
self.registry_url, self.anchor.registry_namespace, self.__converted_image
)
def locate_bootstrap(self):
bootstraps_dir = os.path.join(self.work_dir, "bootstraps")
with utils.pushd(bootstraps_dir):
each_layers = os.listdir()
if len(each_layers) == 0:
return None
each_layers = [l.split("-") for l in each_layers]
each_layers.sort(key=lambda x: int(x[0]))
return os.path.join(bootstraps_dir, "-".join(each_layers[-1]))
def generate_auth_config(self, registry_url, auth):
auths = {"auths": {registry_url: {"auth": auth}}}
self.__temp_auths_config_dir = tempfile.TemporaryDirectory()
self.auths_config = os.path.join(
self.__temp_auths_config_dir.name, "config.json"
)
with open(self.auths_config, "w+") as f:
json.dump(auths, f)
f.flush()
def extract_source_layers_names_and_download(self, arch="amd64"):
skopeo = utils.Skopeo()
manifest, digest = skopeo.inspect(self.__source, image_arch=arch)
layers = [l["digest"] for l in manifest["layers"]]
# trimmed_layers = [os.path.join(self.work_dir, self.__source, l) for l in layers]
# trimmed_layers.reverse()
layers.reverse()
skopeo.copy_to_local(
self.__source,
layers,
os.path.join(self.work_dir, self.__source),
resource_digest=digest,
)
return layers, os.path.join(self.work_dir, self.__source)
def extract_converted_layers_names(self, arch="amd64"):
skopeo = utils.Skopeo()
manifest, _ = skopeo.inspect(
self.target_ref,
tls_verify=False,
features="nydus.remoteimage.v1",
image_arch=arch,
)
layers = [l["digest"] for l in manifest["layers"]]
layers.reverse()
return layers
def pull_bootstrap(self, downloaded_dir, bootstrap_name, arch="amd64"):
"""
Nydusify converts an OCI image to nydus format and pushes the nydus image manifest to the registry,
where the manifest belongs to a manifest index.
"""
skopeo = utils.Skopeo()
nydus_manifest, _ = skopeo.inspect(
self.target_ref,
tls_verify=False,
features="nydus.remoteimage.v1",
image_arch=arch,
)
layers = nydus_manifest["layers"]
for l in layers:
if l["mediaType"] == "application/vnd.docker.image.rootfs.diff.tar.gzip":
bootstrap_digest = l["digest"]
import requests
# Currently, we can not handle auth
# OCI distribution spec: /v2/<name>/blobs/<digest>
os.makedirs(downloaded_dir, exist_ok=True)
reader = requests.get(
f"http://{self.registry_url}/v2/{self.anchor.registry_namespace}/{self.original_repo}/blobs/{bootstrap_digest}",
stream=True,
)
with utils.pushd(downloaded_dir):
with open("image.gzip", "wb") as w:
shutil.copyfileobj(reader.raw, w)
with tarfile.open("image.gzip", "r:gz") as tar_gz:
def is_within_directory(directory, target):
abs_directory = os.path.abspath(directory)
abs_target = os.path.abspath(target)
prefix = os.path.commonprefix([abs_directory, abs_target])
return prefix == abs_directory
def safe_extract(tar, path=".", members=None, *, numeric_owner=False):
for member in tar.getmembers():
member_path = os.path.join(path, member.name)
if not is_within_directory(path, member_path):
raise Exception("Attempted Path Traversal in Tar File")
tar.extractall(path, members, numeric_owner=numeric_owner)
safe_extract(tar_gz)
os.rename("image/image.boot", bootstrap_name)
os.remove("image.gzip")
return os.path.join(downloaded_dir, bootstrap_name)
def pull_config(self, image, arch="amd64"):
"""
Nydusify converts an OCI image to nydus format and pushes the nydus image manifest to the registry,
where the manifest belongs to a manifest index.
"""
skopeo = utils.Skopeo()
nydus_manifest, digest = skopeo.inspect(
image, tls_verify=False, image_arch=arch
)
import requests
# Currently, we can not handle auth
# OCI distribution spec: /v2/<name>/manifests/<digest>
reader = requests.get(
f"http://{self.registry_url}/v2/{self.original_repo}/manifests/{digest}",
stream=True,
)
manifest = json.load(reader.raw)
config_digest = manifest["config"]["digest"]
reader = requests.get(
f"http://{self.registry_url}/v2/{self.original_repo}/blobs/{config_digest}",
stream=True,
)
config = json.load(reader.raw)
return config
def find_nydus_image(self, image, arch):
skopeo = utils.Skopeo()
nydus_manifest, digest = skopeo.inspect(
image, tls_verify=False, image_arch=arch, features="nydus.remoteimage.v1"
)
assert nydus_manifest is not None
def get_build_cache_records(self, ref):
skopeo = utils.Skopeo()
build_cache_records, _ = skopeo.inspect(ref, tls_verify=False)
c = json.dumps(build_cache_records, indent=4, sort_keys=False)
logging.info("build cache: %s", c)
records = build_cache_records["layers"]
return records


@ -1,107 +0,0 @@
import tempfile
from string import Template
import logging
import utils
OSS_CONFIG_TEMPLATE = """
[Credentials]
language=EN
endpoint=${endpoint}
accessKeyID=${ak}
accessKeySecret=${ak_secret}
"""
class OssHelper:
def __init__(self, util, endpoint, bucket, ak_id, ak_secret, prefix=None):
oss_conf = tempfile.NamedTemporaryFile(mode="w+", suffix="oss.conf")
items = {
"endpoint": endpoint,
"ak": ak_id,
"ak_secret": ak_secret,
}
template = Template(OSS_CONFIG_TEMPLATE)
_s = template.substitute(**items)
oss_conf.write(_s)
oss_conf.flush()
self.util = util
self.bucket = bucket
self.conf_wrapper = oss_conf
self.conf_file = oss_conf.name
self.prefix = prefix
self.path = (
f"oss://{self.bucket}/{self.prefix}"
if self.prefix is not None
else f"oss://{self.bucket}/"
)
def upload(self, src, dst, force=False):
if not self.stat(dst) or force:
cmd = [
self.util,
"--config-file",
self.conf_file,
"-f",
"cp",
src,
f"{self.path}{dst}",
]
ret, _ = utils.execute(cmd, print_output=True)
assert ret
if ret:
logging.info("Object %s is uploaded", dst)
def download(self, src, dst):
cmd = [
self.util,
"--config-file",
self.conf_file,
"cp",
"-f",
f"{self.path}{src}",
dst,
]
ret, _ = utils.execute(cmd, print_cmd=True)
if ret:
logging.info("Download %s ", src)
def rm(self, object):
cmd = [
self.util,
"rm",
"--config-file",
self.conf_file,
f"{self.path}{object}",
]
ret, _ = utils.execute(cmd, print_cmd=True, print_output=False)
assert ret
if ret:
logging.info("Object %s is removed from oss", object)
def stat(self, object):
cmd = [
self.util,
"--config-file",
self.conf_file,
"stat",
f"{self.path}{object}",
]
ret, _ = utils.execute(
cmd, print_cmd=False, print_output=False, print_err=False
)
if ret:
logging.info("Object %s already uploaded", object)
else:
logging.warning(
"Object %s was not uploaded yet",
object,
)
return ret
def list(self):
cmd = [self.util, "--config-file", self.conf_file, "ls", self.path]
ret, out = utils.execute(cmd, print_cmd=True, print_output=True, print_err=True)
print(out)


@ -1,816 +0,0 @@
import shutil
import utils
import os
import time
import enum
import posixpath
from linux_command import LinuxCommand
import logging
from types import SimpleNamespace as Namespace
import json
import copy
import hashlib
import contextlib
import subprocess
import tempfile
import pytest
from nydus_anchor import NydusAnchor
from linux_command import LinuxCommand
from utils import Size, Unit
from whiteout import WhiteoutSpec
from oss import OssHelper
from backend_proxy import BackendProxy
class Backend(enum.Enum):
OSS = "oss"
REGISTRY = "registry"
LOCALFS = "localfs"
BACKEND_PROXY = "backend_proxy"
def __str__(self):
return self.value
class Compressor(enum.Enum):
NONE = "none"
LZ4_BLOCK = "lz4_block"
GZIP = "gzip"
ZSTD = "zstd"
def __str__(self):
return self.value
class RafsConf:
"""Generate nydusd working configuration file.
A `registry` backend example:
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "localhost:5000",
"repo": "busybox"
}
},'
"mode": "direct",
"digest_validate": false
}
}
"""
def __init__(self, anchor: NydusAnchor, image: "RafsImage" = None):
self.__conf_file_wrapper = tempfile.NamedTemporaryFile(
mode="w+", suffix="rafs.config"
)
self.anchor = anchor
self.rafs_image = image
self._rafs_conf_default = {
"device": {
"backend": {
"type": "oss",
"config": {},
}
},
"mode": os.getenv("PREFERRED_MODE", "direct"),
"iostats_files": False,
"fs_prefetch": {"enable": False},
}
self._device_conf = json.loads(
json.dumps(self._rafs_conf_default), object_hook=lambda d: Namespace(**d)
)
self.device_conf = utils.object_to_dict(copy.deepcopy(self._device_conf))
def path(self):
return self.__conf_file_wrapper.name
def set_rafs_backend(self, backend_type, **kwargs):
b = str(backend_type)
self._configure_rafs("device.backend.type", b)
if backend_type == Backend.REGISTRY:
# Manager like nydus-snapshotter can fill the repo field, so we do nothing here.
if "repo" in kwargs:
self._configure_rafs(
"device.backend.config.repo",
posixpath.join(self.anchor.registry_namespace, kwargs.pop("repo")),
)
self._configure_rafs(
"device.backend.config.scheme",
kwargs["scheme"] if "scheme" in kwargs else "http",
)
self._configure_rafs("device.backend.config.host", self.anchor.registry_url)
self._configure_rafs(
"device.backend.config.auth", self.anchor.registry_auth
)
if backend_type == Backend.OSS:
if "prefix" in kwargs:
self._configure_rafs(
"device.backend.config.object_prefix", kwargs.pop("prefix")
)
self._configure_rafs(
"device.backend.config.endpoint", self.anchor.oss_endpoint
)
self._configure_rafs(
"device.backend.config.access_key_id", self.anchor.oss_ak_id
)
self._configure_rafs(
"device.backend.config.access_key_secret", self.anchor.oss_ak_secret
)
self._configure_rafs(
"device.backend.config.bucket_name", self.anchor.oss_bucket
)
if backend_type == Backend.BACKEND_PROXY:
self._configure_rafs("device.backend.type", "registry")
self._configure_rafs(
"device.backend.config.scheme",
"http",
)
self._configure_rafs("device.backend.config.repo", "nydus")
self._configure_rafs(
"device.backend.config.host", self.anchor.backend_proxy_url
)
if backend_type == Backend.LOCALFS:
if "image" in kwargs:
self._configure_rafs(
"device.backend.config.blob_file", kwargs.pop("image").localfs_backing_blob
)
else:
self._configure_rafs(
"device.backend.config.dir", self.anchor.localfs_workdir
)
return self
def get_rafs_backend(self):
return self._device_conf.device.backend.type
def set_registry_repo(self, repo):
self._configure_rafs("device.backend.config.repo", repo)
def _configure_rafs(self, k: str, v):
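# Assign a value to a dotted config path (e.g. "device.backend.type") on the Namespace tree.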
exec("self._device_conf." + k + "=v")
def enable_files_iostats(self):
self._device_conf.iostats_files = True
return self
def enable_latest_read_files(self):
self._device_conf.latest_read_files = True
return self
def enable_access_pattern(self):
self._device_conf.access_pattern = True
return self
def enable_rafs_blobcache(self, is_compressed=False, work_dir=None):
self._device_conf.device.cache = Namespace(
type="blobcache",
config=Namespace(
work_dir=self.anchor.blobcache_dir if work_dir is None else work_dir
),
compressed=is_compressed,
)
return self
def enable_fs_prefetch(
self,
threads_count=8,
merging_size=128 * 1024,
bandwidth_rate=0,
prefetch_all=False,
):
self._configure_rafs("fs_prefetch.enable", True)
self._configure_rafs("fs_prefetch.threads_count", threads_count)
self._configure_rafs("fs_prefetch.merging_size", merging_size)
self._configure_rafs("fs_prefetch.bandwidth_rate", bandwidth_rate)
self._configure_rafs("fs_prefetch.prefetch_all", prefetch_all)
return self
def enable_validation(self):
if int(self.anchor.fs_version) == 6:
return self
self._configure_rafs("digest_validate", True)
return self
def amplify_io(self, size):
self._configure_rafs("amplify_io", size)
return self
def rafs_mem_mode(self, v):
self._configure_rafs("mode", v)
def enable_xattr(self):
self._configure_rafs("enable_xattr", True)
return self
def dump_rafs_conf(self):
# In case the conf is dumped more than once
if int(self.anchor.fs_version) == 6:
logging.warning("Rafs v6 must enable blobcache")
self.enable_rafs_blobcache()
self.__conf_file_wrapper.truncate(0)
self.__conf_file_wrapper.seek(0)
logging.info("Current rafs metadata mode *%s*", self._rafs_conf_default["mode"])
self.device_conf = utils.object_to_dict(copy.deepcopy(self._device_conf))
json.dump(self.device_conf, self.__conf_file_wrapper)
self.__conf_file_wrapper.flush()
class RafsImage(LinuxCommand):
def __init__(
self,
anchor: NydusAnchor,
source,
bootstrap_name=None,
blob_name=None,
compressor=None,
clear_from_oss=True,
):
"""
:rootfs: A plain directory from which to build rafs images(bootstrap and blob).
:bootstrap_name: Name the generated test purpose bootstrap file.
:blob_prefix: Generally, a sha256 string follows this prefix.
:opts: Specify extra build options.
:parent_image: Associate a parent image which will be created ahead of time if necessary.
A rebuilt image tries to reuse block mapping info from the parent image (bootstrap) if
the same block already resides in the parent image, which means the new blob file will
not contain that block again.
"""
self.__rootfs = source
self.bootstrap_name = (
bootstrap_name
if bootstrap_name is not None
else tempfile.NamedTemporaryFile(suffix="bootstrap").name
)
# The file name of blob file locally.
self.blob_name = (
blob_name
if blob_name is not None
else tempfile.NamedTemporaryFile(suffix="blob").name
)
# blob_id identifies the blob residing in OSS and determines how IO requests reach the backend.
self.blob_id = None
self.opts = ""
self.test_dir = os.getcwd()
self.anchor = anchor
LinuxCommand.__init__(self, anchor.image_bin)
self.param_value_prefix = " "
self.clear_from_oss = False
self.created = False
self.compressor = compressor
self.clear_from_oss = clear_from_oss
self.backend_type = None
# self.blob_abs_path = tempfile.TemporaryDirectory(
# "blob", dir=self.anchor.workspace
# ).name
self.blob_abs_path = tempfile.NamedTemporaryFile(
prefix="blob", dir=self.anchor.workspace
).name
def rootfs(self):
return self.__rootfs
def _tweak_build_command(self):
"""
Add more options into command line per as different test case configuration.
"""
for key, value in self.command_param_dict.items():
self.opts += (
f"{self.param_separator}{self.param_name_prefix}"
f"{key}{self.param_value_prefix}{value}"
)
for flag in self.command_flags:
self.opts += f"{self.param_separator}{self.param_name_prefix}{flag}"
def set_backend(self, type: Backend, **kwargs):
self.backend_type = type
if type == Backend.LOCALFS:
if not os.path.exists(self.anchor.localfs_workdir):
os.mkdir(self.anchor.localfs_workdir)
self.set_param("blob-dir", self.anchor.localfs_workdir)
return self
elif type == Backend.OSS:
self.set_param("blob", self.blob_abs_path)
prefix = kwargs.pop("prefix", None)
self.oss_helper = OssHelper(
self.anchor.ossutil_bin,
self.anchor.oss_endpoint,
self.anchor.oss_bucket,
self.anchor.oss_ak_id,
self.anchor.oss_ak_secret,
prefix,
)
elif self.backend_type == Backend.BACKEND_PROXY:
self.set_param("blob", self.blob_abs_path)
elif type == Backend.REGISTRY:
# Let nydusify upload blob from the path, which is an intermediate file
self.set_param("blob", self.blob_abs_path)
pass
return self
def create_image(
self,
image_bin=None,
parent_image=None,
clear_from_oss=True,
oss_uploader="util",
compressor=None,
prefetch_policy=None,
prefetch_files="",
from_stargz=False,
fs_version=None,
disable_check=False,
chunk_size=None,
) -> "RafsImage":
"""
:layers: Create an image on top of an existing one
:oss_uploader: One of ['util', 'builder', 'none']. Let the image builder itself upload the blob to OSS or use a third-party oss util
"""
self.clear_from_oss = clear_from_oss
self.oss_uploader = oss_uploader
self.compressor = compressor
self.parent_image = parent_image
assert oss_uploader in ("util", "builder", "none")
if prefetch_policy is not None:
self.set_param("prefetch-policy", prefetch_policy)
self.set_param("log-level", self.anchor.log_level)
if disable_check:
self.set_flags("disable-check")
if fs_version is not None:
self.set_param("fs-version", fs_version)
else:
self.set_param("fs-version", str(self.anchor.fs_version))
if self.compressor is not None:
self.set_param("compressor", str(self.compressor))
if chunk_size is not None:
self.set_param("chunk-size", str(hex(chunk_size)))
builder_output_json = tempfile.NamedTemporaryFile("w+", suffix="output.json")
self.set_param("output-json", builder_output_json.name)
builder_output_json.flush()
# In order to support specify different versions of nydus image tool
if image_bin is None:
image_bin = self.anchor.image_bin
# Once it's a layered image test, create test parent layer first.
# TODO: Perhaps, should not create parent together so we can have
# images with different flags and opts
if self.parent_image is not None:
self.set_param("parent-bootstrap", self.parent_image.bootstrap_name)
if from_stargz:
self.set_param("source-type", "stargz_index")
# Just before beginning building image, tweak building parameters
self._tweak_build_command()
cmd = f"{image_bin} create --bootstrap {self.bootstrap_name} {self.opts} {self.__rootfs}"
with utils.timer("Basic rafs image creation time"):
_, p = utils.run(
cmd,
False,
shell=True,
stdin=subprocess.PIPE,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
if prefetch_policy is not None:
p.communicate(input=prefetch_files)
p.wait()
assert p.returncode == 0
assert os.path.exists(os.path.join(self.test_dir, self.bootstrap_name))
self.created = True
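# The image builder records generated blob IDs in output-json; the last entry is the blob just built.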
self.blob_id = json.load(builder_output_json)["blobs"][-1]
logging.info("Generated blob id %s", self.blob_id)
self.bootstrap_path = os.path.abspath(self.bootstrap_name)
if self.backend_type == Backend.OSS:
# self.blob_id = self.calc_blob_sha256(self.blob_abs_path)
# nydus-rs image builder can also upload image itself.
if self.oss_uploader == "util":
self.oss_helper.upload(self.blob_abs_path, self.blob_id)
elif self.backend_type == Backend.BACKEND_PROXY:
shutil.copy(
self.blob_abs_path,
os.path.join(self.anchor.backend_proxy_blobs_dir, self.blob_id),
)
elif self.backend_type == Backend.LOCALFS:
self.localfs_backing_blob = os.path.join(self.anchor.localfs_workdir, self.blob_id)
self.anchor.put_dustbin(self.bootstrap_name)
# Only oss has a temporary place to hold blob
try:
self.anchor.put_dustbin(self.blob_abs_path)
except AttributeError:
pass
try:
self.anchor.put_dustbin(self.localfs_backing_blob)
except AttributeError:
pass
if self.oss_uploader == "util":
self.dump_image_summary()
return self
def whiteout_spec(self, spec: WhiteoutSpec):
self.set_param("whiteout-spec", str(spec))
return self
def clean_up(self):
# In case image was not successfully created.
if hasattr(self, "bootstrap_path"):
os.unlink(self.bootstrap_path)
if hasattr(self, "oss_blob_abs_path"):
os.unlink(self.blob_abs_path)
if hasattr(self, "localfs_backing_blob"):
# Backing blob may already be put into dustbin.
try:
os.unlink(self.localfs_backing_blob)
except FileNotFoundError:
pass
try:
os.unlink(self.blob_abs_path)
except FileNotFoundError:
pass
except AttributeError:
# In case that test rootfs is not successfully scratched.
pass
try:
os.unlink(self.parent_blob)
os.unlink(self.parent_bootstrap)
except FileNotFoundError:
pass
except AttributeError:
pass
try:
if self.clear_from_oss and self.backend_type == Backend.OSS:
self.oss_helper.rm(self.blob_id)
except AttributeError:
pass
@staticmethod
def calc_blob_sha256(blob):
"""Example: blob id: sha256:a810724c8b2cc9bd2a6fa66d92ced9b429120017c7cf2ef61dfacdab45fa45ca"""
# We calculate the blob sha256 ourselves.
sha256 = hashlib.sha256()
with open(blob, "rb") as f:
for block in iter(lambda: f.read(4096), b""):
sha256.update(block)
return sha256.hexdigest()
def dump_image_summary(self):
# Summary dumping is currently disabled; the early return skips the logging below.
return
logging.info(
f"""Image summary:\t
blob: {self.blob_name}\t
bootstrap: {self.bootstrap_name}\t
blob_sha256: {self.blob_id}\t
rootfs: {self.rootfs()}\t
parent_rootfs: {self.parent_image.rootfs() if self.parent_image is not None else 'Not layered image'}
compressor: {self.compressor}\t
blob_size: {os.stat(self.blob_abs_path).st_size//1024}KB, {os.stat(self.blob_abs_path).st_size}Bytes
"""
)
class RafsMountParam(LinuxCommand):
"""
Example:
nydusd --config config.json --bootstrap bs.test --sock \
vhost-user-fs.sock --apisock test_api --log-level trace
"""
def __init__(self, command_name):
LinuxCommand.__init__(self, command_name)
self.param_name_prefix = "--"
def bootstrap(self, bootstrap_file):
return self.set_param("bootstrap", bootstrap_file)
def config(self, config_file):
return self.set_param("config", config_file)
def sock(self, vhost_user_sock):
return self.set_param("sock", vhost_user_sock)
def log_level(self, log_level):
return self.set_param("log-level", log_level)
def mountpoint(self, path):
return self.set_param("mountpoint", path)
class NydusDaemon(utils.ArtifactProcess):
def __init__(
self,
anchor: NydusAnchor,
image: RafsImage,
conf: RafsConf,
with_defaults=True,
bin=None,
mode="fuse",
):
"""Start up nydusd and mount rafs.
:image: If image is `None`, then no `--bootstrap` will be passed to nydusd.
In this case, we have to use the API to mount rafs.
"""
anchor.nydusd = self # So pytest has a chance to clean up dirties.
self.anchor = anchor
self.rafs_image = image # Associate with a rafs image to boot up.
self.conf: RafsConf = conf
self.mountpoint = anchor.mountpoint # To which point nydus will mount
self.param_value_prefix = " "
self.params = RafsMountParam(anchor.nydusd_bin if bin is None else bin)
self.params.set_subcommand(mode)
if with_defaults:
self._set_default_mount_param()
def __str__(self):
return str(self.params)
def __call__(self):
return self.params
def _set_default_mount_param(self):
# Set default part
self.apisock("api_sock").log_level(self.anchor.log_level)
if self.conf is not None:
self.params.mountpoint(self.mountpoint).config(self.conf.path())
if self.rafs_image is not None:
self.params.bootstrap(self.rafs_image.bootstrap_path)
def _wait_for_mount(self, test_fn=os.path.ismount):
elapsed = 0
while elapsed < 300:
if test_fn(self.mountpoint):
return True
if self.p.poll() is not None:
pytest.fail("file system process terminated prematurely")
elapsed += 1
time.sleep(0.01)
pytest.fail("mountpoint failed to come up")
def thread_num(self, num):
self.params.set_param("thread-num", str(num))
return self
def fscache_thread_num(self, num):
self.params.set_param("fscache-threads", str(num))
return self
def set_fscache(self):
self.params.set_param("fscache", self.anchor.fscache_dir)
return self
def log_level(self, level):
self.params.log_level(level)
return self
def prefetch_files(self, file_path: str):
self.params.set_param("prefetch-files", file_path)
return self
def shared_dir(self, shared_dir):
self.params.set_param("shared-dir", shared_dir)
return self
def set_mountpoint(self, mp):
self.params.set_param("mountpoint", mp)
self.mountpoint = mp
return self
def supervisor(self, path):
self.params.set_param("supervisor", path)
return self
def id(self, daemon_id):
self.params.set_param("id", daemon_id)
return self
def upgrade(self):
self.params.set_flags("upgrade")
return self
def failover_policy(self, p):
self.params.set_param("failover-policy", p)
return self
def apisock(self, apisock):
self.params.set_param("apisock", apisock)
self.__apisock = apisock
self.anchor.put_dustbin(apisock)
return self
def get_apisock(self):
return self.__apisock
def bootstrap(self, b):
self.params.set_param("bootstrap", b)
return self
def mount(self, limited_mem=False, wait_mount=True, dump_config=True):
"""
:limited_mem: Unit is KB, limit nydusd process virtual memory usage so as to
inject some faults.
"""
cmd = str(self).split()
shell = False
self.anchor.checker_sock = self.get_apisock()
if dump_config and self.conf is not None:
self.conf.dump_rafs_conf()
if isinstance(limited_mem, Size):
limit_kb = limited_mem.B // Size(1, Unit.KB).B
# `ulimit` is a shell builtin, so fall back to a single shell command string.
cmd = f"ulimit -v {limit_kb}; {str(self)}"
shell = True
_, p = utils.run(
cmd,
False,
shell=shell,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
self.p = p
if wait_mount:
self._wait_for_mount()
return self
def start(self):
cmd = str(self).split()
_, p = utils.run(
cmd,
False,
shell=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
self.p = p
return self
def wait_mount(self):
self._wait_for_mount()
@contextlib.contextmanager
def automatic_mount_umount(self):
self.mount()
yield
self.umount()
def umount(self):
"""
Umount is sometimes invoked during teardown. So it can't assert.
"""
self._catcher_dead = True
ret, _ = utils.execute(["umount", "-l", self.mountpoint], print_output=True)
assert ret
# self.p.wait()
# assert self.p.returncode == 0
def is_mounted(self):
def _custom(mountpoint):
# Alternative check that scans /proc/mounts; currently unused in favor of os.path.ismount.
_, output = utils.execute(
["cat", "/proc/mounts"], print_output=False, print_cmd=False
)
mounts = output.split("\n")
for m in mounts:
if mountpoint in m:
return True
return False
check_fn = os.path.ismount
return check_fn(self.mountpoint)
def shutdown(self):
if self.is_mounted():
self.umount()
logging.error("shutting down nydusd")
self.p.terminate()
self.p.wait()
assert self.p.returncode == 0
BLOB_CONF_TEMPLATE = """
{
"type": "bootstrap",
"id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"domain_id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"config": {
"id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"backend_type": "registry",
"backend_config": {
"readahead": false,
"host": "hub.byted.org",
"repo": "gechangwei/java",
"auth": "",
"scheme": "http",
"proxy": {
"fallback": false
}
},
"cache_type": "fscache",
"cache_config": {
"work_dir": "/var/lib/containerd-nydus-grpc/snapshots/3754/fs"
},
"metadata_path": "/var/lib/containerd-nydus-grpc/snapshots/3754/fs/image/image.boot"
},
"fs_prefetch": {
"enable": false,
"prefetch_all": false,
"threads_count": 0,
"merging_size": 0,
"bandwidth_rate": 0
}
}
"""
class BlobEntryConf:
def __init__(self, anchor) -> None:
self.conf_base = json.loads(
BLOB_CONF_TEMPLATE, object_hook=lambda x: Namespace(**x)
)
self.anchor = anchor
self.conf_base.config.cache_config.work_dir = self.anchor.blobcache_dir
def set_type(self, t):
self.conf_base.type = t
return self
def set_repo(self, repo):
self.conf_base.config.repo = repo
return self
def set_metadata_path(self, path):
self.conf_base.config.metadata_path = path
return self
def set_fsid(self, fsid):
self.conf_base.id = fsid
self.conf_base.domain_id = fsid
self.conf_base.config.id = fsid
return self
def set_backend(self):
self.conf_base.config.backend_config.host = self.anchor.backend_proxy_url
self.conf_base.config.backend_config.repo = "nydus"
return self
def set_prefetch(self, threads_cnt=4):
self.conf_base.fs_prefetch.enable = True
self.conf_base.fs_prefetch.prefetch_all = True
self.conf_base.fs_prefetch.threads_count = threads_cnt
return self
def dumps(self):
return json.dumps(self.conf_base, default=vars)


@ -1,59 +0,0 @@
import os
import tempfile
import utils
class Snapshotter(utils.ArtifactProcess):
def __init__(self, anchor: "NydusAnchor") -> None:
self.anchor = anchor
self.snapshotter_bin = anchor.snapshotter_bin
self.__sock = tempfile.NamedTemporaryFile(suffix="snapshotter.sock")
self.flags = []
def sock(self):
return self.__sock.name
def set_root(self, dir):
self.root = os.path.join(dir, "io.containerd.snapshotter.v1.nydus")
def cache_dir(self):
return os.path.join(self.root, "cache")
def run(self, rafs_conf: os.PathLike):
cmd = [
self.snapshotter_bin,
"--nydusd-path",
self.anchor.nydusd_bin,
"--config-path",
rafs_conf,
"--root",
self.root,
"--address",
self.__sock.name,
"--log-level",
"info",
"--log-to-stdout",
]
cmd = cmd + self.flags
ret, self.p = utils.run(
cmd,
wait=False,
shell=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
def shared_mount(self):
self.flags.append("--shared-daemon")
return self
def enable_nydus_overlayfs(self):
self.flags.append("--enable-nydus-overlayfs")
return self
def shutdown(self):
self.p.terminate()
self.p.wait()


@ -1,82 +0,0 @@
import socket
import array
import os
import struct
from multiprocessing import Process
import threading
import time
class RafsSupervisor:
def __init__(self, watcher_socket_name, conn_id):
self.watcher_socket_name = watcher_socket_name
self.conn_id = conn_id
@classmethod
def recv_fds(cls, sock, msglen, maxfds):
"""Function from https://docs.python.org/3/library/socket.html#socket.socket.recvmsg"""
fds = array.array("i") # Array of ints
msg, ancdata, flags, addr = sock.recvmsg(
msglen, socket.CMSG_LEN(maxfds * fds.itemsize)
)
for cmsg_level, cmsg_type, cmsg_data in ancdata:
if cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS:
# Append data, ignoring any truncated integers at the end.
fds.frombytes(
cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]
)
return msg, list(fds)
@classmethod
def send_fds(cls, sock, msg, fds):
"""Function from https://docs.python.org/3/library/socket.html#socket.socket.sendmsg"""
return sock.sendmsg(
[msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", fds))]
)
def wait_recv_fd(self, event):
try:
os.unlink(self.watcher_socket_name)
except FileNotFoundError:
pass
sock = socket.socket(family=socket.AF_UNIX)
sock.bind(self.watcher_socket_name)
event.set()
sock.listen()
client, _ = sock.accept()
msg, fds = self.recv_fds(client, 100000, 1)
self.fds = fds
self.opaque = msg
client.close()
def wait_send_fd(self):
try:
os.unlink(self.watcher_socket_name)
except FileNotFoundError:
pass
sock = socket.socket(family=socket.AF_UNIX)
sock.bind(self.watcher_socket_name)
sock.listen()
client, _ = sock.accept()
msg = self.opaque
RafsSupervisor.send_fds(client, msg, self.fds)
client.close()
def send_fd(self):
t = threading.Thread(target=self.wait_send_fd)
t.start()
def recv_fd(self):
event = threading.Event()
t = threading.Thread(target=self.wait_recv_fd, args=(event,))
t.start()
return event

File diff suppressed because it is too large.


@ -1,659 +0,0 @@
import posixpath
import subprocess
import logging
import sys
import os
import signal
from typing import Tuple
import io
import string
import random
try:
import psutil
except ModuleNotFoundError:
pass
import contextlib
import math
import enum
import datetime
import re
import random
import json
import tarfile
import pprint
import stat
import platform
def logging_setup(logging_stream=sys.stderr):
"""Inspired from Kadalu project"""
root = logging.getLogger()
if root.hasHandlers():
return
verbose = False
try:
if os.environ["NYDUS_TEST_VERBOSE"] == "YES":
verbose = True
except KeyError as _:
pass
# Errors should also be printed to screen.
handler = logging.StreamHandler(logging_stream)
if verbose:
root.setLevel(logging.DEBUG)
handler.setLevel(logging.DEBUG)
else:
root.setLevel(logging.INFO)
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
"[%(asctime)s] %(levelname)s "
"[%(module)s - %(lineno)s:%(funcName)s] "
"- %(message)s"
)
handler.setFormatter(formatter)
root.addHandler(handler)
def execute(cmd, **kwargs):
exc = None
shell = kwargs.pop("shell", False)
print_output = kwargs.pop("print_output", False)
print_cmd = kwargs.pop("print_cmd", True)
print_err = kwargs.pop("print_err", True)
if print_cmd:
logging.info("Executing command: %s" % cmd)
try:
output = subprocess.check_output(
cmd, shell=shell, stderr=subprocess.STDOUT, **kwargs
)
output = output.decode("utf-8")
if print_output:
logging.info("%s" % output)
except subprocess.CalledProcessError as exc:
o = exc.output.decode() if exc.output is not None else ""
if print_err:
logging.error(
"Command: %s\nReturn code: %d\nError output:\n%s"
% (cmd, exc.returncode, o)
)
return False, o
return True, output
def run(cmd, wait: bool = True, verbose=True, **kwargs):
if verbose:
logging.info(cmd)
else:
logging.debug(cmd)
popen_obj = subprocess.Popen(cmd, **kwargs)
if wait:
popen_obj.wait()
return popen_obj.returncode, popen_obj
def kill_all_processes(program_name, sig=signal.SIGKILL):
ret, out = execute(["pidof", program_name])
if not ret:
logging.warning("No %s running" % program_name)
return
processes = out.replace("\n", "").split(" ")
for pid in processes:
try:
logging.info("Killing process %d" % int(pid))
os.kill(int(pid), sig)
except Exception as exc:
logging.exception(exc)
def get_pid(proc_name: str) -> list:
proc_list = []
for proc in psutil.process_iter():
try:
if proc_name.lower() in proc.name().lower():
proc_list.append((proc.pid, proc.name()))
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
pass
return proc_list
def read_images_array(p) -> list:
with open(p) as f:
images = [i.rstrip("\n") for i in f.readlines() if not i.startswith("#")]
return images
@contextlib.contextmanager
def pushd(new_path: str):
previous_dir = os.getcwd()
os.chdir(new_path)
try:
yield
finally:
os.chdir(previous_dir)
def round_up(n, decimals=0):
return int(math.ceil(n / float(decimals))) * decimals
def get_current_time():
return datetime.datetime.now()
def delta_time(t_end, t_start):
delta = t_end - t_start
return delta.total_seconds(), delta.microseconds
@contextlib.contextmanager
def timer(slogan):
start = get_current_time()
try:
yield
finally:
end = get_current_time()
sec, usec = delta_time(end, start)
logging.info("%s, Takes time %u.%u seconds", slogan, sec, usec // 1000)
class Unit(enum.Enum):
Byte = 1
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
Blocks512 = 512
Blocks4096 = 4096
def get_value(self):
return self.value
class Size:
_KiB = 1024
_MiB = _KiB * 1024
_GiB = _MiB * 1024
_TiB = _GiB * 1024
_SECTOR_SIZE = 512
def __init__(self, value: int, unit: Unit = Unit.Byte):
self.bytes = value * unit.get_value()
def __index__(self):
return self.bytes
@classmethod
def from_B(cls, value):
return cls(value)
@classmethod
def from_KiB(cls, value):
return cls(value * cls._KiB)
@classmethod
def from_MiB(cls, value):
return cls(value * cls._MiB)
@classmethod
def from_GiB(cls, value):
return cls(value * cls._GiB)
@classmethod
def from_TiB(cls, value):
return cls(value * cls._TiB)
@classmethod
def from_sector(cls, value):
return cls(value * cls._SECTOR_SIZE)
@property
def B(self):
return self.bytes
@property
def KiB(self):
return self.bytes // self._KiB
@property
def MiB(self):
return self.bytes // self._MiB
@property
def GiB(self):
return self.bytes // self._GiB
@property
def TiB(self):
return self.bytes / self._TiB
@property
def sectors(self):
return self.bytes // self._SECTOR_SIZE
def __str__(self):
if self.bytes < self._KiB:
return "{}B".format(self.B)
elif self.bytes < self._MiB:
return "{}K".format(self.KiB)
elif self.bytes < self._GiB:
return "{}M".format(self.MiB)
elif self.bytes < self._TiB:
return "{}G".format(self.GiB)
else:
return "{}T".format(self.TiB)
def dump_process_mem_cpu_load(pid):
"""
https://psutil.readthedocs.io/en/latest/
"""
p = psutil.Process(pid)
mem_i = p.memory_info()
logging.info(
"[SYS LOAD]: RSS: %u(%u MB) VMS: %u(%u MB) DIRTY: %u | CPU num: %u, Usage: %f"
% (
mem_i.rss,
mem_i.rss / 1024 // 1024,
mem_i.vms,
mem_i.vms / 1024 // 1024,
mem_i.dirty,
p.cpu_num(),
p.cpu_percent(0.5),
)
)
def file_disk_usage(path):
s = os.stat(path).st_blocks * 512
return s
def list_object_to_dict(lst):
return_list = []
for l in lst:
return_list.append(object_to_dict(l))
return return_list
def object_to_dict(object):
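# Recursively convert Namespace-style objects back into plain dicts/lists so they can be JSON-serialized.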
if hasattr(object, "__dict__"):
dict = vars(object)
else:
return object
for k, v in dict.items():
if type(v).__name__ not in ["list", "dict", "str", "int", "float", "bool"]:
dict[k] = object_to_dict(v)
if type(v) is list:
dict[k] = list_object_to_dict(v)
return dict
def get_fs_type(path):
partitions = psutil.disk_partitions()
partitions.sort(reverse=True)
for part in partitions:
if path.startswith(part.mountpoint):
return part.fstype
def mess_file(path):
file_size = os.path.getsize(path)
offset = random.randint(0, file_size)
fd = os.open(path, os.O_WRONLY)
os.pwrite(fd, os.urandom(1000), offset)
os.close(fd)
# based on https://stackoverflow.com/a/42865957/2002471
units = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}
def parse_size(size):
size = size.upper()
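# Insert a space before the unit (e.g. "10MB" -> "10 MB") so the number and unit can be split apart.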
if not re.match(r" ", size):
size = re.sub(r"([KMGT]?B)", r" \1", size)
number, unit = [string.strip() for string in size.split()]
return int(float(number) * units[unit])
def clean_pagecache():
# Shell redirection requires shell=True; with an argv list, ">" would be passed to echo literally.
execute("echo 3 > /proc/sys/vm/drop_caches", shell=True)
def pretty_print(*args, **kwargs):
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(*args, **kwargs)
def is_regular(path):
mode = os.stat(path)[stat.ST_MODE]
return stat.S_ISREG(mode)
class ArtifactProcess:
def __init__(self) -> None:
super().__init__()
def shutdown(self):
pass
import gzip
def is_gzip(path):
"""
gzip.BadGzipFile: means it is not a gzip
"""
with gzip.open(path, "r") as fh:
try:
fh.read(1)
except Exception:
return False
return True
class Skopeo:
def __init__(self) -> None:
super().__init__()
self.bin = os.path.join(
"framework",
"bin",
"skopeo" if platform.machine() == "x86_64" else "skopeo.aarch64",
)
@staticmethod
def repo_from_image_ref(image):
repo = posixpath.basename(image).split(":")[0]
registry = posixpath.dirname(image)
return posixpath.join(registry, repo)
def inspect(
self, image, tls_verify=False, image_arch="amd64", features=None, verifier=None
):
"""
{
"manifests": [
{
"digest": "sha256:0415f56ccc05526f2af5a7ae8654baec97d4a614f24736e8eef41a4591f08019",
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"platform": {
"architecture": "amd64",
"os": "linux"
},
"size": 527
},
<snipped>
---
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1457,
"digest": "sha256:b97242f89c8a29d13aea12843a08441a4bbfc33528f55b60366c1d8f6923d0d4"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 764663,
"digest": "sha256:e5d9363303ddee1686b203170d78283404e46a742d4c62ac251aae5acbda8df8"
}
]
}
<snipped>
---
Example to fetch manifest by its hash
skopeo inspect --raw docker://docker.io/busybox@sha256:0415f56ccc05526f2af5a7ae8654baec97d4a614f24736e8eef41a4591f08019
"""
cmd = [self.bin, "inspect", "--raw", f"docker://{image}"]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
m = json.loads(out)
# manifest = None
digest = None
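# The raw inspect result is either a single image manifest or a manifest list.
# For a list, pick the entry matching the requested architecture/OS (and optional
# os.features), then fetch that manifest by digest.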
if m["mediaType"] == "application/vnd.docker.distribution.manifest.v2+json":
manifest = m
elif (
m["mediaType"]
== "application/vnd.docker.distribution.manifest.list.v2+json"
):
for mf in m["manifests"]:
# Choose the corresponding platform
if (
mf["platform"]["architecture"] == image_arch
and mf["platform"]["os"] == "linux"
):
if features is not None:
if "os.features" not in mf["platform"]:
continue
elif mf["platform"]["os.features"][0] != features:
logging.error("cccc %s", mf["platform"]["os.features"][0])
continue
digest = mf["digest"]
repo = Skopeo.repo_from_image_ref(image)
cmd = [
self.bin,
"inspect",
"--raw",
f"docker://{repo}@{digest}",
]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
assert p.returncode == 0
manifest = json.loads(out)
break
else:
assert False
assert isinstance(manifest, dict)
return manifest, digest
def copy_to_local(
self, image, layers, extraced_dir, tls_verify=False, resource_digest=None
):
"""
:layers: From which to decompress each layer
"""
os.makedirs(extraced_dir, exist_ok=True)
if resource_digest is not None:
repo = Skopeo.repo_from_image_ref(image)
cmd = [
self.bin,
"--insecure-policy",
"copy",
f"docker://{repo}@{resource_digest}",
f"dir:{extraced_dir}",
]
else:
cmd = [
self.bin,
"copy",
"--insecure-policy",
f"docker://{image}",
f"dir:{extraced_dir}",
]
if not tls_verify:
cmd.insert(1, "--tls-verify=false")
ret, p = run(
cmd,
wait=True,
shell=False,
stdout=subprocess.PIPE,
)
assert ret == 0
if layers is not None:
with pushd(extraced_dir):
for i in layers:
# Blob layer downloaded has no "sha256" prefix
try:
layer = i.replace("sha256:", "")
os.makedirs(i, exist_ok=True)
with tarfile.open(
layer, "r:gz" if is_gzip(layer) else "r:"
) as tar_gz:
tar_gz.extractall(path=i)
except FileNotFoundError:
logging.warning("Should already downloaded")
def copy_all_to_registry(self, source_image_tagged, dest_image_tagged):
cmd = [
self.bin,
"--insecure-policy",
"copy",
"--all",
"--tls-verify=false",
f"docker://{source_image_tagged}",
f"docker://{dest_image_tagged}",
]
ret, p = run(
cmd,
wait=True,
shell=False,
stdout=subprocess.PIPE,
)
assert ret == 0
def manifest_list(self, image, tls_verify=False):
cmd = [self.bin, "inspect", "--raw", f"docker://{image}"]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
m = json.loads(out)
if m["mediaType"] == "application/vnd.docker.distribution.manifest.v2+json":
return None
elif (
m["mediaType"]
== "application/vnd.docker.distribution.manifest.list.v2+json"
):
return m
def pretty_print(artifact: dict):
a = json.dumps(artifact, indent=4)
print(a)
def write_tar_gz(source, tar_gz):
def f(ti):
ti.name = os.path.relpath(ti.name, start=source)
return ti
with tarfile.open(tar_gz, "w:gz") as t:
t.add(source, arcname="")
def parse_stargz(stargz):
"""
The footer MUST be the following 51 bytes (1 byte = 8 bits in gzip).
Footer format:
- 10 bytes gzip header
- 2 bytes XLEN (length of Extra field) = 26 (4 bytes header + 16 hex digits + len("STARGZ"))
- 2 bytes Extra: SI1 = 'S', SI2 = 'G'
- 2 bytes Extra: LEN = 22 (16 hex digits + len("STARGZ"))
- 22 bytes Extra: subfield = fmt.Sprintf("%016xSTARGZ", offsetOfTOC)
- 5 bytes flate header: BFINAL = 1(last block), BTYPE = 0(non-compressed block), LEN = 0
- 8 bytes gzip footer
(End of eStargz)
"""
f = open(stargz, "rb")
f.seek(-51, 2)
footer = f.read(51)
assert len(footer) == 51
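# Skip the 10-byte gzip header, 2-byte XLEN and 4-byte extra subfield header; the next 16 bytes are the TOC offset encoded in hex.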
header_extra = footer[16:]
toc_offset = header_extra[0:16]
toc_offset = int(toc_offset.decode("utf-8"), base=16)
f.seek(toc_offset)
toc_gzip = f.read(toc_offset - 51)
toc_tar = gzip.decompress(toc_gzip)
t = io.BytesIO(toc_tar)
with tarfile.open(fileobj=t, mode="r") as tf:
def is_within_directory(directory, target):
abs_directory = os.path.abspath(directory)
abs_target = os.path.abspath(target)
prefix = os.path.commonprefix([abs_directory, abs_target])
return prefix == abs_directory
def safe_extract(tar, path=".", members=None, *, numeric_owner=False):
for member in tar.getmembers():
member_path = os.path.join(path, member.name)
if not is_within_directory(path, member_path):
raise Exception("Attempted Path Traversal in Tar File")
tar.extractall(path, members, numeric_owner=numeric_owner)
safe_extract(tf)
f.close()
return "stargz.index.json"
def docker_image_repo(reference):
return posixpath.basename(reference).split(":")[0]
def random_string(l=64):
res = "".join(random.choices(string.ascii_uppercase + string.digits, k=l))
return res


@ -1,208 +0,0 @@
from abc import ABCMeta, abstractmethod
from distributor import Distributor
from utils import Size, Unit, pushd
import xattr
import os
import utils
from workload_gen import WorkloadGen
"""
Scratch a target directory.
Verify the image according to each verifier's schema.
"""
class Verifier:
__metaclass__ = ABCMeta
def __init__(self, target, dist: Distributor):
self.target = target
self.dist = dist
@abstractmethod
def scratch(self):
pass
@abstractmethod
def verify(self):
pass
class XattrVerifier(Verifier):
def __init__(self, target, dist: Distributor):
super().__init__(target, dist)
def scratch(self, scratch_dir):
"""Put various kinds of xattr value into.
1. Very long value
2. a common short value
3. Nothing resides in value field
4. Single file, multiple pairs.
5. /n
6. whitespace
7. 中文
8. Binary
9. Only key?
"""
self.dist.put_symlinks(100)
files_cnt = 20
self.dist.put_multiple_files(files_cnt, Size(9, Unit.KB))
self.scratch_dir = os.path.abspath(scratch_dir)
self.source_files = {}
self.source_xattrs = {}
self.source_dirs = {}
self.source_dirs_xattrs = {}
self.encoding = "gb2312"
self.xattr_pairs = 50 if utils.get_fs_type(os.getcwd()) == "xfs" else 20
# TODO: Only key without values?
with pushd(self.scratch_dir):
for f in self.dist.files[-files_cnt:]:
relative_path = os.path.relpath(f, start=self.scratch_dir)
self.source_xattrs[relative_path] = {}
for idx in range(0, self.xattr_pairs):
# TODO: Random this Key
k = f"trusted.nydus.{Distributor.generate_random_name(20, chinese=True)}"
v = f"_{Distributor.generate_random_length_name(20, chinese=True)}"
xattr.setxattr(f, k.encode(self.encoding), v.encode(self.encoding))
# Use relative or canonicalized names as key to locate
# path in source rootfs directory. So we verify if image is
# packed correctly.
self.source_files[relative_path] = os.path.abspath(f)
self.source_xattrs[relative_path][k] = v
dir_cnt = 20
self.dist.put_directories(dir_cnt)
# Add xattr key-value pairs to directories.
with pushd(self.scratch_dir):
for d in self.dist.dirs[-dir_cnt:]:
relative_path = os.path.relpath(d, start=self.scratch_dir)
self.source_dirs_xattrs[relative_path] = {}
for idx in range(0, self.xattr_pairs):
# TODO: Random this Key
k = f"trusted.{Distributor.generate_random_name(20)}"
v = f"{Distributor.generate_random_length_name(50)}"
xattr.setxattr(d, k, v.encode())
# Use relative or canonicalized names as key to locate
# path in source rootfs directory. So we verify if image is
# packed correctly.
self.source_dirs[relative_path] = os.path.abspath(d)
self.source_dirs_xattrs[relative_path][k] = v
def verify(self, target_dir):
""""""
with pushd(target_dir):
for f in self.source_files.keys():
fp = os.path.join(target_dir, f)
attrs = os.listxattr(path=fp, follow_symlinks=False)
assert len(attrs) == self.xattr_pairs
for k in self.source_xattrs[f].keys():
v = os.getxattr(fp, k.encode(self.encoding)).decode(self.encoding)
assert v == self.source_xattrs[f][k]
attrs = os.listxattr(fp, follow_symlinks=False)
if self.encoding != "gb2312":
for attr in attrs:
v = xattr.getxattr(f, attr)
assert attr in self.source_xattrs[f].keys()
assert v.decode(self.encoding) == self.source_xattrs[f][attr]
with pushd(target_dir):
for d in self.source_dirs.keys():
dp = os.path.join(target_dir, d)
attrs = xattr.listxattr(dp)
assert len(attrs) == self.xattr_pairs
for attr in attrs:
v = xattr.getxattr(d, attr)
assert attr in self.source_dirs_xattrs[d].keys()
assert v.decode(self.encoding) == self.source_dirs_xattrs[d][attr]
class SymlinkVerifier(Verifier):
def __init__(self, target, dist: Distributor):
super().__init__(target, dist)
def scratch(self):
# TODO: directory symlinks?
self.dist.put_symlinks(140)
self.dist.put_symlinks(24, chinese=True)
def verify(self, target_dir, source_dir):
for sl in self.dist.symlinks:
vt = os.path.join(target_dir, sl)
st = os.path.join(source_dir, sl)
assert os.readlink(st) == os.readlink(vt)
class HardlinkVerifier(Verifier):
def __init__(self, target, dist):
super().__init__(target, dist)
def scratch(self):
self.dist.put_hardlinks(30)
self.outer_source_name = "outer_source"
self.inner_hardlink_name = "inner_hardlink"
with pushd(os.path.dirname(os.path.realpath(self.dist.top_dir))):
fd = os.open(self.outer_source_name, os.O_CREAT | os.O_RDWR)
os.close(fd)
os.link(
self.outer_source_name,
os.path.join(self.target, self.inner_hardlink_name),
)
assert (
os.stat(os.path.join(self.target, self.inner_hardlink_name)).st_nlink == 2
)
def verify(self, target_dir, source_dir):
for links in self.dist.hardlinks.values():
try:
links_iter = iter(links)
l = next(links_iter)
except StopIteration:
continue
t_hl_path = os.path.join(target_dir, l)
last_md5 = WorkloadGen.calc_file_md5(t_hl_path)
last_stat = os.stat(t_hl_path)
last_path = t_hl_path
for l in links_iter:
t_hl_path = os.path.join(target_dir, l)
t_hl_md5 = WorkloadGen.calc_file_md5(t_hl_path)
t_hl_stat = os.stat(t_hl_path)
assert last_md5 == t_hl_md5
assert (
last_stat == t_hl_stat
), f"last hardlink path {last_path}, cur hardlink path {t_hl_path}"
last_md5 = t_hl_md5
last_stat = t_hl_stat
last_path = t_hl_path
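# The other link (`outer_source`) lives outside the image rootfs, so only the
# inner link is packed into the image and its nlink drops back to 1.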
with pushd(target_dir):
assert (
os.stat(os.path.join(target_dir, self.inner_hardlink_name)).st_nlink
== 1
)
class DirectoryVerifier(Verifier):
pass
class FileModeVerifier(Verifier):
pass
class UGIDVerifier(Verifier):
pass
class SparseVerifier(Verifier):
pass


@ -1,113 +0,0 @@
from utils import pushd
import os
import shutil
import xattr
import stat
import enum
class WhiteoutSpec(enum.Enum):
OCI = "oci"
OVERLAY = "overlayfs"
def get_value(self):
return self.value
def __str__(self) -> str:
return self.get_value()
class Whiteout:
opaque_dir_key = "trusted.overlay.opaque".encode()
opaque_dir_value = "y".encode()
def __init__(self, spec=WhiteoutSpec.OCI) -> None:
super().__init__()
self.spec = spec
@staticmethod
def mirror_fs_structure(top, path):
"""
:top: Target dir in which to construct the mirrored tree.
:path: Should be a relative path like `a/b/c`.
This function creates directories recursively up to the last component;
the caller is responsible for creating the target file or directory itself.
"""
path = os.path.normpath(path)
dir_path = ""
with pushd(top):
for d in path.split("/")[:-1]:
try:
os.chdir(d)
except FileNotFoundError:
if len(d) == 0:
continue
os.mkdir(d)
os.chdir(d)
finally:
dir_path += d + "/"
return dir_path, path.split("/")[-1]
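# Example (illustrative values): mirror_fs_structure("/tmp/upper", "a/b/c") creates
# /tmp/upper/a and /tmp/upper/a/b if missing and returns ("a/b/", "c"); the caller
# then decides whether "c" becomes a regular file, a directory or a whiteout entry.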
@staticmethod
def mirror_files(files, original_rootfs, target_rootfs):
"""
files paths relative to rootfs, e.g.
foo/bar/f
"""
for f in files:
mirrored_path, name = Whiteout.mirror_fs_structure(target_rootfs, f)
src_path = os.path.join(original_rootfs, f)
dst_path = os.path.join(target_rootfs, mirrored_path, name)
shutil.copyfile(src_path, dst_path, follow_symlinks=False)
def whiteout_one_file(self, top, lower_relpath):
"""
:top: The top root directory from which to mirror from lower relative path.
:lower_relpath: Should look like `a/b/c` and this function puts `{top}/a/b/.wh.c` into upper layer
"""
whiteout_file_parent, whiteout_file = Whiteout.mirror_fs_structure(
top, lower_relpath
)
if self.spec == WhiteoutSpec.OCI:
f = os.open(
os.path.join(top, whiteout_file_parent, f".wh.{whiteout_file}"),
os.O_CREAT,
)
os.close(f)
elif self.spec == WhiteoutSpec.OVERLAY:
d = os.path.join(top, whiteout_file_parent, whiteout_file)
os.mknod(
d,
0o644 | stat.S_IFCHR,
0,
)
# Whiting out a regular file does not need such an xattr pair, but set it anyway as naughty-monkey testing.
xattr.setxattr(d, self.opaque_dir_key, self.opaque_dir_value)
def whiteout_opaque_directory(self, top, lower_relpath):
upper_opaque_dir = os.path.join(top, lower_relpath)
if self.spec == WhiteoutSpec.OCI:
os.makedirs(upper_opaque_dir, exist_ok=True)
f = os.open(os.path.join(upper_opaque_dir, ".wh..wh..opq"), os.O_CREAT)
os.close(f)
elif self.spec == WhiteoutSpec.OVERLAY:
os.makedirs(upper_opaque_dir, exist_ok=True)
xattr.setxattr(upper_opaque_dir, self.opaque_dir_key, self.opaque_dir_value)
def whiteout_one_dir(self, top, lower_relpath):
whiteout_dir_parent, whiteout_dir = Whiteout.mirror_fs_structure(
top, lower_relpath
)
if self.spec == WhiteoutSpec.OCI:
os.makedirs(os.path.join(top, whiteout_dir_parent, f".wh.{whiteout_dir}"))
elif self.spec == WhiteoutSpec.OVERLAY:
d = os.path.join(top, whiteout_dir_parent, whiteout_dir)
os.mknod(
d,
0o644 | stat.S_IFCHR,
0,
)
# Whiting out a directory does not need such an xattr pair either, but set it anyway as naughty-monkey testing.
xattr.setxattr(d, self.opaque_dir_key, self.opaque_dir_value)
xattr.setxattr(d, "trusted.nydus.opaque", "y".encode())


@ -1,558 +0,0 @@
import multiprocessing
import os
import random
import threading
from stat import *
from utils import logging_setup, Unit, Size, pushd, dump_process_mem_cpu_load
import logging
import datetime
import hashlib
import time
import io
from multiprocessing import Queue, current_process
import stat
def get_current_time():
return datetime.datetime.now()
def rate_limit(interval_rate):
last = datetime.datetime.now()
def inner(func):
def wrapped(*args):
nonlocal last
if (datetime.datetime.now() - last).seconds > interval_rate:
func(*args)
last = datetime.datetime.now()
return wrapped
return inner
@rate_limit(interval_rate=5)
def dump_status(name, cnt):
logging.info("Process %d - %s verified %lu files", os.getpid(), name, cnt)
size_list = [
1,
8,
13,
16,
19,
32,
64,
101,
100,
102,
100,
256,
Size(4, Unit.KB).B,
Size(7, Unit.KB).B,
Size(8, Unit.KB).B,
Size(16, Unit.KB).B,
Size(17, Unit.KB).B,
Size(1, Unit.MB).B - 100,
Size(1, Unit.MB).B,
Size(3, Unit.MB).B - Size(2, Unit.KB).B,
Size(3, Unit.MB).B,
Size(4, Unit.MB).B,
]
class WorkloadGen:
def __init__(self, target_dir, verify_dir):
"""
:target_dir: Directory against which IO is generated
:verify_dir: Usually the original rootfs of the test image
"""
self.target_dir = target_dir
self.verify_dir = verify_dir
self.verify = True
self.io_error = False
self.verifier = {} # For append write verification
logging.info(
"Target dir: %s, Verified dir: %s", self.target_dir, self.verify_dir
)
def collect_all_dirs(self):
# In case this function is called more than once.
if hasattr(self, "collected"):
return
self.collected = True
self._collected_dirs = []
self._collected_dirs.append(self.target_dir)
with pushd(self.target_dir):
self._collect_each_dir(self.target_dir, self.target_dir)
def _collect_each_dir(self, root_dir, parent_dir):
files = os.listdir(parent_dir)
with pushd(parent_dir):
for one in files:
st = os.lstat(one)
if S_ISDIR(st.st_mode) and len(os.listdir(one)) != 0:
realpath = os.path.realpath(one)
self._collected_dirs.append(realpath)
self._collect_each_dir(root_dir, one)
else:
continue
def iter_all_files(self, file_op, dir_op=None):
for (cur_dir, subdirs, files) in os.walk(
self.target_dir, topdown=True, followlinks=False
):
with pushd(cur_dir):
for f in files:
file_op(f)
if dir_op is not None:
for d in subdirs:
dir_op(d)
def verify_single_file(self, path_from_mp):
target_md5 = WorkloadGen.calc_file_md5(path_from_mp)
# Locate the corresponding source file so its md5 can be calculated and verified later.
source_path = os.path.join(
self.verify_dir, os.path.relpath(path_from_mp, start=self.target_dir)
)
source_md5 = WorkloadGen.calc_file_md5(source_path)
assert (
target_md5 == source_md5
), f"Verification error. Want {source_md5} but got {target_md5}"
@staticmethod
def count_files(top_dir):
"""
Including hidden files and directories.
Count every entry within `top_dir`, whether it is an OCI special file or not.
"""
total = 0
for (cur_dir, subdirs, files) in os.walk(
top_dir, topdown=True, followlinks=False
):
total += len(files)
total += len(subdirs)
logging.info("%d is counted!", total)
return total
@staticmethod
def calc_file_md5(path):
md5 = hashlib.md5()
with open(path, "rb") as f:
for block in iter(lambda: f.read(Size(128, Unit.KB).B), b""):
md5.update(block)
return md5.digest()
def __verify_one_level(self, path_queue, conn):
target_files = []
cnt = 0
err_cnt = 0
name = current_process().name
while True:
# In newer Python versions, a closed multiprocessing queue can be detected,
# so we would not need to rely on this timeout.
try:
(abs_dir, dirs, files) = path_queue.get(timeout=3)
except Exception as exc:
logging.info("Verify process %s finished.", name)
conn.send((target_files, cnt, err_cnt))
return
dump_status(name, cnt)
sub_dir_count = 0
for f in files:
# Per the OCI image spec, whiteout special files should not be visible in the mounted filesystem.
# TODO: uncomment me!
assert not f.startswith(".wh.")
# don't try to validate symlink
cur_path = os.path.join(abs_dir, f)
relpath = os.path.relpath(cur_path, start=self.target_dir)
target_files.append(relpath)
source_path = os.path.join(self.verify_dir, relpath)
try:
if os.path.islink(cur_path):
if os.readlink(cur_path) != os.readlink(source_path):
err_cnt += 1
logging.error("Symlink mismatch, %s", cur_path)
elif os.path.isfile(cur_path):
# TODO: How to verify special files?
cur_md5 = WorkloadGen.calc_file_md5(cur_path)
source_md5 = WorkloadGen.calc_file_md5(source_path)
if cur_md5 != source_md5:
err_cnt += 1
logging.error("Verification error. File %s", cur_path)
assert False
elif stat.S_ISBLK(os.stat(cur_path).st_mode):
assert (
os.stat(cur_path).st_rdev == os.stat(source_path).st_rdev
), f"left {os.stat(cur_path).st_rdev} while right {os.stat(source_path).st_rdev} "
elif stat.S_ISCHR(os.stat(cur_path).st_mode):
assert (
os.stat(cur_path).st_rdev == os.stat(source_path).st_rdev
), f"left {os.stat(cur_path).st_rdev} while right {os.stat(source_path).st_rdev} "
elif stat.S_ISFIFO(os.stat(cur_path).st_mode):
pass
elif stat.S_ISSOCK(os.stat(cur_path).st_mode):
pass
except AssertionError as exp:
logging.warning("current %s, source %s", cur_path, source_path)
raise exp
cnt += 1
for d in dirs:
assert not d.startswith(".wh.")
cur_path = os.path.join(abs_dir, d)
relpath = os.path.relpath(cur_path, start=self.target_dir)
target_files.append(relpath)
# Directory nlink should equal 2 + the number of child directories
if not os.path.islink(cur_path):
sub_dir_count += 1
assert sub_dir_count + 2 == os.stat(abs_dir).st_nlink
def verify_entire_fs(self, filter_list: list = []) -> bool:
cnt = 0
err_cnt = 0
target_files = set()
processes = []
# Underlying threads transfer objects through the queue. Keep its size small so
# that fewer errors are printed once one side of the queue is closed.
path_queue = Queue(20)
for i in range(8):
(parent_conn, child_conn) = multiprocessing.Pipe(False)
p = multiprocessing.Process(
name=f"verifier_{i}",
target=self.__verify_one_level,
args=(path_queue, child_conn),
)
p.start()
processes.append((p, parent_conn))
for (abs_dir, dirs, files) in os.walk(self.target_dir, topdown=True):
try:
path_queue.put((abs_dir, dirs, files))
except Exception:
return False
for (p, conn) in processes:
try:
(child_files, child_cnt, child_err_cnt) = conn.recv()
except EOFError:
logging.error("EOF")
return False
p.join()
target_files.update(child_files)
cnt += child_cnt
err_cnt += child_err_cnt
path_queue.close()
path_queue.join_thread()
del path_queue
logging.info("Verified %u files in %s", cnt, self.target_dir)
if err_cnt > 0:
logging.error("Verify fails, %u errors", err_cnt)
return False
# Collect files belonging to the original rootfs into `source_files`.
# Criteria is that each file in `source_files` should appear in the rafs.
source_files = set()
opaque_dirs = []
for (abs_dir, dirs, files) in os.walk(self.verify_dir):
for f in files:
cur_path = os.path.join(abs_dir, f)
relpath = os.path.relpath(cur_path, start=self.verify_dir)
source_files.add(relpath)
if f == ".wh..wh..opq":
opaque_dirs.append(os.path.relpath(abs_dir, start=self.verify_dir))
for d in dirs:
cur_path = os.path.join(abs_dir, d)
relpath = os.path.relpath(cur_path, start=self.verify_dir)
source_files.add(relpath)
diff_files = list()
for el in source_files:
if el not in target_files:
diff_files.append(el)
trimmed_diff_files = []
whiteout_files = [
(
os.path.basename(f),
os.path.join(
os.path.dirname(f), os.path.basename(f).replace(".wh.", "", 1)
),
)
for f in diff_files
if os.path.basename(f).startswith(".wh.")
]
# The only possible reason we have different files is due to whiteout
for suspect in diff_files:
for d in opaque_dirs:
if suspect.startswith(d):
trimmed_diff_files.append(suspect)
continue
# Overlayfs does not seem to hide the opaque special (char) file if there is nothing to white out.
try:
# Example: c????????? ? ? ? ? ? foo
with open(os.path.join(self.verify_dir, suspect), "rb") as f:
pass
except OSError as e:
if e.errno == 2:
trimmed_diff_files.append(suspect)
else:
pass
# For example:
# ['DIR.0.0/pQGLzKTWSpaCatjcwAqiZAGOxbfexiOvVsXqFqUhldTxLsIpONVnavybHObiCZepXsLyoPwDAXOoDtJFdZVUlrisTDaenJhsJVXegHuTMzFFqhowZAfcgggxVfEvXDtAVakarhSkZhavBtuuTFPOqgyowbI.regular',
# 'DIR.0.0/.wh.pQGLzKTWSpaCatjcwAqiZAGOxbfexiOvVsXqFqUhldTxLsIpONVnavybHObiCZepXsLyoPwDAXOoDtJFdZVUlrisTDaenJhsJVXegHuTMzFFqhowZAfcgggxVfEvXDtAVakarhSkZhavBtuuTFPOqgyowbI.regular',
# 'DIR.0.0/DIR.1.1/DIR.2.0/zktaNKmXMVgITVbAUFHpNfvECfVIdO.dir', 'DIR.0.0/DIR.1.1/DIR.2.0/.wh.zktaNKmXMVgITVbAUFHpNfvECfVIdO.dir', 'i/am/troublemaker/.wh.foo']
if len(whiteout_files):
base = os.path.basename(suspect)
if f".wh.{base}" in list(zip(*whiteout_files))[0]:
trimmed_diff_files.append(suspect)
for (_, s) in whiteout_files:
if suspect.startswith(s):
trimmed_diff_files.append(suspect)
diff_files = list(
filter(
lambda x: x not in trimmed_diff_files
and x not in filter_list
and not os.path.basename(x).startswith(".wh."),
diff_files,
)
)
assert len(diff_files) == 0, f"unexpected diff files: {diff_files}"
return True
def read_collected_files(self, duration):
"""
Randomly select a file from a random directory which was collected
when set up this workload generator. No dir recursive read happens.
"""
dirs_cnt = len(self._collected_dirs)
logging.info("Total %u directories will be have stress read", dirs_cnt)
t_begin = get_current_time()
t_delta = t_begin - t_begin
op_cnt, total_size = 0, 0
while t_delta.total_seconds() <= duration:
target_dir = random.choice(self._collected_dirs)
files = os.listdir(target_dir)
target_file = random.choice(files)
one_path = os.path.join(target_dir, target_file)
if os.path.isdir(one_path):
os.listdir(one_path)
continue
if os.path.islink(one_path):
# Don't expect anything broken to happen.
os.readlink(one_path)
relpath = os.path.relpath(one_path, start=self.target_dir)
sym_path = os.path.join(self.verify_dir, relpath)
assert os.readlink(one_path) == os.readlink(sym_path)
continue
if not os.path.isfile(one_path):
continue
with open(one_path, "rb") as f:
st = os.stat(one_path)
file_size = st.st_size
do_read = True
while do_read:
# Select a file position randomly
pos = random.randint(0, file_size)
try:
f.seek(pos)
except io.UnsupportedOperation as exc:
logging.exception(exc)
break
except Exception as exc:
raise type(exc)(
str(exc)
+ f"Seek pos {pos}, file {one_path}, file size {file_size}"
)
io_size = WorkloadGen.pick_io_size()
logging.debug(
"File %s , Pos %u, IO Size %u", target_file, pos, io_size
)
op_cnt += 1
total_size += io_size
try:
buf = f.read(io_size)
assert io_size == len(buf) or file_size - pos == len(
buf
), f"file path {one_path}: io_size {io_size} buf len {len(buf)} file_size {file_size} pos {pos}"
except IOError as exc:
logging.error(
"file %s, offset %u, io size %u", one_path, pos, io_size
)
raise exc
if random.randint(0, 13) % 4 == 0:
do_read = False
if self.verify:
self.verify_file_range(one_path, pos, io_size, buf)
t_delta = get_current_time() - t_begin
return op_cnt, total_size, t_delta.total_seconds()
def verify_file_range(self, file_path, offset, length, buf):
relpath = os.path.relpath(file_path, start=self.target_dir)
file_path = os.path.join(self.verify_dir, relpath)
with open(file_path, "rb") as f:
f.seek(offset)
out = f.read(length)
orig_md5 = hashlib.md5(out).digest()
buf_md5 = hashlib.md5(buf).digest()
if orig_md5 != buf_md5:
logging.error(
"File Verification error. path: %s offset: %lu len: %u. want %s but got %s",
file_path,
offset,
length,
str(orig_md5),
str(buf_md5),
)
raise Exception(
f"Verification error {file_path} {offset} {length} failed."
)
def io_read(self, io_duration, conn=None):
try:
cnt, size, duration = self.read_collected_files(io_duration)
WorkloadGen.print_summary(cnt, size, duration)
except Exception as exc:
logging.exception("Stress read failure, %s", exc)
self.io_error = True
finally:
if conn is not None:
conn.send(self.io_error)
conn.close()
def setup_workload_generator(self):
self.collect_all_dirs()
def torture_read(self, threads_cnt: int, duration: int, verify=True):
readers_list = []
self.verify = verify
for idx in range(0, threads_cnt):
reader_name = "rafs_reader_%d" % idx
(parent_conn, child_conn) = multiprocessing.Pipe(False)
rafs_reader = multiprocessing.Process(
name=reader_name,
target=self.io_read,
args=(duration, child_conn),
)
logging.info("Reader %s starts work" % reader_name)
readers_list.append((rafs_reader, parent_conn))
rafs_reader.start()
self.readers = readers_list
def finish_torture_read(self):
for one in self.readers:
self.io_error = one[1].recv() or self.io_error
one[0].join()
if self.verify:
assert not self.io_error
self.stop_load_monitor()
@classmethod
def print_summary(cls, cnt, size, duration):
logging.info(
"Issued reads: %(cnt)lu Total read size: %(size)lu bytes Time duration: %(duration)u"
% {"cnt": cnt, "size": size, "duration": duration}
)
@staticmethod
def pick_io_size():
return random.choice(size_list)
@staticmethod
def issue_single_write(file_name, offset, bs: Size, size: Size):
"""
:size: Total amount of data to be written
:bs: Block size of each write IO
:offset: File offset at which to start writing
"""
block = os.urandom(bs.B)
left = size.B
fd = os.open(file_name, os.O_RDWR)
while left > 0:
os.pwrite(fd, block, offset + size.B - left)
left -= bs.B
os.close(fd)
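# Example (illustrative): issue_single_write("f", 0, Size(4, Unit.KB), Size(16, Unit.KB))
# rewrites the first 16KB of "f" with four 4KB pwrite() calls at offsets 0, 4K, 8K and 12K.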
@staticmethod
def issue_single_read(dir, file_name, offset: Size, bs: Size):
with pushd(dir):
with open(file_name, "rb") as f:
buf = os.pread(f.fileno(), bs.B, offset.B)
return buf
def start_load_monitor(self, pid):
def _dump_mem_info(anchor, pid):
while not self.monitor_stopped:
dump_process_mem_cpu_load(pid)
time.sleep(2)
self.load_monitor = threading.Thread(
name="load_monitor", target=_dump_mem_info, args=(self, pid)
)
self.monitor_stopped = False
self.load_monitor.start()
def stop_load_monitor(self):
if "load_monitor" in self.__dict__:
self.monitor_stopped = True
self.load_monitor.join()
if __name__ == "__main__":
print("This is workload generator")
with open("append_test", "a") as f:
wg = WorkloadGen(None, None)
wg.do_append(f.fileno(), Size(1, Unit.KB), Size(16, Unit.KB))
wg = WorkloadGen(".", None)
wg.torture_append(2, Size(1, Unit.KB), Size(16, Unit.MB))
wg.finish_torture_append()


@ -1,18 +0,0 @@
depth: 4
width: 6
layers:
- layer1:
- size: 10KB
type: regular
count: 5
- size: 4MB
type: regular
count: 30
- size: 128KB
type: regular
count: 100
- size: 90MB
type: regular
count: 1
- type: symlink
count: 100


@ -1,2 +0,0 @@
import sys
sys.path.append('framework')


@ -1,277 +0,0 @@
import pytest
from distributor import Distributor
from rafs import Backend, NydusDaemon, RafsConf, RafsImage
from workload_gen import WorkloadGen
from nydus_anchor import NydusAnchor
from utils import logging_setup, Size, Unit
import utils
from nydusd_client import NydusAPIClient
import nydusd_client
import os
import logging
import time
import random
logging_setup()
def test_daemon_info(nydus_anchor, nydus_image, rafs_conf: RafsConf):
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
nc = NydusAPIClient(rafs.get_apisock())
nc.get_wait_daemon()
@pytest.mark.skip(reason="The files metrics json body is too large")
def test_iostats(
nydus_anchor: NydusAnchor, nydus_image: RafsImage, rafs_conf: RafsConf
):
rafs_id = "/"
rafs_conf.enable_files_iostats().enable_latest_read_files().set_rafs_backend(
Backend.BACKEND_PROXY
)
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
assert rafs.is_mounted()
nc = NydusAPIClient(rafs.get_apisock())
duration = 5
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(4, duration)
while duration:
time.sleep(1)
duration -= 1
nc.get_global_metrics()
nc.get_files_metrics(rafs_id)
nc.get_backend_metrics(rafs_id)
wg.finish_torture_read()
duration = 7
wg.torture_read(4, duration)
# Disable it first and then enable it.
# TODO: files metrics can't be toggled dynamically now. Try to implement it.
# nc.disable_files_metrics(rafs_id)
# nc.enable_files_metrics(rafs_id)
r = nc.get_latest_files_metrics(rafs_id)
print(r)
while duration:
time.sleep(1)
duration -= 1
nc.get_files_metrics(rafs_id)
wg.finish_torture_read()
rafs.umount()
def test_global_metrics(
nydus_anchor: NydusAnchor, nydus_image: RafsImage, rafs_conf: RafsConf
):
rafs_conf.enable_files_iostats().set_rafs_backend(Backend.BACKEND_PROXY)
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
nc = NydusAPIClient(rafs.get_apisock())
gm = nc.get_global_metrics()
assert gm["files_account_enabled"]
assert gm["measure_latency"]
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
wg.setup_workload_generator()
wg.io_read(4)
gm_old = nc.get_global_metrics()
wg.io_read(4)
gm_new = nc.get_global_metrics()
assert gm_new["data_read"] > gm_old["data_read"]
assert (
gm_new["fop_hits"][nydusd_client.Fop.Read.get_value()]
> gm_old["fop_hits"][nydusd_client.Fop.Read.get_value()]
)
rafs.umount()
@pytest.mark.skip(reason="backend swap keeps get response 500 for no reason.")
def test_backend_swap(
nydus_anchor, nydus_scratch_image: RafsImage, rafs_conf: RafsConf
):
dist = Distributor(nydus_scratch_image.rootfs(), 5, 4)
dist.generate_tree()
dist.put_multiple_files(100, Size(2, Unit.MB))
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image(
prefetch_policy="fs", prefetch_files="/".encode()
)
rafs_conf.set_rafs_backend(
Backend.BACKEND_PROXY
).enable_rafs_blobcache().enable_fs_prefetch(
threads_count=7, bandwidth_rate=Size(2, Unit.MB).B
)
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, None, rafs_conf, with_defaults=False)
rafs.thread_num(4).set_mountpoint(nydus_anchor.mountpoint).apisock(
"api_sock"
).mount()
nc = NydusAPIClient(rafs.get_apisock())
nc.pseudo_fs_mount(nydus_scratch_image.bootstrap_path, "/", rafs_conf.path(), None)
nc.umount_rafs("/")
assert len(os.listdir(nydus_anchor.mountpoint)) == 0
mp = "/pseudo1"
nc.pseudo_fs_mount(nydus_scratch_image.bootstrap_path, mp, rafs_conf.path(), None)
rafs_conf_2nd = RafsConf(nydus_anchor, nydus_scratch_image)
rafs_conf_2nd.set_rafs_backend(
Backend.LOCALFS, image=nydus_scratch_image
).enable_rafs_blobcache().enable_fs_prefetch(
threads_count=3, bandwidth_rate=Size(1, Unit.MB).B
)
rafs_conf_2nd.dump_rafs_conf()
new_image = (
RafsImage(nydus_anchor, nydus_scratch_image.rootfs())
.set_backend(Backend.BACKEND_PROXY)
.create_image(prefetch_policy="fs", prefetch_files="/".encode())
)
# TODO: Once upon a time, more than one fd are open. Check why this happens.
wg = WorkloadGen(
os.path.join(nydus_anchor.mountpoint, mp.strip("/")),
nydus_scratch_image.rootfs(),
)
wg.setup_workload_generator()
wg.torture_read(8, 8)
for i in range(1, 50):
logging.debug("swap for the %dth time", i)
nc.swap_backend(mp, new_image.bootstrap_name, rafs_conf_2nd.path())
# assert nc.get_blobcache_metrics(mp)["prefetch_workers"] == 3
time.sleep(0.2)
nc.swap_backend(mp, nydus_scratch_image.bootstrap_name, rafs_conf.path())
utils.clean_pagecache()
wg.finish_torture_read()
assert wg.io_error == False
nc.umount_rafs(mp)
utils.clean_pagecache()
def test_access_pattern(
nydus_anchor: NydusAnchor, nydus_image: RafsImage, rafs_conf: RafsConf
):
rafs_id = "/"
rafs_conf.enable_access_pattern().set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
nc = NydusAPIClient(rafs.get_apisock())
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(4, 8)
duration = 4
while duration:
time.sleep(1)
duration -= 1
global_metrics = nc.get_global_metrics()
global_metrics["access_pattern_enabled"] == True
patterns = nc.get_access_patterns(rafs_id)
assert len(patterns) != 0
patterns = nc.get_access_patterns()
assert len(patterns) != 0
nc.get_access_patterns("poison")
wg.finish_torture_read()
def test_api_mount_with_prefetch(
nydus_anchor, nydus_image: RafsImage, rafs_conf: RafsConf
):
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
hint_files = ["/"]
rafs = NydusDaemon(nydus_anchor, None, None, with_defaults=False)
# Prefetch must enable blobcache
rafs_conf.enable_rafs_blobcache()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.enable_fs_prefetch(threads_count=4)
rafs_conf.dump_rafs_conf()
rafs.set_mountpoint(nydus_anchor.mountpoint).apisock("api_sock").mount(
dump_config=False,
)
nc = NydusAPIClient(rafs.get_apisock())
nc.pseudo_fs_mount(
nydus_image.bootstrap_path,
"/pseudo_fs_1",
rafs_conf.path(),
hint_files,
"rafs",
)
# Only one rafs mountpoint exists, so it does not matter whether a rafs id is set.
time.sleep(0.5)
m = nc.get_blobcache_metrics()
assert m["prefetch_data_amount"] != 0
wg = WorkloadGen(
os.path.join(nydus_anchor.mountpoint, "pseudo_fs_1"), nydus_image.rootfs()
)
wg.setup_workload_generator()
wg.torture_read(4, 8)
wg.finish_torture_read()
m = nc.get_blobcache_metrics("/pseudo_fs_1")
def test_detect_io_hang(nydus_anchor, nydus_image: RafsImage, rafs_conf: RafsConf):
rafs_conf.enable_files_iostats().set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.thread_num(5).mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(4, 8)
nc = NydusAPIClient(rafs.get_apisock())
for _ in range(0, 30):
ops = nc.get_inflight_metrics()
time.sleep(0.1)
print(ops)
wg.finish_torture_read()


@ -1,88 +0,0 @@
from nydus_anchor import NydusAnchor
from rafs import NydusDaemon, RafsImage, BlobEntryConf, Backend
import pytest
import uuid
from erofs import Erofs
from nydusd_client import NydusAPIClient
import time
from workload_gen import WorkloadGen
from distributor import Distributor
import random
from utils import logging_setup, Size, Unit
logging_setup()
def test_basic(nydus_anchor: NydusAnchor, nydus_image: RafsImage):
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
daemon = NydusDaemon(nydus_anchor, None, None, mode="singleton")
daemon.set_fscache().start()
nc = NydusAPIClient(daemon.get_apisock())
fsid = str(uuid.uuid4())
blob_conf = (
BlobEntryConf(nydus_anchor)
.set_type("bootstrap")
.set_metadata_path(nydus_image.bootstrap_path)
.set_fsid(fsid)
.set_backend()
)
time.sleep(1)
nc.bind_fscache_blob(blob_conf)
erofs = Erofs()
erofs.mount(fsid=fsid, mountpoint=nydus_anchor.mountpoint)
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
workload_gen.setup_workload_generator()
workload_gen.torture_read(4, 10)
workload_gen.finish_torture_read()
assert not workload_gen.io_error
def test_prefetch(nydus_anchor: NydusAnchor, nydus_scratch_image: RafsImage):
dist = Distributor(nydus_scratch_image.rootfs(), 4, 4)
dist.generate_tree()
dist.put_directories(20)
dist.put_multiple_files(40, Size(3, Unit.MB))
dist.put_hardlinks(6)
dist.put_multiple_chinese_files(random.randint(20, 28), Size(20, Unit.KB))
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
daemon = NydusDaemon(nydus_anchor, None, None, mode="singleton")
daemon.set_fscache().start()
time.sleep(1)
nc = NydusAPIClient(daemon.get_apisock())
fsid = str(uuid.uuid4())
blob_conf = (
BlobEntryConf(nydus_anchor)
.set_type("bootstrap")
.set_metadata_path(nydus_scratch_image.bootstrap_path)
.set_fsid(fsid)
.set_backend()
.set_prefetch()
)
nc.bind_fscache_blob(blob_conf)
erofs = Erofs()
erofs.mount(fsid=fsid, mountpoint=nydus_anchor.mountpoint)
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
workload_gen.setup_workload_generator()
workload_gen.torture_read(4, 10)
workload_gen.finish_torture_read()
assert workload_gen.verify_entire_fs()
assert not workload_gen.io_error


@ -1,95 +0,0 @@
import os
import warnings
import utils
from utils import Size, Unit
import pytest
from workload_gen import WorkloadGen
from nydus_anchor import NydusAnchor
from rafs import RafsConf, RafsImage, NydusDaemon, Compressor
@pytest.mark.skip(reason="Constantly failed for no reason.")
@pytest.mark.parametrize("compressor", [Compressor.NONE, Compressor.LZ4_BLOCK])
@pytest.mark.parametrize("backend", ["oss", "localfs"])
def test_blobcache(
nydus_anchor: NydusAnchor,
nydus_image: RafsImage,
rafs_conf: RafsConf,
compressor,
backend,
):
"""
Allocate a file within the local test working directory.
Loop-mount the file to get a small file system that is easy to fill up.
Change the blob cache location to the above test blobdir.
"""
blobdir = "/blobdir"
blob_backend = "blob_backend"
fd = os.open(blob_backend, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.posix_fallocate(fd, 0, 1024 * 1024 * 4)
os.close(fd)
utils.execute(["mkfs.ext4", "-F", blob_backend])
utils.execute(["mount", blob_backend, blobdir])
rafs_conf.enable_rafs_blobcache()
rafs_conf.set_rafs_backend(backend)
rafs_conf.dump_rafs_conf()
cache_file = os.listdir(blobdir)
assert len(cache_file) == 1
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
assert rafs.is_mounted()
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.source_dir)
workload_gen.setup_workload_generator()
workload_gen.torture_read(4, 15)
nydus_anchor.start_stats_checker()
workload_gen.finish_torture_read()
nydus_anchor.stop_stats_checker()
cache_file = os.listdir(blobdir)
assert len(cache_file) >= 2
if workload_gen.io_error:
warnings.warn(UserWarning("Rafs will return EIO if blobcache file is full"))
rafs.umount()
ret, _ = utils.execute(["umount", blobdir])
assert ret
os.unlink(blob_backend)
@pytest.mark.skip(reason="Constantly failed for no reason.")
def test_limited_mem(nydus_anchor, rafs_conf, nydus_image):
"""
description: Run nydusd in a memory limited environment.
- Use `ulimit` to limit virtual memory nydusd can use.
- Mount rafs
- Torture rafs
"""
rafs_conf.enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount(limited_mem=Size(3, Unit.GB))
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(8, 10)
nydus_anchor.start_stats_checker()
wg.finish_torture_read()
nydus_anchor.stop_stats_checker()
assert wg.io_error == False
assert nydus_anchor.check_nydusd_health()


@ -1,88 +0,0 @@
import os
import pytest
from rafs import NydusDaemon, RafsConf, RafsImage, Backend, Compressor
from nydus_anchor import NydusAnchor
from workload_gen import WorkloadGen
from distributor import Distributor
from utils import Size, Unit
import random
from nydusd_client import NydusAPIClient
import time
@pytest.mark.skip(reason="Constantly failed for no reason.")
@pytest.mark.parametrize("thread_cnt", [4])
@pytest.mark.parametrize("compressor", [Compressor.LZ4_BLOCK, Compressor.NONE])
@pytest.mark.parametrize("is_cache_compressed", [False])
@pytest.mark.parametrize(
"converter",
[
"framework/bin/nydus-image-1.3.0",
"framework/bin/nydus-image-1.5.0",
"framework/bin/nydus-image-1.6.3",
],
)
@pytest.mark.parametrize("items", [("enable_validation",), ()])
def test_prefetch_with_cache(
nydus_anchor,
nydus_scratch_image: RafsImage,
rafs_conf: RafsConf,
thread_cnt,
compressor,
is_cache_compressed,
converter,
items,
):
"""
title: Prefetch from various backend
description:
- Enable rafs backend blob cache, as it is disabled by default
pass_criteria:
- Rafs can be mounted.
- Rafs can be unmounted.
"""
dist = Distributor(nydus_scratch_image.rootfs(), 4, 4)
dist.generate_tree()
dist.put_directories(20)
dist.put_multiple_files(40, Size(3, Unit.MB))
dist.put_multiple_files(10, Size(5, Unit.MB))
dist.put_hardlinks(6)
dist.put_multiple_chinese_files(random.randint(20, 28), Size(20, Unit.KB))
nydus_scratch_image.set_backend(Backend.LOCALFS).create_image(
image_bin=converter,
compressor=compressor,
prefetch_policy="fs",
prefetch_files="/".encode(),
)
rafs_conf.enable_rafs_blobcache(
is_compressed=is_cache_compressed
).enable_fs_prefetch()
rafs_conf.set_rafs_backend(Backend.LOCALFS, image=nydus_scratch_image)
if len(items) > 0:
for i in items:
item = RafsConf.__dict__[i]
item(rafs_conf)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.thread_num(6).mount()
nc = NydusAPIClient(rafs.get_apisock())
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
time.sleep(0.5)
m = nc.get_blobcache_metrics()
assert m["prefetch_data_amount"] != 0
workload_gen.verify_entire_fs()
workload_gen.setup_workload_generator()
workload_gen.torture_read(thread_cnt, 6)
assert NydusAnchor.check_nydusd_health()
workload_gen.finish_torture_read()
assert not workload_gen.io_error


@ -1,442 +0,0 @@
import utils
from utils import logging_setup, Size, Unit
import pytest
from rafs import *
from workload_gen import WorkloadGen
from nydus_anchor import *
from distributor import Distributor
from verifier import XattrVerifier
from random import randint
import os
import tempfile
import time
from nydusd_client import NydusAPIClient
from whiteout import WhiteoutSpec, Whiteout
logging_setup()
def test_verify_layers_images(nydus_anchor: NydusAnchor):
"""
title: Verify if new image on top of parent image is properly built
description: Use debugfs.rafs tool to inspect if new image is correct.
No need to mount rafs in this case.
"""
pass
def test_basic_read(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
nydus_image: RafsImage,
nydus_parent_image: RafsImage,
):
"""
title: Build an image from parent image.
description: Mount rafs to check if can act read correctly.
"""
nydus_parent_image.set_backend(Backend.BACKEND_PROXY).create_image()
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image(
parent_image=nydus_parent_image
)
nydus_anchor.mount_overlayfs([nydus_image.rootfs(), nydus_parent_image.rootfs()])
rafs_conf.enable_rafs_blobcache().set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
wg.setup_workload_generator()
wg.io_read(5)
assert wg.verify_entire_fs()
assert wg.io_error == False
def test_read_stress(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
nydus_image: RafsImage,
nydus_parent_image: RafsImage,
):
nydus_parent_image.set_backend(Backend.BACKEND_PROXY).create_image()
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image(
parent_image=nydus_parent_image
)
nydus_anchor.mount_overlayfs([nydus_image.rootfs(), nydus_parent_image.rootfs()])
rafs_conf.enable_rafs_blobcache().set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.thread_num(4).mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
wg.setup_workload_generator()
wg.torture_read(8, 10)
wg.finish_torture_read()
assert wg.io_error == False
def test_read_cache(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
nydus_image: RafsImage,
nydus_parent_image: RafsImage,
):
nydus_parent_image.set_backend(Backend.BACKEND_PROXY).create_image()
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image(
parent_image=nydus_parent_image
)
nydus_anchor.mount_overlayfs([nydus_image.rootfs(), nydus_parent_image.rootfs()])
rafs_conf.enable_rafs_blobcache().set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
wg.setup_workload_generator()
wg.torture_read(12, 10)
wg.finish_torture_read()
assert wg.verify_entire_fs()
@pytest.mark.parametrize("thread_cnt", [5])
@pytest.mark.parametrize("io_duration", [5])
def test_blobcache(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
nydus_image: RafsImage,
nydus_scratch_parent_image: RafsImage,
thread_cnt,
io_duration,
):
dist_parent = Distributor(nydus_scratch_parent_image.rootfs(), 6, 4)
dist_parent.generate_tree()
dist_parent.put_multiple_files(20, Size(4, Unit.KB))
hint_files_parent = [os.path.join("/", p) for p in dist_parent.files[-20:]]
hint_files_parent = "\n".join(hint_files_parent[-1:])
nydus_scratch_parent_image.set_backend(Backend.BACKEND_PROXY).create_image(
chunk_size=Size(64, Unit.KB).B
)
# shutil.rmtree(nydus_scratch_parent_image.rootfs())
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image(
prefetch_policy="fs",
prefetch_files=hint_files_parent.encode(),
parent_image=nydus_scratch_parent_image,
chunk_size=Size(64, Unit.KB).B,
)
nydus_anchor.mount_overlayfs(
[nydus_image.rootfs(), nydus_scratch_parent_image.rootfs()]
)
rafs_conf.enable_rafs_blobcache().set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.enable_fs_prefetch()
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.thread_num(4).mount()
nc = NydusAPIClient(rafs.get_apisock())
time.sleep(0.5)
m = nc.get_blobcache_metrics()
# TODO: Open this check when prefetch is fixed.
assert m["prefetch_data_amount"] != 0
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
wg.setup_workload_generator()
wg.torture_read(thread_cnt, io_duration)
wg.finish_torture_read()
@pytest.mark.parametrize("backend", [Backend.BACKEND_PROXY])
def test_layered_rebuild(
nydus_anchor,
nydus_scratch_image: RafsImage,
nydus_scratch_parent_image: RafsImage,
rafs_conf: RafsConf,
backend,
):
"""
title: Layered image rebuild
description:
- Parent and upper have files whose contents are exactly the same.
- Use files stats to check if file is overlayed.
- Files with the same name but different modes.
- Files with xattr in parent should be shadowed.
pass_criteria:
- Mount successfully.
- No data corruption.
"""
rafs_conf.set_rafs_backend(backend)
rafs_conf.enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
rafs_conf.enable_fs_prefetch()
parent_rootfs = nydus_scratch_parent_image.rootfs()
upper_rootfs = nydus_scratch_image.rootfs()
nydus_anchor.mount_overlayfs(
[nydus_scratch_image.rootfs(), nydus_scratch_parent_image.rootfs()]
)
shared_files = []
dist_parent = Distributor(parent_rootfs, 6, 4)
dist_parent.generate_tree()
shared_files.extend(dist_parent.put_multiple_files(100, Size(64, Unit.KB)))
shared_files.extend(dist_parent.put_multiple_files(20, Size(3, Unit.MB)))
shared_files.extend(dist_parent.put_symlinks(30))
shared_files.extend(dist_parent.put_hardlinks(30))
xattr_verifier = XattrVerifier(parent_rootfs, dist_parent)
Whiteout.mirror_files(shared_files, parent_rootfs, upper_rootfs)
xattr_verifier.scratch(parent_rootfs)
nydus_scratch_parent_image.set_backend(backend).create_image(
chunk_size=Size(64, Unit.KB).B
)
nydus_scratch_image.set_backend(backend).create_image(
parent_image=nydus_scratch_parent_image,
chunk_size=Size(64, Unit.KB).B,
prefetch_policy="fs",
prefetch_files="/".encode(),
)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
workload_gen.setup_workload_generator()
xattr_verifier.verify(nydus_anchor.mountpoint)
assert workload_gen.verify_entire_fs()
workload_gen.torture_read(5, 4)
workload_gen.finish_torture_read()
def test_layered_localfs(
nydus_anchor, nydus_scratch_image: RafsImage, nydus_scratch_parent_image: RafsImage
):
nydus_scratch_parent_image.set_backend(Backend.LOCALFS, blob_dir=()).create_image()
nydus_scratch_image.set_backend(Backend.LOCALFS, blob_dir=()).create_image(
parent_image=nydus_scratch_parent_image
)
nydus_anchor.mount_overlayfs(
[nydus_scratch_image.rootfs(), nydus_scratch_parent_image.rootfs()]
)
rafs_conf = RafsConf(nydus_anchor).set_rafs_backend(Backend.LOCALFS)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
workload_gen.setup_workload_generator()
assert workload_gen.verify_entire_fs()
workload_gen.torture_read(5, 4)
workload_gen.finish_torture_read()
@pytest.mark.parametrize("whiteout_spec", [WhiteoutSpec.OCI, WhiteoutSpec.OVERLAY])
def test_whiteout(nydus_anchor, rafs_conf, whiteout_spec):
_td_1 = tempfile.TemporaryDirectory(dir=nydus_anchor.workspace)
_td_2 = tempfile.TemporaryDirectory(dir=nydus_anchor.workspace)
parent_rootfs = _td_1.name
upper_rootfs = _td_2.name
whiteout = Whiteout(whiteout_spec)
parent_image = RafsImage(nydus_anchor, parent_rootfs, "parent_bs", "parent_blob")
dist_parent = Distributor(parent_rootfs, 6, 4)
dist_parent.generate_tree()
dist_parent.put_directories(20)
dist_parent.put_multiple_files(50, Size(32, Unit.KB))
dist_parent.put_symlinks(30)
dist_parent.put_hardlinks(20)
to_be_removed = dist_parent.put_single_file(Size(7, Unit.KB))
layered_image = RafsImage(nydus_anchor, upper_rootfs, "bs", "blob")
dist_upper = Distributor(upper_rootfs, 3, 5)
dist_upper.generate_tree()
dist_upper.put_multiple_files(27, Size(3, Unit.MB))
dist_upper.put_symlinks(5)
# `to_be_removed` should look like `a/b/c`
whiteout.whiteout_one_file(upper_rootfs, to_be_removed)
# Put a whiteout file that does not hide any file from lower layer
whiteout.whiteout_one_file(upper_rootfs, "i/am/troublemaker/foo")
dir_to_be_whiteout_opaque = dist_parent.dirs[randint(0, len(dist_parent.dirs) - 1)]
# `dir_to_be_removed` should look like `a/b/c`
whiteout.whiteout_opaque_directory(upper_rootfs, dir_to_be_whiteout_opaque)
dist_parent.put_directories(1)
dir_to_be_removed = dist_parent.dirs[-1]
whiteout.whiteout_one_dir(upper_rootfs, dir_to_be_removed)
parent_image.set_backend(Backend.BACKEND_PROXY).create_image()
layered_image.set_backend(Backend.BACKEND_PROXY).whiteout_spec(
whiteout_spec
).create_image(parent_image=parent_image)
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
nydus_anchor.mount_overlayfs([layered_image.rootfs(), parent_image.rootfs()])
rafs = NydusDaemon(nydus_anchor, layered_image, rafs_conf)
rafs.mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
assert not os.path.exists(os.path.join(nydus_anchor.mountpoint, to_be_removed))
assert not os.path.exists(os.path.join(nydus_anchor.mountpoint, dir_to_be_removed))
files_under_opaque_dir = os.listdir(
os.path.join(nydus_anchor.mountpoint, dir_to_be_whiteout_opaque)
)
# If the opaque dir has files, only files from the lower layer are hidden.
if len(files_under_opaque_dir) != 0:
upper_files = os.listdir(os.path.join(upper_rootfs, dir_to_be_whiteout_opaque))
for f in files_under_opaque_dir:
assert f in upper_files
assert wg.verify_entire_fs()
def test_prefetch_with_cache(
nydus_anchor: NydusAnchor,
nydus_scratch_image: RafsImage,
nydus_scratch_parent_image: RafsImage,
rafs_conf: RafsConf,
):
parent_rootfs = nydus_scratch_parent_image.rootfs()
upper_rootfs = nydus_scratch_image.rootfs()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.enable_rafs_blobcache()
rafs_conf.enable_fs_prefetch(threads_count=4, merging_size=512 * 1024)
rafs_conf.dump_rafs_conf()
dist_parent = Distributor(parent_rootfs, 6, 4)
dist_parent.generate_tree()
dist_parent.put_directories(20)
dist_parent.put_multiple_files(100, Size(64, Unit.KB))
dist_parent.put_symlinks(30)
dist_parent.put_hardlinks(20)
dist_upper = Distributor(upper_rootfs, 3, 8)
dist_upper.generate_tree()
dist_upper.put_multiple_files(27, Size(3, Unit.MB))
dist_upper.put_symlinks(5)
# hint_files_parent = dist_parent.put_multiple_files(1000, Size(8, Unit.KB))
# hint_files_parent = [os.path.join(parent_rootfs, p) for p in hint_files_parent]
# hint_files_parent = "\n".join(hint_files_parent)
nydus_scratch_parent_image.set_backend(Backend.BACKEND_PROXY).create_image(
prefetch_policy="fs", prefetch_files="/".encode()
)
hint_files = dist_upper.put_multiple_files(1000, Size(8, Unit.KB))
hint_files.extend(dist_upper.put_multiple_empty_files(200))
hint_files = [os.path.join("/", p) for p in hint_files]
hint_files = "\n".join(hint_files)
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image(
parent_image=nydus_scratch_parent_image,
prefetch_policy="fs",
prefetch_files=hint_files.encode(),
)
nydus_anchor.mount_overlayfs(
[nydus_scratch_image.rootfs(), nydus_scratch_parent_image.rootfs()]
)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.thread_num(5).mount()
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
workload_gen.setup_workload_generator()
assert workload_gen.verify_entire_fs()
workload_gen.torture_read(5, 20)
workload_gen.finish_torture_read()
def test_different_partitions(nydus_anchor: NydusAnchor, rafs_conf):
loop_file_1 = tempfile.NamedTemporaryFile(suffix="loop")
loop_file_2 = tempfile.NamedTemporaryFile(suffix="loop")
loop_mnt_1 = tempfile.TemporaryDirectory(dir=nydus_anchor.workspace)
loop_mnt_2 = tempfile.TemporaryDirectory(dir=nydus_anchor.workspace)
os.posix_fallocate(loop_file_1.fileno(), 0, Size(400, Unit.MB).B)
os.posix_fallocate(loop_file_2.fileno(), 0, Size(400, Unit.MB).B)
utils.execute(["mkfs.ext4", "-F", loop_file_1.name])
utils.execute(["mkfs.ext4", "-F", loop_file_2.name])
utils.execute(["mount", loop_file_1.name, loop_mnt_1.name])
utils.execute(["mount", loop_file_2.name, loop_mnt_2.name])
# TODO: Put more special files into
dist1 = Distributor(loop_mnt_1.name, 5, 7)
dist1.generate_tree()
dist1.put_multiple_files(100, Size(12, Unit.KB))
dist2 = Distributor(loop_mnt_2.name, 5, 7)
dist2.generate_tree()
dist2.put_symlinks(20)
dist2.put_multiple_files(50, Size(12, Unit.KB))
Whiteout.mirror_files(dist2.files[:20], loop_mnt_2.name, loop_mnt_1.name)
parent_image = (
RafsImage(nydus_anchor, loop_mnt_1.name)
.set_backend(Backend.BACKEND_PROXY)
.create_image()
)
image = RafsImage(nydus_anchor, loop_mnt_2.name)
image.set_backend(Backend.BACKEND_PROXY).create_image(parent_image=parent_image)
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs = NydusDaemon(nydus_anchor, image, rafs_conf)
rafs.mount()
nydus_anchor.mount_overlayfs([image.rootfs(), parent_image.rootfs()])
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
wg.setup_workload_generator()
wg.torture_read(5, 5)
wg.finish_torture_read()
utils.execute(["umount", loop_mnt_1.name])
utils.execute(["umount", loop_mnt_2.name])
nydus_anchor.umount_overlayfs()


@ -1,907 +0,0 @@
import os
import tempfile
import time
import random
import signal
import stat
import shutil
from fallocate import fallocate, FALLOC_FL_PUNCH_HOLE, FALLOC_FL_KEEP_SIZE
import pytest
import utils
from rafs import NydusDaemon, RafsConf, RafsImage, Backend, Compressor
from nydus_anchor import NydusAnchor
from workload_gen import WorkloadGen
from distributor import Distributor
from utils import logging_setup, Size, Unit
import verifier
from nydusd_client import NydusAPIClient
from whiteout import Whiteout
import platform
ANCHOR = NydusAnchor()
FS_VERSION = ANCHOR.fs_version
logging_setup()
# Specify the build target directory of nydusd and its tools.
# Let the test script fill in the test environment variables.
# TODO: Test whether nydusd is compatible with earlier image versions;
# we want to test every kind of image.
def test_build_image(nydus_anchor, nydus_scratch_image: RafsImage, rafs_conf: RafsConf):
"""
title: Build nydus image
description: Build nydus image from rootfs generating proper bootstrap and
blob
pass_criteria:
- Image can be successfully built and mounted
- Rafs can be unmounted after a small amount of read IO and attr
operations
- Try letting the image builder upload the blob itself.
"""
dist = Distributor(nydus_scratch_image.rootfs(), 80, 1)
dist.generate_tree()
dist.put_directories(100)
dist.put_hardlinks(90)
dist.put_symlinks(200)
dist.put_multiple_files(random.randint(20, 28), Size(10, Unit.MB))
dist.put_multiple_chinese_files(random.randint(20, 28), Size(20, Unit.KB))
Whiteout().whiteout_one_file(nydus_scratch_image.rootfs(), "i/am/troublemaker/foo")
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs_conf.set_rafs_backend(backend_type=Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
rafs.mount()
assert wg.verify_entire_fs()
rafs.umount()
@pytest.mark.parametrize("io_duration", [5])
@pytest.mark.parametrize("fs_version", [FS_VERSION])
@pytest.mark.parametrize("backend", [Backend.BACKEND_PROXY, Backend.LOCALFS])
def test_basic(
nydus_anchor,
nydus_image: RafsImage,
io_duration,
backend,
rafs_conf: RafsConf,
fs_version,
):
"""
title: Basic functionality test
description: Mount rafs with different mount options
pass_criteria:
- Rafs can be mounted.
- Rafs can be unmounted.
"""
nydus_image.set_backend(backend, blob_dir=()).create_image(fs_version=fs_version)
rafs_conf.set_rafs_backend(backend)
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
assert rafs.is_mounted()
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
workload_gen.setup_workload_generator()
workload_gen.io_read(io_duration)
nydus_anchor.check_nydusd_health()
assert workload_gen.io_error == False
assert workload_gen.verify_entire_fs()
assert rafs.is_mounted()
rafs.umount()
def test_prefetch_without_cache(
nydus_anchor: NydusAnchor, nydus_scratch_image: RafsImage, rafs_conf: RafsConf
):
"""Files prefetch test
1. relative hinted prefetch files
2. absolute hinted prefetch files
3. source rootfs root dir.
"""
rafs_conf.enable_fs_prefetch().set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
dist = Distributor(nydus_scratch_image.rootfs(), 4, 4)
dist.generate_tree()
dist.put_directories(20)
dist.put_multiple_files(40, Size(8, Unit.KB))
dist.put_hardlinks(6)
dist.put_multiple_chinese_files(random.randint(20, 28), Size(20, Unit.KB))
hint_files = ["/"]
hint_files.extend(dist.files)
hint_files.extend(dist.dirs)
hint_files.extend(dist.symlinks)
hint_files.extend(dist.hardlinks)
hint_files = [os.path.join("/", p) for p in hint_files]
hint_files = "\n".join(hint_files)
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image(
prefetch_policy="fs", prefetch_files=hint_files.encode()
)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
assert rafs.is_mounted()
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
# TODO: Run several parallel read workers against the mountpoint
workload_gen.setup_workload_generator()
workload_gen.torture_read(8, 5)
workload_gen.finish_torture_read()
assert NydusAnchor.check_nydusd_health()
assert not workload_gen.io_error
assert rafs.is_mounted()
rafs.umount()
@pytest.mark.parametrize("thread_cnt", [3])
@pytest.mark.parametrize("compressor", [Compressor.LZ4_BLOCK, Compressor.NONE])
@pytest.mark.parametrize("is_cache_compressed", [False])
def test_prefetch_with_cache(
nydus_anchor: NydusAnchor,
nydus_scratch_image: RafsImage,
rafs_conf: RafsConf,
thread_cnt,
compressor,
is_cache_compressed,
):
"""
title: Prefetch from various backend
description:
- Enable rafs backend blob cache, as it is disabled by default
pass_criteria:
- Rafs can be mounted.
- Rafs can be unmounted.
"""
rafs_conf.enable_rafs_blobcache(is_compressed=is_cache_compressed)
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY, prefix="object_prefix/")
rafs_conf.enable_fs_prefetch(threads_count=4, bandwidth_rate=Size(40, Unit.MB).B)
rafs_conf.dump_rafs_conf()
dist = Distributor(nydus_scratch_image.rootfs(), 4, 4)
dist.generate_tree()
dist.put_directories(20)
dist.put_multiple_files(40, Size(3, Unit.MB))
dist.put_hardlinks(6)
dist.put_multiple_chinese_files(random.randint(20, 28), Size(20, Unit.KB))
nydus_scratch_image.set_backend(
Backend.BACKEND_PROXY, prefix="object_prefix/"
).create_image(
compressor=compressor,
prefetch_policy="fs",
prefetch_files="/".encode(),
)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.thread_num(4).mount()
nc = NydusAPIClient(rafs.get_apisock())
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
time.sleep(0.5)
m = nc.get_blobcache_metrics()
assert m["prefetch_data_amount"] != 0
workload_gen.setup_workload_generator()
workload_gen.torture_read(thread_cnt, 10)
assert NydusAnchor.check_nydusd_health()
workload_gen.finish_torture_read()
assert not workload_gen.io_error
# In this way, we can check whether nydusd has crashed.
assert rafs.is_mounted()
rafs.umount()
@pytest.mark.parametrize(
"compressor", [Compressor.NONE, Compressor.LZ4_BLOCK, Compressor.ZSTD]
)
@pytest.mark.parametrize("amplified_size", [Size(1, Unit.MB).B, Size(32, Unit.KB).B]) # Fix failed test test_large_file in master/v5 branch temporarily
def test_large_file(nydus_anchor, compressor, amplified_size):
_tmp_dir = tempfile.TemporaryDirectory(dir=nydus_anchor.workspace)
large_file_dir = _tmp_dir.name
dist = Distributor(large_file_dir, 3, 3)
dist.generate_tree()
dist.put_single_file(Size(20, Unit.MB))
dist.put_single_file(Size(10891, Unit.KB))
dist.put_multiple_files(10, Size(2, Unit.MB))
dist.put_multiple_files(10, Size(4, Unit.MB))
image = RafsImage(nydus_anchor, large_file_dir, "bs_large", "blob_large")
image.set_backend(Backend.BACKEND_PROXY).create_image(compressor=compressor)
rafs_conf = (
RafsConf(nydus_anchor, image)
.enable_rafs_blobcache()
.amplify_io(amplified_size)
.set_rafs_backend(Backend.BACKEND_PROXY, image=image)
)
rafs = NydusDaemon(nydus_anchor, image, rafs_conf)
rafs.thread_num(4).mount()
workload_gen = WorkloadGen(nydus_anchor.mountpoint, large_file_dir)
workload_gen.setup_workload_generator()
workload_gen.torture_read(8, 5)
workload_gen.finish_torture_read()
assert not workload_gen.io_error
rafs.umount()
image.clean_up()
def test_hardlink(nydus_anchor: NydusAnchor, nydus_scratch_image, rafs_conf: RafsConf):
dist = Distributor(nydus_scratch_image.rootfs(), 8, 6)
dist.generate_tree()
hardlink_verifier = verifier.HardlinkVerifier(nydus_scratch_image.rootfs(), dist)
hardlink_verifier.scratch()
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
hardlink_verifier.verify(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
wg.setup_workload_generator()
wg.io_read(3)
nydus_anchor.check_nydusd_health()
assert not wg.io_error
def test_meta(
nydus_anchor: NydusAnchor, rafs_conf: RafsConf, nydus_scratch_image: RafsImage
):
anchor = nydus_anchor
dist = Distributor(nydus_scratch_image.rootfs(), 8, 5)
dist.generate_tree()
xattr_verifier = verifier.XattrVerifier(anchor.mountpoint, dist)
xattr_verifier.scratch(nydus_scratch_image.rootfs())
symlink_verifier = verifier.SymlinkVerifier(anchor.mountpoint, dist)
symlink_verifier.scratch()
# Do some meta operations on the scratch dir before creating the rafs image file.
# Use the scratch dir as the image source dir since we just prepared test metadata in it.
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY).enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(anchor, nydus_scratch_image, rafs_conf)
rafs.thread_num(4).mount()
assert rafs.is_mounted()
xattr_verifier.verify(anchor.mountpoint)
symlink_verifier.verify(anchor.mountpoint, nydus_scratch_image.rootfs())
workload_gen = WorkloadGen(anchor.mountpoint, nydus_scratch_image.rootfs())
workload_gen.setup_workload_generator()
workload_gen.torture_read(10, 3)
workload_gen.finish_torture_read()
assert not workload_gen.io_error
assert anchor.check_nydusd_health()
@pytest.mark.parametrize("backend", [Backend.BACKEND_PROXY, Backend.LOCALFS])
def test_file_tail(nydus_anchor: NydusAnchor, nydus_scratch_image: RafsImage, backend):
"""
description: Read data from file tail
- Create several files of different sizes
- Punch a hole in each file; some of them should end with a hole at the tail
- Create rafs image from test scratch directory.
- Mount rafs
- Do some tests.
"""
file_size_list = [
Size(1, Unit.KB),
Size(6, Unit.KB),
Size(2, Unit.MB),
Size(10034, Unit.KB),
]
file_list = []
dist = Distributor(nydus_anchor.scratch_dir, 2, 2)
dist.generate_tree()
for f_s in file_size_list:
f_name = dist.put_single_file(f_s)
file_list.append(f_name)
# Punch hole
with utils.pushd(nydus_anchor.scratch_dir):
with open(f_name, "a+b") as f:
fallocate(
f,
f_s.B - 500,
1000,
mode=FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
)
nydus_scratch_image.set_backend(backend).create_image()
rafs_conf = RafsConf(nydus_anchor, nydus_scratch_image)
rafs_conf.set_rafs_backend(backend)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
with utils.pushd(nydus_anchor.mountpoint):
for name in file_list:
with open(name, "rb") as f:
size = os.stat(name).st_size
f.seek(size - 300)
buf = f.read(1000)
assert len(buf) == 300
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
for f in file_list:
wg.verify_single_file(os.path.join(nydus_anchor.mountpoint, f))
assert not wg.io_error
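# A minimal sketch (not part of the original suite) of what the punch-hole step
# above relies on: FALLOC_FL_PUNCH_HOLE deallocates the given byte range while
# FALLOC_FL_KEEP_SIZE keeps st_size unchanged, so reading the hole yields zeros
# (assuming offset + length stays within the file).
def _illustrate_punch_hole(path, offset, length):
    with open(path, "a+b") as f:
        fallocate(f, offset, length, mode=FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE)
    with open(path, "rb") as f:
        f.seek(offset)
        assert f.read(length) == b"\x00" * length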
def test_deep_directory(
nydus_anchor, rafs_conf: RafsConf, nydus_scratch_image: RafsImage
):
dist = Distributor(nydus_anchor.scratch_dir, 100, 1)
dist.generate_tree()
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(8, 5)
wg.finish_torture_read()
assert wg.verify_entire_fs()
def test_various_file_types(
nydus_anchor: NydusAnchor, rafs_conf: RafsConf, nydus_scratch_image: RafsImage
):
"""
description: Put various types of files into rootfs.
- Regular, dir, char, block, fifo, sock, symlink
"""
with utils.pushd(nydus_scratch_image.rootfs()):
fd = os.open("regular", os.O_CREAT | os.O_RDWR)
os.close(fd)
os.mkfifo("fifo")
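# Note: "^" is bitwise XOR in Python (2 ^ 64 == 66), so the random device
# numbers below are small; that is still sufficient for creating the nodes.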
os.mknod("blk", 0o600 | stat.S_IFBLK, device=random.randint(0, 2 ^ 64))
os.mknod("char", 0o600 | stat.S_IFCHR, device=random.randint(0, 2 ^ 64))
os.mknod("sock", 0o600 | stat.S_IFSOCK, device=random.randint(0, 2 ^ 64))
os.symlink("regular", "symlink")
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
with utils.pushd(nydus_anchor.mountpoint):
assert os.path.exists("fifo")
assert os.path.exists("blk")
assert os.path.exists("char")
assert os.path.exists("sock")
assert os.path.exists("symlink")
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
wg.setup_workload_generator()
assert wg.verify_entire_fs()
wg.torture_read(2, 4)
wg.finish_torture_read()
def test_passthough_fs(nydus_anchor, nydus_image, rafs_conf):
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs = NydusDaemon(nydus_anchor, None, rafs_conf, with_defaults=False)
rafs.shared_dir(nydus_image.rootfs()).set_mountpoint(
nydus_anchor.mountpoint
).apisock("api_sock").mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(8, 5)
wg.finish_torture_read()
assert wg.verify_entire_fs()
def test_pseudo_fs(nydus_anchor, nydus_image: RafsImage, rafs_conf: RafsConf):
_rootfs1 = tempfile.TemporaryDirectory(dir=nydus_anchor.workspace)
rootfs1 = _rootfs1.name
dist1 = Distributor(rootfs1, 3, 3)
dist1.generate_tree()
dist1.put_single_file(Size(10891, Unit.KB))
dist1.put_multiple_files(10, Size(2, Unit.MB))
dist1.put_multiple_files(10, Size(4, Unit.MB))
image1 = RafsImage(nydus_anchor, rootfs1, "bootstrap1", "blob1")
image1.set_backend(Backend.BACKEND_PROXY).create_image()
_rootfs2 = tempfile.TemporaryDirectory(dir=nydus_anchor.workspace)
rootfs2 = _rootfs2.name
dist2 = Distributor(rootfs2, 4, 2)
dist2.generate_tree()
dist2.put_single_file(Size(1, Unit.MB))
dist2.put_single_file(Size(400, Unit.KB))
dist2.put_multiple_files(2, Size(2, Unit.MB))
dist2.put_multiple_files(10, Size(4, Unit.MB))
image2 = RafsImage(nydus_anchor, rootfs2, "bootstrap2", "blob2")
image2.set_backend(Backend.BACKEND_PROXY).create_image()
nydus_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs = NydusDaemon(nydus_anchor, None, rafs_conf)
rafs.mount()
time.sleep(1)
nc = NydusAPIClient(rafs.get_apisock())
try:
shutil.rmtree("pseudo_fs_scratch")
except FileNotFoundError:
pass
scratch_rootfs = shutil.copytree(
nydus_image.rootfs(), "pseudo_fs_scratch", symlinks=True
)
dist = Distributor(scratch_rootfs, 5, 5)
dist.generate_tree()
dist.put_multiple_files(20, Size(8, Unit.KB))
###
suffix = "1"
conf = RafsConf(nydus_anchor)
conf.enable_rafs_blobcache()
conf.enable_fs_prefetch()
conf.set_rafs_backend(Backend.BACKEND_PROXY)
conf.dump_rafs_conf()
nc.pseudo_fs_mount(
nydus_image.bootstrap_path, f"/pseudo{suffix}", conf.path(), None
)
###
suffix = "2"
conf = RafsConf(nydus_anchor)
conf.enable_rafs_blobcache()
conf.set_rafs_backend(Backend.BACKEND_PROXY)
conf.dump_rafs_conf()
nc.pseudo_fs_mount(image1.bootstrap_path, f"/pseudo{suffix}", conf.path(), None)
###
suffix = "3"
conf = RafsConf(nydus_anchor)
conf.enable_rafs_blobcache()
conf.set_rafs_backend(Backend.BACKEND_PROXY)
conf.dump_rafs_conf()
nc.pseudo_fs_mount(image2.bootstrap_path, f"/pseudo{suffix}", conf.path(), None)
wg1 = WorkloadGen(
os.path.join(nydus_anchor.mountpoint, "pseudo1"), nydus_image.rootfs()
)
rootdir1 = os.path.join(nydus_anchor.mountpoint, "pseudo2")
rootdir2 = os.path.join(nydus_anchor.mountpoint, "pseudo3")
wg2 = WorkloadGen(rootdir1, rootfs1)
wg3 = WorkloadGen(rootdir2, rootfs2)
wg1.setup_workload_generator()
wg2.setup_workload_generator()
wg3.setup_workload_generator()
wg1.torture_read(4, 8)
wg2.torture_read(4, 8)
wg3.torture_read(4, 8)
wg1.finish_torture_read()
wg2.finish_torture_read()
wg3.finish_torture_read()
# TODO: Temporarily disable the verification as it is hard to select the `verify dir`
assert wg1.verify_entire_fs()
assert wg2.verify_entire_fs()
assert wg3.verify_entire_fs()
st1 = os.stat(rootdir1)
st2 = os.stat(rootdir2)
assert st1.st_ino != st2.st_ino
nc.umount_rafs("/pseudo1")
nc.umount_rafs("/pseudo2")
nc.umount_rafs("/pseudo3")
def test_shared_blobcache(nydus_anchor: NydusAnchor, nydus_image, rafs_conf: RafsConf, tmp_path):
"""
description:
Start more than one nydusd, let them share the same blobcache.
"""
nydus_image.set_backend(Backend.LOCALFS, blob_dir=()).create_image()
rafs_conf.enable_rafs_blobcache().set_rafs_backend(Backend.LOCALFS)
rafs_conf.dump_rafs_conf()
tmp_file = tmp_path / "prefetch.txt"
tmp_file.write_text("/")
def make_rafs(mountpoint):
rafs = (
NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
.apisock(tempfile.NamedTemporaryFile().name)
.prefetch_files(os.path.abspath(tmp_file))
.set_mountpoint(mountpoint)
)
return rafs
cases = []
count = 10
for num in range(0, count):
mountpoint = tempfile.TemporaryDirectory(
dir=nydus_anchor.workspace, suffix="root_" + str(num)
)
rafs = make_rafs(mountpoint.name)
rafs.mount(dump_config=False)
workload_gen = WorkloadGen(mountpoint.name, nydus_image.rootfs())
workload_gen.setup_workload_generator()
cases.append((rafs, workload_gen, mountpoint))
for case in cases:
utils.clean_pagecache()
case[1].torture_read(4, 5)
for case in cases:
case[1].finish_torture_read()
# Ensure that blob & bitmap files are included in blobcache dir.
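# (Assumption: the entries are the cached blob data file and its chunk bitmap,
# plus an extra blob meta file for RAFS v6, hence 2 vs. 3 entries below.)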
if int(nydus_anchor.fs_version) == 5:
assert len(os.listdir(nydus_anchor.blobcache_dir)) == 2
elif int(nydus_anchor.fs_version) == 6:
assert len(os.listdir(nydus_anchor.blobcache_dir)) == 3
for case in cases:
case[0].umount()
# @pytest.mark.skip(reason="ECS can't pass this case!")
@pytest.mark.parametrize("sig", [signal.SIGTERM, signal.SIGINT])
def test_signal_handling(
nydus_anchor: NydusAnchor, nydus_scratch_image: RafsImage, rafs_conf: RafsConf, sig
):
dist = Distributor(nydus_scratch_image.rootfs(), 2, 2)
dist.generate_tree()
dist.put_multiple_files(5, Size(2, Unit.KB))
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
victim = os.path.join(nydus_anchor.mountpoint, dist.files[-1])
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
fd = os.open(victim, os.O_RDONLY)
assert rafs.is_mounted()
os.kill(rafs.p.pid, sig)
time.sleep(3)
assert not os.path.ismount(nydus_anchor.mountpoint)
rafs.p.wait()
@pytest.mark.parametrize("prefetch_policy", ["fs"])
def test_certain_files_prefetch(
nydus_anchor: NydusAnchor, nydus_scratch_image: RafsImage, prefetch_policy
):
"""
description:
For rafs, there are two types of prefetching.
1. Prefetch files from fs-layer, which means each file is prefetched one by one.
2. Prefetch directly from the backend/blob layer, which means a range will be fetched from the blob.
"""
dist = Distributor(nydus_scratch_image.rootfs(), 8, 2)
dist.generate_tree()
dist.put_directories(20)
dist.put_multiple_files(100, Size(64, Unit.KB))
dist.put_symlinks(8)
dist.put_hardlinks(8)
dist.put_multiple_files(40, Size(64, Unit.KB))
hint_files = dist.files[-20:]
hint_files.extend(dist.symlinks[-6:])
hint_files.append(list(dist.hardlinks.values())[1][0])
hint_files = [os.path.join("/", p) for p in hint_files]
hint_files = "\n".join(hint_files)
nydus_scratch_image.set_backend(Backend.LOCALFS).create_image(
prefetch_policy=prefetch_policy,
prefetch_files=hint_files.encode(),
)
rafs_conf = RafsConf(nydus_anchor, nydus_scratch_image)
rafs_conf.set_rafs_backend(Backend.LOCALFS)
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.thread_num(7).mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
# TODO: Run several parallel read workers against the mountpoint
wg.setup_workload_generator()
wg.torture_read(5, 5)
wg.finish_torture_read()
assert not wg.io_error
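# For reference, a small sketch (an illustration, mirroring the hint construction
# above): the "fs" prefetch policy consumes a plain-text hint list with one
# absolute path per line, which the test encodes and passes via `prefetch_files`.
def _example_prefetch_hint(paths):
    return "\n".join(os.path.join("/", p) for p in paths).encode()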
@pytest.mark.parametrize("compressor", [Compressor.NONE, Compressor.LZ4_BLOCK])
def test_digest_validate(
nydus_anchor, rafs_conf: RafsConf, nydus_image: RafsImage, compressor
):
rafs_conf.set_rafs_backend(Backend.LOCALFS)
rafs_conf.enable_validation()
rafs_conf.enable_rafs_blobcache()
nydus_image.set_backend(Backend.LOCALFS, blob_dir=()).create_image(
compressor=compressor
)
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(5, 5, verify=True)
wg.finish_torture_read()
@pytest.mark.parametrize("backend", [Backend.BACKEND_PROXY])
def test_specified_prefetch(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
nydus_scratch_image: RafsImage,
backend,
tmp_path
):
"""
description:
Nydusd can take a list of files and directories as input when it is started.
Then it can prefetch files from the backend according to the list.
"""
rafs_conf.set_rafs_backend(backend)
rafs_conf.enable_fs_prefetch(prefetch_all=False)
rafs_conf.enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
dist = Distributor(nydus_scratch_image.rootfs(), 8, 2)
dist.generate_tree()
dirs = dist.put_directories(20)
dist.put_multiple_files(100, Size(64, Unit.KB))
dist.put_symlinks(30)
dist.put_hardlinks(20)
dist.put_multiple_files(40, Size(64, Unit.KB))
dist.put_single_file(Size(3, Unit.MB), name="test")
nydus_scratch_image.set_backend(backend).create_image()
prefetching_files = dirs
prefetching_files += dist.files[:10]
prefetching_files += dist.dirs[:5]
prefetching_files += dist.symlinks[:10]
prefetching_files.append(list(dist.hardlinks.values())[1][0])
# Fuzz
prefetching_files.append("/a/b/c/d")
prefetching_files.append(os.path.join("/", "f/g/h/"))
tmp_file = tmp_path / "prefetch.txt"
with open(tmp_file, "w") as f:
f.writelines(os.path.join("/", d) + '\n' for d in prefetching_files)
print(os.path.abspath(tmp_file))
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.prefetch_files(os.path.abspath(tmp_file)).mount()
nc = NydusAPIClient(rafs.get_apisock())
# blobcache_metrics = nc.get_blobcache_metrics()
# Storage prefetch workers do not stop anymore
# while blobcache_metrics["prefetch_workers"] != 0:
# time.sleep(0.5)
# blobcache_metrics = nc.get_blobcache_metrics()
time.sleep(1)
begin = nc.get_backend_metrics()["read_amount_total"]
# end = nc.get_backend_metrics()["read_amount_total"]
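# A nonzero read_amount_total this soon after mount indicates that the prefetch
# workers have already issued backend reads for the hinted files.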
assert begin != 0
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(5, 10)
wg.finish_torture_read()
assert not wg.io_error
def test_build_image_param_blobid(
nydus_anchor, nydus_image: RafsImage, rafs_conf: RafsConf
):
"""
description:
Test if nydus-image argument `--blob-id` works properly
"""
# More strict id check?
nydus_image.set_backend(Backend.BACKEND_PROXY).set_param(
"blob-id", utils.random_string()
).create_image()
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs = NydusDaemon(nydus_anchor, nydus_image, rafs_conf)
rafs.mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(5, 5)
wg.finish_torture_read()
@pytest.mark.skipif(
platform.machine() != "x86_64", reason="Only supported x86 syscalls test"
)
def test_syscalls(
nydus_anchor: NydusAnchor,
nydus_scratch_image: RafsImage,
rafs_conf: RafsConf,
):
syscall_helper = "framework/test_syscalls"
ret, _ = utils.execute(
["gcc", "framework/test_syscalls.c", "-o", syscall_helper],
shell=False,
print_output=True,
)
assert ret
dist = Distributor(nydus_scratch_image.rootfs(), 2, 2)
dist.generate_tree()
dist.put_single_file(
Size(8, Unit.KB), pos=nydus_scratch_image.rootfs(), name="xattr_no_kv"
)
dist.put_single_file_with_xattr(
Size(8, Unit.KB),
("trusted.nydus.key", ""),
pos=nydus_scratch_image.rootfs(),
name="xattr_empty_value",
)
dist.put_single_file_with_xattr(
Size(8, Unit.KB),
("trusted.nydus.key", "1234567890"),
pos=nydus_scratch_image.rootfs(),
name="xattr_insufficient_buffer",
)
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs_conf.enable_xattr().set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs.mount()
for no in [58]:
ret, _ = utils.execute(
[syscall_helper, nydus_anchor.mountpoint, str(no)],
shell=False,
print_output=True,
)
assert ret
def test_blobcache_recovery(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
nydus_scratch_image: RafsImage,
tmp_path
):
rafs_conf.set_rafs_backend(Backend.BACKEND_PROXY)
rafs_conf.enable_fs_prefetch()
rafs_conf.enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
dist = Distributor(nydus_scratch_image.rootfs(), 8, 2)
dist.generate_tree()
dirs = dist.put_directories(20)
dist.put_multiple_files(100, Size(64, Unit.KB))
dist.put_symlinks(30)
dist.put_hardlinks(20)
dist.put_multiple_files(40, Size(64, Unit.KB))
dist.put_single_file(Size(3, Unit.MB), name="test")
nydus_scratch_image.set_backend(Backend.BACKEND_PROXY).create_image()
rafs = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
tmp_file = tmp_path / "prefetch.txt"
tmp_file.write_text("/")
rafs.prefetch_files(os.path.abspath(tmp_file)).mount()
wg = WorkloadGen(nydus_anchor.mountpoint, nydus_scratch_image.rootfs())
wg.setup_workload_generator()
wg.torture_read(4, 4)
# Hopefully, prefetch can be done within 5 seconds.
time.sleep(5)
wg.finish_torture_read()
rafs.umount()
rafs2 = NydusDaemon(nydus_anchor, nydus_scratch_image, rafs_conf)
rafs2.mount()
wg.torture_read(4, 4)
time.sleep(0.5)
nc = NydusAPIClient(rafs2.get_apisock())
begin = nc.get_backend_metrics()["read_amount_total"]
time.sleep(1)
end = nc.get_backend_metrics()["read_amount_total"]
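# Both samples being zero means no backend reads happened after the remount,
# i.e. the reads were served entirely from the recovered blobcache.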
assert end == begin == 0
wg.finish_torture_read()


@@ -1,460 +0,0 @@
import pytest
import posixpath
import time
import platform
import os
from nydus_anchor import NydusAnchor
from oss import OssHelper
from rafs import RafsConf, Backend, NydusDaemon
from nydusify import Nydusify
from workload_gen import WorkloadGen
import tempfile
import utils
ANCHOR = NydusAnchor()
FS_VERSION = ANCHOR.fs_version
@pytest.mark.parametrize(
"source",
[
"openjdk:latest",
"python:3.7",
"docker.io/busybox:latest",
],
)
@pytest.mark.parametrize("fs_version", [FS_VERSION])
def test_basic_conversion(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
source,
fs_version,
local_registry,
nydusify_converter,
tmp_path
):
"""
No need to locate where bootstrap is as we can directly pull it from registry
"""
converter = Nydusify(nydus_anchor)
time.sleep(1)
converter.docker_v2().enable_multiplatfrom(False).convert(
source, fs_version=fs_version
)
assert converter.locate_bootstrap() is not None
pulled_bootstrap = converter.pull_bootstrap(
tempfile.TemporaryDirectory(
dir=nydus_anchor.workspace, suffix="bootstrap"
).name,
"pulled_bootstrap",
)
# Skopeo does not support media type: "application/vnd.oci.image.layer.nydus.blob.v1",
# So we can't download the build cache like an OCI image.
layers, base = converter.extract_source_layers_names_and_download()
nydus_anchor.mount_overlayfs(layers, base)
converted_layers = converter.extract_converted_layers_names()
converted_layers.sort()
rafs_conf.set_rafs_backend(
Backend.REGISTRY, repo=posixpath.basename(source).split(":")[0]
)
rafs_conf.enable_fs_prefetch()
rafs_conf.enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, None, rafs_conf)
# Use `nydus-image inspect` to compare blob table in bootstrap and manifest
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
# No need to locate where bootstrap is as we can directly pull it from registry
tmp_file = tmp_path / "prefetch.txt"
tmp_file.write_text("/")
rafs.thread_num(6).bootstrap(pulled_bootstrap).prefetch_files(os.path.abspath(tmp_file)).mount()
assert workload_gen.verify_entire_fs()
workload_gen.setup_workload_generator()
workload_gen.torture_read(4, 6, verify=True)
workload_gen.finish_torture_read()
@pytest.mark.parametrize(
"source",
[
"python:3.7",
"docker.io/busybox:latest",
],
)
@pytest.mark.parametrize("enable_multiplatform", [False])
def test_build_cache(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
source,
enable_multiplatform,
local_registry,
nydusify_converter,
tmp_path
):
"""
No need to locate where bootstrap is as we can directly pull it from registry
"""
converter = Nydusify(nydus_anchor)
time.sleep(1)
converter.docker_v2().build_cache_ref(
"localhost:5000/build_cache:000"
).enable_multiplatfrom(enable_multiplatform).convert(source)
# No need to locate where bootstrap is as we can directly pull it from registry
bootstrap = converter.locate_bootstrap()
converter.docker_v2().build_cache_ref("localhost:5000/build_cache:000").convert(
source
)
assert converter.locate_bootstrap() is None
pulled_bootstrap = converter.pull_bootstrap(
tempfile.TemporaryDirectory(
dir=nydus_anchor.workspace, suffix="bootstrap"
).name,
"pulled_bootstrap",
)
# Skopeo does not support media type: "application/vnd.oci.image.layer.nydus.blob.v1",
# So we can't download the build cache like an OCI image.
layers, base = converter.extract_source_layers_names_and_download()
nydus_anchor.mount_overlayfs(layers, base)
converted_layers = converter.extract_converted_layers_names()
converted_layers.sort()
records = converter.get_build_cache_records("localhost:5000/build_cache:000")
assert len(records) != 0
cached_layers = [c["digest"] for c in records]
cached_layers.sort()
for l in converted_layers:
assert l in cached_layers
rafs_conf.set_rafs_backend(
Backend.REGISTRY, repo=posixpath.basename(source).split(":")[0]
)
rafs_conf.enable_fs_prefetch()
rafs_conf.enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
rafs = NydusDaemon(nydus_anchor, None, rafs_conf)
# Use `nydus-image inspect` to compare blob table in bootstrap and manifest
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
# No need to locate where bootstrap is as we can directly pull it from registry
tmp_file = tmp_path / "prefetch.txt"
tmp_file.write_text("/")
rafs.thread_num(6).bootstrap(pulled_bootstrap).prefetch_files(os.path.abspath(tmp_file)).mount()
assert workload_gen.verify_entire_fs()
workload_gen.setup_workload_generator()
workload_gen.torture_read(8, 12, verify=True)
workload_gen.finish_torture_read()
@pytest.mark.parametrize(
"source",
[
"docker.io/busybox:latest",
],
)
def test_upload_oss(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
source,
local_registry,
nydusify_converter,
tmp_path
):
"""
docker python client manual: https://docker-py.readthedocs.io/en/stable/
Use the bootstrap pulled from the registry instead of the one newly generated by nydus-image, to check whether the bootstrap was pushed successfully.
"""
converter = Nydusify(nydus_anchor)
time.sleep(1)
oss_prefix = "nydus_v2/"
converter.docker_v2().backend_type(
"oss", oss_object_prefix=oss_prefix, filed=True
).build_cache_ref("localhost:5000/build_cache:000").force_push().convert(source)
nydus_image_output = converter.nydus_image_output()
blobs_to_remove = nydus_image_output["blobs"]
# Just to observe if the conversion is faster
converter.docker_v2().backend_type(
"oss", oss_object_prefix=oss_prefix
).build_cache_ref("localhost:5000/build_cache:000").force_push().convert(source)
rafs_conf.set_rafs_backend(Backend.OSS, prefix=oss_prefix)
rafs_conf.enable_fs_prefetch()
rafs_conf.enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
bootstrap = converter.locate_bootstrap()
# `check` deletes all files
checker = Nydusify(nydus_anchor)
checker.backend_type("oss", oss_object_prefix=oss_prefix).with_new_work_dir(
nydus_anchor.nydusify_work_dir + "-check"
).check(source)
converted_layers = converter.extract_converted_layers_names()
# With the oss backend (Ant usage), `layers` only has one member
records = converter.get_build_cache_records("localhost:5000/build_cache:000")
assert len(records) != 0
cached_layers = [c["digest"] for c in records]
assert sorted(cached_layers) == sorted(converted_layers)
pulled_bootstrap = converter.pull_bootstrap(
tempfile.TemporaryDirectory(
dir=nydus_anchor.workspace, suffix="bootstrap"
).name,
"pulled_bootstrap",
)
layers, base = converter.extract_source_layers_names_and_download()
nydus_anchor.mount_overlayfs(layers, base)
rafs = NydusDaemon(nydus_anchor, None, rafs_conf)
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
tmp_file = tmp_path / "prefetch.txt"
tmp_file.write_text("/")
rafs.thread_num(6).bootstrap(pulled_bootstrap).prefetch_files(os.path.abspath(tmp_file)).mount()
assert workload_gen.verify_entire_fs()
workload_gen.setup_workload_generator()
workload_gen.torture_read(8, 12, verify=True)
workload_gen.finish_torture_read()
oss = OssHelper(
nydus_anchor.ossutil_bin,
endpoint=nydus_anchor.oss_endpoint,
bucket=nydus_anchor.oss_bucket,
ak_id=nydus_anchor.oss_ak_id,
ak_secret=nydus_anchor.oss_ak_secret,
prefix=None,
)
# Nydusify will skip uploading a blob object if it already exists.
for b in blobs_to_remove:
oss.rm(b)
@pytest.mark.parametrize(
"fs_version,source,chunk_dict_arg",
[
(
5,
"memcached:latest",
"bootstrap:registry:ghcr.io/dragonflyoss/image-service/nydus-build-cache:memcached-v5"
),
(
6,
"memcached:latest",
"bootstrap:registry:ghcr.io/dragonflyoss/image-service/nydus-build-cache:memcached-v6"
)
]
)
def test_chunk_dict(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
fs_version,
source,
chunk_dict_arg,
local_registry,
nydusify_converter,
tmp_path
):
'''
Only the oss backend is supported for now
- convert image with chunk-dict
- check new image
'''
converter = Nydusify(nydus_anchor)
time.sleep(1)
oss_prefix = "nydus_v2/"
converter.docker_v2().backend_type(
"oss", oss_object_prefix=oss_prefix
).build_cache_ref(
"localhost:5000/build_cache:000"
).force_push().chunk_dict(chunk_dict_arg).convert(source, fs_version=fs_version)
rafs_conf.set_rafs_backend(Backend.OSS, prefix=oss_prefix)
rafs_conf.enable_fs_prefetch()
rafs_conf.enable_rafs_blobcache()
rafs_conf.dump_rafs_conf()
bootstrap = converter.locate_bootstrap()
checker = Nydusify(nydus_anchor)
checker.backend_type(
"oss", oss_object_prefix=oss_prefix
).with_new_work_dir(
nydus_anchor.nydusify_work_dir+'-check'
).check(source)
converted_layers = converter.extract_converted_layers_names()
# With the oss backend (Ant usage), `layers` only has one member
records = converter.get_build_cache_records("localhost:5000/build_cache:000")
assert len(records) != 0
cached_layers = [c["digest"] for c in records]
assert sorted(cached_layers) == sorted(converted_layers)
pulled_bootstrap = converter.pull_bootstrap(
tempfile.TemporaryDirectory(
dir=nydus_anchor.workspace, suffix="bootstrap"
).name,
"pulled_bootstrap",
)
# Mount source rootfs (ociv1)
layers, base = converter.extract_source_layers_names_and_download()
nydus_anchor.mount_overlayfs(layers, base)
# Mount rafs rootfs
rafs = NydusDaemon(nydus_anchor, None, rafs_conf)
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
tmp_file = tmp_path / "prefetch.txt"
tmp_file.write_text("/")
rafs.thread_num(6).bootstrap(pulled_bootstrap).prefetch_files(os.path.abspath(tmp_file)).mount()
assert workload_gen.verify_entire_fs()
workload_gen.setup_workload_generator()
workload_gen.torture_read(8, 12, verify=True)
workload_gen.finish_torture_read()
@pytest.mark.parametrize(
"source",
[
"busybox:latest", # From DockerHub, manifest list image format, image config includes os/arch
],
)
@pytest.mark.parametrize("arch", ["arm64", "amd64"])
@pytest.mark.parametrize("enable_multiplatform", [True])
def test_cross_platform_multiplatform(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
source,
arch,
enable_multiplatform,
local_registry,
nydusify_converter,
tmp_path
):
"""
- copy the entire repo from source registry to target registry
- One image corresponds to a manifest list while the other one to a single manifest
- Use cloned source rather than the one from original registry
- Push the converted images to the original source
- Also test multiplatform here
- ? It seems that with the --multiplatform flag passed to nydusify, it still pushes just a single manifest
- The converted manifest index has one more image than the origin.
"""
# Copy the entire repo for multiplatform
skopeo = utils.Skopeo()
source_name_tagged = posixpath.basename(source)
target_image = f"localhost:5000/{source_name_tagged}"
cloned_source = f"localhost:5000/{source_name_tagged}"
skopeo.copy_all_to_registry(source, target_image)
origin_manifest_index = skopeo.manifest_list(cloned_source)
utils.Skopeo.pretty_print(origin_manifest_index)
converter = Nydusify(nydus_anchor)
converter.docker_v2().build_cache_ref("localhost:5000/build_cache:000").platform(
f"linux/{arch}"
).enable_multiplatfrom(enable_multiplatform).convert(
cloned_source, target_ref=target_image
)
# TODO: configure registry backend from `local_registry` rather than anchor
rafs_conf.set_rafs_backend(
Backend.REGISTRY, repo=posixpath.basename(source).split(":")[0]
)
rafs_conf.enable_fs_prefetch()
rafs_conf.enable_rafs_blobcache()
pulled_bootstrap = converter.pull_bootstrap(
tempfile.TemporaryDirectory(
dir=nydus_anchor.workspace, suffix="bootstrap"
).name,
"pulled_bootstrap",
arch,
)
# Skopeo does not support media type: "application/vnd.oci.image.layer.nydus.blob.v1",
# So we can't download the build cache like an OCI image.
layers, base = converter.extract_source_layers_names_and_download(arch=arch)
nydus_anchor.mount_overlayfs(layers, base)
converted_layers = converter.extract_converted_layers_names(arch=arch)
converted_layers.sort()
converted_manifest_index = skopeo.manifest_list(cloned_source)
utils.Skopeo.pretty_print(converted_manifest_index)
assert (
len(converted_manifest_index["manifests"])
- len(origin_manifest_index["manifests"])
== 1
)
# `inspect` will succeed if an image for the arch can be found.
skopeo.inspect(target_image, image_arch=arch)
converter.find_nydus_image(target_image, arch)
target_image_config = converter.pull_config(target_image, arch=arch)
assert target_image_config["architecture"] == arch
records = converter.get_build_cache_records("localhost:5000/build_cache:000")
assert len(records) != 0
cached_layers = [c["digest"] for c in records]
cached_layers.sort()
# > assert cached_layers == converted_layers
# E AssertionError: assert None == ['sha256:3f18...af3234b4c257']
# E +None
# E -['sha256:3f18b27a912188108c8590684206bd9da7d81bbfd0e8325f3ef0af3234b4c257']
for r in converted_layers:
assert r in cached_layers
# Use `nydus-image inspect` to compare blob table in bootstrap and manifest
workload_gen = WorkloadGen(nydus_anchor.mountpoint, nydus_anchor.overlayfs)
# No need to locate where bootstrap is as we can directly pull it from registry
rafs = NydusDaemon(nydus_anchor, None, rafs_conf)
tmp_file = tmp_path / "prefetch.txt"
tmp_file.write_text("/")
rafs.thread_num(6).bootstrap(pulled_bootstrap).prefetch_files(os.path.abspath(tmp_file)).mount()
assert workload_gen.verify_entire_fs()
workload_gen.setup_workload_generator()
workload_gen.torture_read(8, 12, verify=True)
workload_gen.finish_torture_read()
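# Sketch (restating the index check above as a helper, not used by the test):
# after a multiplatform conversion the target manifest list should contain
# exactly one additional (nydus) manifest compared with the original index.
def _index_grew_by_one(origin_index, converted_index):
    return len(converted_index["manifests"]) - len(origin_index["manifests"]) == 1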


@@ -1,262 +0,0 @@
import pytest
import time
from nydus_anchor import NydusAnchor
from snapshotter import Snapshotter
from containerd import Containerd
from rafs import RafsConf, Backend
from cri import Cri
from nydusify import Nydusify
import uuid
import signal
import utils
ANCHOR = NydusAnchor()
SNAPSHOTTER_IMAGE_ARRAY = ANCHOR.images_array
@pytest.mark.parametrize("image_url", SNAPSHOTTER_IMAGE_ARRAY)
def test_snapshotter(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
image_url,
nydus_snapshotter,
local_registry,
):
snapshotter = Snapshotter(nydus_anchor)
containerd = Containerd(nydus_anchor, snapshotter).gen_config()
snapshotter.set_root(containerd.root)
nydus_anchor.put_dustbin(snapshotter)
nydus_anchor.put_dustbin(containerd)
converter = Nydusify(nydus_anchor)
converter.docker_v2().convert(image_url)
rafs_conf.set_rafs_backend(Backend.REGISTRY, repo=converter.original_repo)
rafs_conf.enable_xattr()
rafs_conf.dump_rafs_conf()
snapshotter.run(rafs_conf.path())
time.sleep(1)
containerd.run()
cri = Cri(containerd.address, containerd.address)
container_name = str(uuid.uuid4())
cri.run_container(converter.converted_image, container_name)
id, status = cri.check_container_status(container_name, timeout=30)
assert id is not None
assert status
cri.stop_rm_container(id)
cri.remove_image(converter.converted_image)
containerd.remove_image_sync(converter.converted_image)
@pytest.mark.parametrize(
"converted_images",
[
(
"reg.docker.alibaba-inc.com/chge-nydus-test/python:3.8_converted",
"reg.docker.alibaba-inc.com/chge-nydus-test/python:3.5_converted",
)
],
)
def test_snapshotter_converted_images(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
converted_images,
nydus_snapshotter,
):
# snapshotter = Snapshotter(nydus_anchor).enable_nydus_overlayfs()
snapshotter = Snapshotter(nydus_anchor)
containerd = Containerd(nydus_anchor, snapshotter).gen_config()
snapshotter.set_root(containerd.root)
nydus_anchor.put_dustbin(snapshotter)
nydus_anchor.put_dustbin(containerd)
# We can safely skip the step of providing the repo in the rafs configuration file.
rafs_conf.set_rafs_backend(Backend.REGISTRY, scheme="https")
rafs_conf.enable_xattr()
rafs_conf.dump_rafs_conf()
snapshotter.run(rafs_conf.path())
time.sleep(1)
containerd.run()
cri = Cri(containerd.address, containerd.address)
id_set = []
for ref in converted_images:
container_name = str(uuid.uuid4())
cri.run_container(ref, container_name)
id, status = cri.check_container_status(container_name, timeout=30)
assert id is not None
assert status
id_set.append((id, ref))
time.sleep(2)
for id, ref in id_set:
cri.stop_rm_container(id)
cri.remove_image(ref)
containerd.remove_image_sync(ref)
# TODO: Rafs won't be unmounted and nydusd will still be alive even if the image is removed locally.
# So kill all nydusd processes here to make the following test verification pass. Is this a bug?
# Ensure nydusd has been stopped here
time.sleep(3)
@pytest.mark.skip(reason="Restart can't take over running nydusd")
@pytest.mark.parametrize(
"converted_images",
[
(
"reg.docker.alibaba-inc.com/chge-nydus-test/python:3.8_converted",
"reg.docker.alibaba-inc.com/chge-nydus-test/python:3.5_converted",
)
],
)
def test_snapshotter_restart(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
converted_images,
nydus_snapshotter,
):
snapshotter = Snapshotter(nydus_anchor)
containerd = Containerd(nydus_anchor, snapshotter).gen_config()
snapshotter.set_root(containerd.root)
nydus_anchor.put_dustbin(containerd)
# We can safely skip the step of providing the repo in the rafs configuration file.
rafs_conf.set_rafs_backend(Backend.REGISTRY, scheme="https")
rafs_conf.enable_xattr().enable_fs_prefetch().enable_rafs_blobcache(
work_dir=snapshotter.cache_dir()
)
rafs_conf.enable_xattr().dump_rafs_conf()
rafs_conf.dump_rafs_conf()
snapshotter.run(rafs_conf.path())
time.sleep(1)
containerd.run()
cri = Cri(containerd.address, containerd.address)
id_set = []
for ref in converted_images:
container_name = str(uuid.uuid4())
cri.run_container(ref, container_name)
id, status = cri.check_container_status(container_name, timeout=30)
assert id is not None
assert status
id_set.append((id, ref))
time.sleep(2)
snapshotter.shutdown()
snapshotter = Snapshotter(nydus_anchor)
snapshotter.set_root(containerd.root)
nydus_anchor.put_dustbin(snapshotter)
snapshotter.run(rafs_conf.path())
for id, ref in id_set:
cri.stop_rm_container(id)
cri.remove_image(ref)
containerd.remove_image_sync(ref)
@pytest.mark.parametrize(
"converted_images",
[
(
"reg.docker.alibaba-inc.com/chge-nydus-test/python:3.8_converted",
"reg.docker.alibaba-inc.com/chge-nydus-test/python:3.5_converted",
)
],
)
def test_snapshotter_converted_images_with_cache(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
converted_images,
nydus_snapshotter,
):
snapshotter = Snapshotter(nydus_anchor)
containerd = Containerd(nydus_anchor, snapshotter).gen_config()
snapshotter.set_root(containerd.root)
nydus_anchor.put_dustbin(snapshotter)
nydus_anchor.put_dustbin(containerd)
# We can safely skip the step of providing the repo in the rafs configuration file.
rafs_conf.set_rafs_backend(
Backend.REGISTRY, scheme="https"
).enable_fs_prefetch().enable_rafs_blobcache(work_dir=snapshotter.cache_dir())
rafs_conf.enable_xattr().dump_rafs_conf()
snapshotter.run(rafs_conf.path())
time.sleep(1)
containerd.run()
cri = Cri(containerd.address, containerd.address)
id_set = []
for ref in converted_images:
container_name = str(uuid.uuid4())
cri.run_container(ref, container_name)
id, status = cri.check_container_status(container_name, timeout=30)
assert id is not None
assert status
id_set.append((id, ref))
time.sleep(2)
for id, ref in id_set:
cri.stop_rm_container(id)
# The image is tagged multiple times, so try to remove it with both ctr and crictl
cri.remove_image(ref)
containerd.remove_image_sync(ref)
@pytest.mark.parametrize(
"converted_images",
[("ghcr.io/changweige/python:3.8_converted",)],
)
def test_snapshotter_public_converted_images(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
converted_images,
nydus_snapshotter,
):
snapshotter = Snapshotter(nydus_anchor)
containerd = Containerd(nydus_anchor, snapshotter).gen_config()
snapshotter.set_root(containerd.root)
nydus_anchor.put_dustbin(snapshotter)
nydus_anchor.put_dustbin(containerd)
# We can safely skip the step of providing the repo in the rafs configuration file.
rafs_conf.set_rafs_backend(
Backend.REGISTRY, scheme="https"
).enable_fs_prefetch().enable_rafs_blobcache(work_dir=snapshotter.cache_dir())
rafs_conf.enable_xattr().dump_rafs_conf()
snapshotter.run(rafs_conf.path())
time.sleep(1)
containerd.run()
cri = Cri(containerd.address, containerd.address)
id_set = []
for ref in converted_images:
container_name = str(uuid.uuid4())
cri.run_container(ref, container_name)
id, status = cri.check_container_status(container_name, timeout=30)
assert id is not None
assert status
id_set.append((id, ref))
time.sleep(2)
for id, ref in id_set:
cri.stop_rm_container(id)
cri.remove_image(ref)
containerd.remove_image_sync(ref)
snapshotter.shutdown()
containerd.shutdown()


@@ -1,80 +0,0 @@
import pytest
from rafs import NydusDaemon, RafsConf, RafsImage, Backend, Compressor
from nydus_anchor import NydusAnchor
from workload_gen import WorkloadGen
from distributor import Distributor
from utils import logging_setup, Size, Unit
import verifier
import random
from nydusd_client import NydusAPIClient
import time
import shutil
import utils
import uuid
logging_setup()
def test_stargz(
nydus_anchor: NydusAnchor,
rafs_conf: RafsConf,
nydus_scratch_image: RafsImage,
):
"""
Example command:
stargzify file:`pwd`/foo.tar.gz foo.stargz
"""
intermediator = "tmp.tar.gz"
stargz_image = "tmp.stargz"
dist = Distributor(nydus_scratch_image.rootfs(), 4, 4)
dist.generate_tree()
dirs = dist.put_directories(20)
dist.put_multiple_files(100, Size(64, Unit.KB))
dist.put_symlinks(30)
dist.put_multiple_files(10, Size(4, Unit.MB))
dist.put_hardlinks(20)
dist.put_single_file(Size(3, Unit.MB), name="test")
try:
shutil.rmtree("origin")
except Exception:
pass
shutil.copytree(nydus_scratch_image.rootfs(), "origin", symlinks=True)
utils.write_tar_gz(nydus_scratch_image.rootfs(), intermediator)
cmd = ["framework/bin/stargzify", f"file:{intermediator}", stargz_image]
utils.execute(cmd)
toc = utils.parse_stargz(stargz_image)
image = RafsImage(
nydus_anchor,
toc,
"bootstrap_scratched",
"blob_scratched",
clear_from_oss=True,
)
# This is a trick since the blob is usually a temp file created when RafsImage is instantiated.
# The framework will upload the stargz image to oss.
image.blob_abs_path = stargz_image
image.set_backend(Backend.OSS).set_param("blob-id", uuid.uuid4()).create_image(
from_stargz=True
)
rafs_conf.set_rafs_backend(Backend.OSS)
rafs_conf.enable_rafs_blobcache(is_compressed=True)
rafs = NydusDaemon(nydus_anchor, image, rafs_conf)
rafs.mount()
wg = WorkloadGen(nydus_anchor.mountpoint, "origin")
wg.verify_entire_fs()
wg.setup_workload_generator()
wg.torture_read(4, 4)
wg.finish_torture_read()
assert not wg.io_error
