Compare commits

...

597 Commits

Author SHA1 Message Date
Fan Shang f7d513844d Remove mirrors configuration
Signed-off-by: Fan Shang <2444576154@qq.com>
2025-08-05 10:38:09 +08:00
Baptiste Girard-Carrabin 29dc8ec5c8 [registry] Accept empty scope during token auth challenge
The distribution spec (https://distribution.github.io/distribution/spec/auth/scope/#authorization-server-use) notes that the access token provided during an auth challenge "may include a scope", meaning a scope is not required for spec compliance.
Additionally, this is already accepted by containerd, which simply logs a warning when no scope is specified: https://github.com/containerd/containerd/blob/main/core/remotes/docker/auth/fetch.go#L64
To match what containerd and the spec suggest, this commit modifies the `parse_auth` logic to accept an empty `scope` field. It also logs the same warning as containerd.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-07-31 20:28:47 +08:00
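A minimal Rust sketch of the accept-empty-scope parsing described above, assuming a pre-parsed challenge parameter map; `BearerAuth` and the `log::warn!` call are illustrative stand-ins, not the actual nydus types:

```
use std::collections::HashMap;

struct BearerAuth {
    realm: String,
    service: String,
    scope: String,
}

fn parse_auth(params: &HashMap<String, String>) -> Option<BearerAuth> {
    // `realm` and `service` are still required to request a token.
    let realm = params.get("realm")?.clone();
    let service = params.get("service")?.clone();
    // The challenge "may include a scope": treat a missing scope as
    // empty and log the same warning containerd does.
    let scope = params.get("scope").cloned().unwrap_or_else(|| {
        log::warn!("no scope specified for token auth challenge");
        String::new()
    });
    Some(BearerAuth { realm, service, scope })
}
```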
imeoer 7886e1868f storage: fix redirect in registry backend
To fix https://github.com/dragonflyoss/nydus/issues/1720

Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-31 11:49:44 +08:00
Peng Tao e1dffec213 api: increase error.rs UT coverage
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao cc62dd6890 github: add project common copilot instructions
Copilot generated with slight modification.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao d140d60bea rafs: increase UT coverage for cached_v5.rs
Copilot generated.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao f323c7f6e3 gitignore: ignore temp files generated by UTs
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao 5c8299c7f7 service: skip init fscache test if cachefiles is unavailable
Also skip the test for non-root users.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Jack Decker 14c0062cee Make filesystem sync operation fatal on failure
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
Jack Decker d3bbc3e509 Add filesystem sync in both container and host namespaces before pausing container for commit to ensure all changes are flushed to disk.
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
imeoer 80f80dda0e cargo: bump crates version
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-08 10:38:27 +08:00
Yang Kaiyong a26c7bf99c test: support miri for unit tests in actions
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-07-04 10:17:32 +08:00
imeoer 72b1955387 misc: add issue / PR stale workflow
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-06-18 10:38:00 +08:00
ymy d589292ebc feat(nydusify): after converting the image, retry the push operation if it fails
Signed-off-by: ymy <ymy@zetyun.com>
2025-06-17 17:11:38 +08:00
Zephyrcf 344a208e86 Make ssl fallback check case-insensitive
Signed-off-by: Zephyrcf <zinsist77@gmail.com>
2025-06-12 19:03:49 +08:00
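A hypothetical illustration of the case-insensitive comparison; the marker string the fallback check actually matches on is an assumption:

```
// "HTTPS", "Https" and "https" should all take the same path.
fn is_https_scheme(scheme: &str) -> bool {
    scheme.eq_ignore_ascii_case("https")
}
```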
imeoer 9645820222 docs: add MAINTAINERS doc
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-05-30 18:40:33 +08:00
Baptiste Girard-Carrabin d36295a21e [registry] Modify TokenResponse instead
Apply GitHub review comment.
Use `serde(default)` in TokenResponse to get the same behavior as Option<String> without changing the struct signature.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
Baptiste Girard-Carrabin c048fcc45f [registry] Fix auth token parsing for access_token
Extend auth token parsing to support the token in different JSON fields.
There is no real consensus on the OAuth2 token response format, so each registry can implement its own. In particular, Azure ACR uses `access_token`, as described here: https://github.com/Azure/acr/blob/main/docs/Token-BasicAuth.md#get-a-pull-access-token-for-the-user. As such, when parsing the JSON response containing the authorization token, we should attempt to deserialize either `token` or `access_token` (and potentially more fields in the future if needed).
To avoid breaking integration with existing registries, the behavior falls back to `access_token` only if `token` does not exist in the response.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
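A sketch combining the two token-parsing commits above (the `serde(default)` change and the `access_token` fallback); the struct mirrors the commit messages, not necessarily the exact nydus definition:

```
use serde::Deserialize;

#[derive(Deserialize)]
struct TokenResponse {
    // `#[serde(default)]` tolerates either field being absent while
    // keeping plain `String` fields in the struct signature.
    #[serde(default)]
    token: String,
    #[serde(default)]
    access_token: String,
}

impl TokenResponse {
    // Fall back to `access_token` (used by e.g. Azure ACR) only when
    // `token` is absent, so existing registries keep working.
    fn token(&self) -> &str {
        if self.token.is_empty() {
            &self.access_token
        } else {
            &self.token
        }
    }
}
```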
Baptiste Girard-Carrabin 67bf8b8283 [storage] Modify redirect policy to follow 10 redirects
From 2378d074fe (diff-c9f1f654cf0ba5d46a4ed25d8bb0ea22c942840c6693d31927a9fd912bcb9456R125-R131)
it seems that the redirect policy of the HTTP client has always been to not follow redirects. However, this means that pulling blobs from registries which issue redirects does not work. This is the case, for instance, on GCP's former container registries that were migrated to artifact registries.
Additionally, containerd's behavior is to follow up to 10 redirects (https://github.com/containerd/containerd/blob/main/core/remotes/docker/resolver.go#L596), so it makes sense to use the same value.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-27 18:54:04 +08:00
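A sketch of the policy change, assuming the `reqwest` crate with the blocking feature:

```
fn build_client() -> reqwest::Result<reqwest::blocking::Client> {
    reqwest::blocking::Client::builder()
        // Follow up to 10 redirects, matching containerd, instead of
        // refusing to follow any.
        .redirect(reqwest::redirect::Policy::limited(10))
        .build()
}
```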
Peng Tao d74629233b readme: add deepwiki reference
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-04-27 18:53:16 +08:00
Yang Kaiyong 21206e75b3 nydusify(refactor): handle layer with retry
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-23 11:04:54 +08:00
Yan Song c288169c1a action: add free-disk-space job
Try to fix the broken CI: https://github.com/dragonflyoss/nydus/actions/runs/14569290750/job/40863611290
It might be due to insufficient disk space.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-04-23 10:28:06 +08:00
Yang Kaiyong 23fdda1020 nydusify(feat): support for specifying a log file and concurrently processing external model manifests
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-21 15:16:57 +08:00
Yang Kaiyong 9b915529a9 nydusify(feat): add crc32 in file attributes
Read CRC32 from external models' manifest and pass it to builder.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 18:30:18 +08:00
Yang Kaiyong 96c3e5569a nydus-image: only add crc32 flag in chunk level
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
Yang Kaiyong 44069d6091 feat: support crc32 validation when validating chunks
- Add CRC32 algorithm implementation with the crc-rs crate.
- Introduce a crc_enable option to the nydus builder.
- Support generating CRC32 checksums when building images.
- Support validating CRC32 for both normal and external chunks.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
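A sketch of chunk CRC32 generation and validation with the crc-rs crate; the choice of the Castagnoli polynomial here is an assumption:

```
use crc::{Crc, CRC_32_ISCSI};

const CRC32: Crc<u32> = Crc::<u32>::new(&CRC_32_ISCSI);

// Computed at build time and stored alongside the chunk.
fn chunk_crc32(data: &[u8]) -> u32 {
    CRC32.checksum(data)
}

// Recomputed on read and compared against the stored value.
fn validate_chunk(data: &[u8], expected: u32) -> bool {
    chunk_crc32(data) == expected
}
```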
Yang Kaiyong 31c8e896f0 chore: fix cargo-deny check failure
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-16 19:39:21 +08:00
Yang Kaiyong 8593498dbd nydusify: remove nydusd code which is work in progress
- remove the unready nydusd (runtime) implementation.
- remove the debug code.
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 6161868e41 builder: support building external model images from modctl
builder: add support for building external model images from modctl in a local
context or remote registry.

feat(nydusify): add support for mount external large model images

chore: introduce GoReleaser for RPM package generation

nydusify(feat): add support for model image in check command

nydusify(test): add support for binary-based testing in external model's smoke tests

Signed-off-by: Yan Song <yansong.ys@antgroup.com>

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 871e1c6e4f chore(smoke): fix broken CI in smoke test
Run `rustup run stable cargo` instead of `cargo` to explicitly specify the toolchain.

This is because `nextest` fails due to symlink resolution with the new rustup v1.28.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-03-25 18:23:18 +08:00
Yan Song 8c0925b091 action: fix bootstrap path for fsck.erofs check
The output bootstrap path has been changed in the nydusify
check subcommand.

Related PR: https://github.com/dragonflyoss/nydus/pull/1652

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song baadb3990d misc: remove centos image from image conversion CI
The centos image has been deprecated on Docker Hub, so we can't
pull it in the "Convert & Check Images" CI pipeline.

See https://hub.docker.com/_/centos

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song bd2123f2ed smoke: add v0.1.0 nydusd into native layer cases
To check the compatibility between the newer builder and old nydusd.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song c41ac4760d builder: remove redundant blobs for merge subcommand
After merging all trees, we need to re-calculate the blob index of
referenced blobs, as the upper tree might have deleted some files
or directories via opaque entries, leaving some blobs dereferenced.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song 7daa0a3cd9 nydusify: refactor check subcommand
- allow either the source or target to be an OCI or nydus image;
- improve output directory structure and log format;

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 17:45:50 +08:00
ymy 7e5147990c feat(nydusify): support short container IDs when committing a container
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
ymy 36382b54dd Optimize: Improve code style in push lower blob section
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
yumy 8b03fd7593 fix: nydusify golang ci arg
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
ymy 76651c319a nydusify: fix the issue of blob not found when modifying image name during commit
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
Yang Kaiyong 91931607f8 fix(nydusd): fix parsing of failover-policy argument
Use `inspect_err` instead of `inspect` to correctly handle and log
errors when parsing the `failover-policy` argument.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-24 11:25:26 +08:00
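A self-contained illustration of the difference; the `FailoverPolicy` variants are assumptions:

```
use std::str::FromStr;

#[derive(Debug)]
enum FailoverPolicy {
    Flush,
    Resend,
}

impl FromStr for FailoverPolicy {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "flush" => Ok(Self::Flush),
            "resend" => Ok(Self::Resend),
            other => Err(format!("invalid failover-policy: {}", other)),
        }
    }
}

fn main() {
    // `inspect` only runs on the Ok arm, so a parse error was silently
    // dropped; `inspect_err` runs on the Err arm and logs it.
    let _ = "bogus"
        .parse::<FailoverPolicy>()
        .inspect_err(|e| eprintln!("{}", e));
}
```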
Yan Song dd9ba54e33 misc: remove goproxy.io for go build
The goproxy.io service is unstable for now and affects
the GitHub CI, so let's remove it.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yan Song 09b81c50b4 nydusify: fix layer push retry for copy subcommand
Add a push retry mechanism to enhance the success rate of image copy
when a single layer copy fails.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yang Kaiyong 3beb9a72d9 chore: bump deps to address rustsec warning
- Bump vm-memory to 1.14.1, vmm-sys-util to 0.12.1 and vhost to 0.11.0.
- Bump cargo-deny-action version from v1 to v2 in workflows.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-11 20:29:22 +08:00
Yang Kaiyong 3c10b59324 chore: comment the unused code to address clippy error
The backend-oss feature is never enabled, so comment out the test code.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong bf17d221d6 fix: Support building rafs without the dedup feature
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong ee5ef64cdd chore: pass rust version to build docker container in CI
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 05ea41d159 chore: specify the rust version to 1.84.0 and enable docker cache
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 4def4db396 chore: fix the broken CI on riscv64
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong d48d3dbdb3 chore: bump rust version to 1.84 and update deps to resolve cargo deny check failures
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Kostis Papazafeiropoulos f60e40aafa fix(blobfs): Use correct result types for `open` and `create`
Use the correct result types for `open` and `create` expected by the
`fuse_backend_rs` 0.12.0 `Filesystem` trait

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Kostis Papazafeiropoulos 83fa946897 build(rafs): Add missing `dedup` feature for `storage` crate dependency
Fix `rafs` build by adding missing `dedup` feature for `storage` crate
dependency

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Gaius 365f13edcf chore: rename repo Dragonfly2 to dragonfly
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-12-20 17:09:10 +08:00
Lin Wang e23d5bc570 fix: resolve Algorithm to_string and FromStr inconsistency (dragonflyoss#1644, #1651)
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-12-16 20:39:08 +08:00
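A sketch of the symmetry being restored, so that `Algorithm::from_str(&a.to_string())` round-trips; the variant names are illustrative:

```
use std::{fmt, str::FromStr};

#[derive(Debug, PartialEq, Clone, Copy)]
enum Algorithm {
    Blake3,
    Sha256,
}

impl fmt::Display for Algorithm {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Must emit exactly the strings FromStr accepts below.
        let s = match self {
            Algorithm::Blake3 => "blake3",
            Algorithm::Sha256 => "sha256",
        };
        write!(f, "{}", s)
    }
}

impl FromStr for Algorithm {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "blake3" => Ok(Algorithm::Blake3),
            "sha256" => Ok(Algorithm::Sha256),
            other => Err(format!("unknown algorithm: {}", other)),
        }
    }
}
```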
Liu Bo acdf021ec9 rafs: fix typo
Fix an invalid info! usage.

Signed-off-by: Liu Bo <liub.liubo@gmail.com>
2024-12-13 14:40:50 +08:00
Xing Ma b175fc4baa nydusify: introduce optimize subcommand of nydusify
We can statically analyze the image entrypoint dependency, or use runtime dynamic
analysis technologies such as ebpf, fanotify, metric, etc. to obtain the container
file access pattern, and then build this part of data into an independent image layer:

* preferentially fetch blob during the image startup phase to reduce network and disk IO.
* avoid frequent image builds, allowing for better local cache utilization.

Implement the optimize subcommand of nydusify to generate a new image, which references a new
blob containing the prefetch file chunks.
```
nydusify optimize --policy separated-prefetch-blob \
	--source $existed-nydus-image \
	--target $new-nydus-image \
	--prefetch-files /path/to/prefetch-files
```

The more detailed process is as follows:
1. nydusify first downloads the source image and bootstrap, then utilizes nydus-image to output a
new bootstrap along with an independent prefetchblob;
2. nydusify generates and pushes a new meta layer including the new bootstrap and the prefetch-files,
and also generates and pushes the new manifest/config/prefetchblob, completing the incremental image build.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
Xing Ma 8edc031a31 builder: Enhance optimize subcommand for prefetch
Major changes:
1. Added compatibility for rafs v5/v6 formats;
2. Set IS_SEPARATED_WITH_PREFETCH_FILES flag in BlobInfo for prefetchblob;
3. Add option output-json to store build output.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
pyq bb4744c7fb docs: fix docker-env-setup.md
Signed-off-by: pyq <eilo.pengyq@gmail.com>
2024-12-04 10:10:26 +08:00
Dai Yongxuan 375f55f32e builder: introduce optimize subcommand for prefetch
We can statically analyze the image entrypoint dependency, or use runtime dynamic
analysis technologies such as ebpf, fanotify, metric, etc. to obtain the container
file access pattern, and then build this part of data into an independent image layer:

* preferentially fetch blob during the image startup phase to reduce network and disk IO.
* avoid frequent image builds, allowing for better local cache utilization.

Implement the optimize subcommand to optimize the image bootstrap
from a prefetch file list, generating a new blob.

```
nydus-image optimize --prefetch-files /path/to/prefetch-files.txt \
  --bootstrap /path/to/bootstrap \
  --blob-dir /path/to/blobs
```
This will generate a new bootstrap and new blob in `blob-dir`.

Signed-off-by: daiyongxuan <daiyongxuan20@mails.ucas.ac.cn>
2024-10-29 14:52:17 +08:00
abushwang a575439471 fix: correct some typos about nerdctl image rm
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 16:11:22 +08:00
abushwang 4ee6ddd931 fix: correct some typos in nydus-fscache.md
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 15:05:32 +08:00
Yadong Ding 57c112a998 smoke: add smoke test for cas and chunk dedup
Add smoke test cases for cas and chunk dedup.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu b9ba409f13 docs: add documentation for cas
Add documentation for cas.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 2387fe8217 storage: enable chunk deduplication for file cache
Enable chunk deduplication for file cache. It works in this way:
- When a chunk is not in the blob cache file yet, query the CAS database
  for whether other blob data files have the required chunk. If there's a
  duplicated data chunk in another data file, copy the chunk data
  into the current blob cache file by using copy_file_range().
- After downloading a data chunk from remote, save the file/offset/chunk-id
  into the CAS database, so it can be reused later.

Co-authored-by: Jiang Liu <gerry@linux.alibaba.com>
Co-authored-by: Yading Ding <ding_yadong@foxmail.com>
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
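A minimal stand-in for the lookup/record flow described above: the real CasMgr keys on the chunk digest and persists to SQLite, while a HashMap here just illustrates the decision logic:

```
use std::collections::HashMap;

struct CasDb {
    // chunk-id -> (blob cache file path, offset)
    chunks: HashMap<String, (String, u64)>,
}

impl CasDb {
    // If another blob data file already holds the chunk, return its
    // location so the caller can copy_file_range() it into the
    // current blob cache file.
    fn lookup(&self, chunk_id: &str) -> Option<&(String, u64)> {
        self.chunks.get(chunk_id)
    }

    // After downloading a chunk from remote, remember file/offset so
    // later reads of the same chunk can be deduplicated.
    fn record(&mut self, chunk_id: &str, file: String, offset: u64) {
        self.chunks.entry(chunk_id.to_string()).or_insert((file, offset));
    }
}
```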
Yadong Ding 4b1fd55e6e storage: add garbage collection in CasMgr
- Changed `delete_blobs` method in `CasDb` to take an immutable reference (`&self`) instead of a mutable reference (`&mut self`).
- Updated `dedup_chunk` method in `CasMgr` to correctly handle the deletion of non-existent blob files from both the file descriptor cache and the database.
- Implemented the `gc` (garbage collection) method in `CasMgr` to identify and remove blobs that no longer exist on the filesystem, ensuring the database and cache remain consistent.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 45e07eab3d storage: implement CasManager to support chunk dedup at runtime
Implement CasManager to support chunk dedup at runtime.
The manager provides two major interfaces:
- add chunk data to the CAS database
- check whether a chunk exists in CAS database and copy it to blob file
  by copy_file_range() if the chunk exists.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 51a6045d74 storage: improve copy_file_range
- improve copy_file_range when the target OS is not Linux
- add more comprehensive tests

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 7d1c2e635a storage: add helper copy_file_range
Add helper copy_file_range(), which:
- avoids copying data into userspace
- may support reflink on XFS, etc.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
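A sketch of such a helper on Linux, assuming the libc crate; the real nydus helper also provides a userspace read/write fallback for other targets:

```
use std::fs::File;
use std::io;
use std::os::unix::io::AsRawFd;

fn copy_file_range(
    src: &File,
    src_off: u64,
    dst: &File,
    dst_off: u64,
    len: usize,
) -> io::Result<usize> {
    let mut s_off = src_off as i64;
    let mut d_off = dst_off as i64;
    // The copy happens inside the kernel (and may be reflinked on
    // XFS), never passing through a userspace buffer.
    let n = unsafe {
        libc::copy_file_range(
            src.as_raw_fd(),
            &mut s_off,
            dst.as_raw_fd(),
            &mut d_off,
            len,
            0,
        )
    };
    if n < 0 {
        Err(io::Error::last_os_error())
    } else {
        Ok(n as usize)
    }
}
```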
Mike Hotan 15ec192e3d Nydusify `localfs` support
Signed-off-by: Mike Hotan <mike@union.ai>
2024-10-17 09:42:59 +08:00
Yadong Ding da2510b6f5 action: bump macos-13
The macOS 12 Actions runner image will begin deprecation on 10/7/24.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 18:35:50 +08:00
Yadong Ding 47025395fa lint: bump golangci-lint v1.61.0 and fix lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yadong Ding 678b44ba32 rust: upgrade to 1.75.0
1. reduce the binary size.
2. use more rust-clippy lints.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yifan Zhao 7c498497fb nydusify: modify compact interface
This patch modifies the compact interface to meet the change in
nydus-image.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao 1ccc603525 nydus-image: modify compact interface
This commit uses the compact parameter directly instead of a compact config
file in the CLI interface. It also fixes a bug where the chunk key for
ChunkWrapper::Ref is not generated correctly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao a4683baa1e rafs: fix bug in InodeWrapper::is_sock()
We incorrectly used is_dir() to check whether a file is a socket. This patch
fixes it.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-09-27 12:35:14 +08:00
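An illustrative version of the corrected predicate, checking the file-type bits of st_mode instead of reusing the directory check; the real InodeWrapper internals differ:

```
fn is_sock(mode: u32) -> bool {
    mode & libc::S_IFMT == libc::S_IFSOCK
}
```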
Yadong Ding 9f439ab404 bats: use nerdctl to replace ctr-remote
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:23:19 +08:00
Yadong Ding 0c0ba2adec chore: remove contrib/ctr-remote
Nerdctl is more useful than `ctr-remote`, so deprecate the latter.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:23:19 +08:00
Yadong Ding c5ef5c97a4 chore: keep smoke test component latest version
- Use the latest `nerdctl`, `nydus-snapshotter`, and `cni` in the smoke test env.
- Delete `misc/takeover/snapshotter_config.toml`, and use the modified `misc/performance/snapshotter_config.toml` when testing.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:11:08 +08:00
Yadong Ding 37a7b96412 nydusctl: fix build version info
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-20 17:32:55 +08:00
Yadong Ding 742954eb2c tests: change asserts of test_worker_mgr_rate_limiter
assert_eq!(mgr.prefetch_inflight.load(Ordering::Acquire), 3); and assert!(mgr.prefetch_inflight.load(Ordering::Acquire) <= 2); sometimes failed.
The reason is that the worker threads may have already started processing the requests and decreased the counter before the main thread checks it.

- change assert_eq!(mgr.prefetch_inflight.load(Ordering::Acquire), 3); to assert!(mgr.prefetch_inflight.load(Ordering::Acquire) <= 3);
- change thread::sleep(Duration::from_secs(1)); to thread::sleep(Duration::from_secs(2));

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 20:30:27 +08:00
Yadong Ding 849591afa9 feat: add retry mechanism in read blob metadata
When reading the blob size from blob metadata, we should retry reading from the remote if an error occurs.
Also set the max retry count to 3.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 15:12:04 +08:00
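A sketch of the bounded retry described above; the closure-based shape is an assumption:

```
const MAX_RETRIES: u32 = 3;

fn read_blob_size_with_retry<F>(mut read_once: F) -> std::io::Result<u64>
where
    F: FnMut() -> std::io::Result<u64>,
{
    let mut last_err = None;
    for _ in 0..MAX_RETRIES {
        match read_once() {
            Ok(size) => return Ok(size),
            // Remote read failed: remember the error and retry.
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.unwrap())
}
```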
Yadong Ding e8a4305773 chore: bump go lint action v6 and version 1.61.0
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 15:04:16 +08:00
Yadong Ding 7fc9edeec5 chore: change nydus snapshotter work dir
- use /var/lib/containerd/io.containerd.snapshotter.v1.nydus
- bump nydus-snapshotter v1.14.0

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-18 11:13:22 +08:00
Yadong Ding f4fb04a50f lint: remove unused fieldsPath
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-18 09:18:12 +08:00
dependabot[bot] 481a63b885 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.5+incompatible to 25.0.6+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.5...v25.0.6)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-16 20:23:59 +08:00
BruceAko 9b4c272d78 fix: add tests for nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko 30d53c3f25 fix: add a doc about nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko 309feab765 fix: add getLocalPath() and close decompressor
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko a1ceb176f4 feat: support local tarball for nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
Jiancong Zhu 6106fbc539 refactor: fix the unnecessary mutex lock operation
Signed-off-by: Jiancong Zhu <Chasing1020@gmail.com>
2024-09-12 18:26:26 +08:00
Yifan Zhao d89410f3fc nydus-image: refactor unpack/compact cli interface
Since the unpack and compact subcommands do not need the entire nydusd
configuration file, let's refactor their cli interface to directly
take a backend configuration file.

Specifically, we introduce `--backend-type`, `--backend-config` and
`--backend-config-file` options to specify the backend type and remove
`--config` option.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>

Fixes: #1602
2024-09-10 14:33:51 +08:00
Yifan Zhao 36fe98b3ac smoke: fix invalid cleanup issue in main_test.go
The cleanup of the new registry is invalid, as TestMain() calls os.Exit()
and will not run deferred functions. This patch fixes the issue by
doing the cleanup explicitly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-09-10 14:33:51 +08:00
fappy1234567 114ec880a2 smoke: add mount api test case
Signed-off-by: fappy1234567 <2019gexinlei@bupt.edu.cn>
2024-08-30 15:36:59 +08:00
Yan Song 3eb5c7b5ef nydusify: small improvements for mount & check subcommands
- Add `--prefetch` option for enabling full image data prefetch.
- Support `HTTP_PROXY` / `HTTPS_PROXY` env for enabling proxy for nydusd.
- Change nydusd log level to `warn` for mount & check subcommands.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-08-28 11:07:26 +08:00
Yadong Ding 52ed07b4cf deny: ignore RUSTSEC-2024-0357
openssl 0.10.55 can't build on riscv64 and ppc64le.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-08-08 14:42:44 +08:00
Yan Song a6bd8ccb8d smoke: add nydusd hot upgrade test case
The test case in hot_upgrade_test.go is different from takeover_test.go;
it does not depend on the snapshotter component.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-07-22 09:22:47 +08:00
Yan Song 642571236d smoke: refactor nydusd methods for testing
Rename and add some methods on the nydusd struct, for easier control of the
nydusd process.

And support SKIP_CASES env to allow skipping some cases.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-07-22 09:22:47 +08:00
Yadong Ding 32b6ead5ec action: fix upload-coverage-to-codecov with secret
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-07-15 09:32:19 +08:00
Yadong Ding c92fe6512f action: upgrade macos to 12
macos-11 has been deprecated since 2024-06-28.
https://docs.github.com/actions/using-jobs/choosing-the-runner-for-a-job

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-07-15 09:32:19 +08:00
BruceAko 3684474254 fix: rename mirrors' check_pause_elapsed to health_check_pause_elapsed
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-11 09:13:11 +08:00
BruceAko cd24506d43 feat: skip health check if connection is not active
1. Add a last_active field for Connection. When Connection.call() is called, last_active is updated to the current timestamp.
2. Add a check_pause_elapsed field for ProxyConfig and MirrorConfig. A Connection is considered inactive if the time since last_active exceeds check_pause_elapsed.
3. In the proxy's and mirror's health-checking thread loop, if the connection is not active (exceeds check_pause_elapsed), that round of health check is skipped.
4. Update the document.

Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-11 09:13:11 +08:00
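A sketch of the inactivity gate; the field names follow the commit message, while the actual Connection struct and config plumbing differ:

```
use std::time::{Duration, Instant};

struct Connection {
    // Updated to the current time on every Connection.call().
    last_active: Instant,
}

impl Connection {
    // Called each round of the health-checking loop: skip the check
    // when the connection has been idle longer than the threshold.
    fn should_health_check(&self, check_pause_elapsed: Duration) -> bool {
        self.last_active.elapsed() <= check_pause_elapsed
    }
}
```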
YuQiang 19b09ed12f fix: add namespace flag for nydusify commit.
Signed-off-by: YuQiang <yu_qiang@mail.nwpu.edu.cn>
2024-07-09 18:15:25 +08:00
BruceAko da5d423b8c fix: correct some typos in Nydusify
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-09 18:14:16 +08:00
Lin Wang 455c856aa8 nydus-image: add documentation for chunk-level deduplication
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
Lin Wang 5dec7536fa nydusify: add chunkdict generate command and corresponding tests
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
Lin Wang 087c0b1baf nydus-image: Add support for chunkdict generation
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
泰友 332f3dd456 fix: compatibility with images without ext table for blob cache
There are scenarios where the cache file is smaller than the expected size. Such as:

    1. Nydusd 1.6 generates the cache file by prefetch, which is smaller than the size in boot.
    2. Nydusd 2.2 generates the cache file by prefetch, when the image does not provide ext blob tables.
    3. Nydusd does not have enough time to fill the cache for a blob.

    An equality check on size is too strict for both 1.6
    compatibility and 2.2 concurrency. This PR ensures the blob size is smaller
    than or equal to the expected size. It also truncates the blob cache when it
    is smaller than the expected size.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
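One plausible reading of the relaxed check, assuming the cache file is resized up to the expected size (set_len pads with zeroes) when it is smaller:

```
use std::fs::File;
use std::io;

fn validate_cache_size(cache: &File, actual: u64, expect: u64) -> io::Result<()> {
    if actual > expect {
        // Larger than expected is still an error.
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "cache file larger than expected",
        ));
    }
    if actual < expect {
        // Smaller is tolerated (1.6 prefetch, missing ext blob table,
        // partially filled cache): resize to the expected size.
        cache.set_len(expect)?;
    }
    Ok(())
}
```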
泰友 7cf2d4a2d7 fix: bad read by wrong data region
User I/O may involve discontinuous segments in different chunks. A bad
    read is produced by merging them into a continuous one, which is what
    Region does. This PR separates discontinuous segments into different
    regions, avoiding forced merging.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
泰友 64dddd2d2b fix: residual fuse mountpoint after graceful shutdown
1. Case 1: the fuse server exits in a thread other than main. There is a
       possibility that the process finishes before the server shuts down.
    2. Case 2: the fuse server exits in the state machine thread. There is a
       possibility that the state machine does not respond to the signal-catch
       thread. Then a deadlock happens, and the process exits before the server
       shuts down.

    This PR aims to separate shutdown actions from the signal-catch
    handler. It only notifies the controller. The controller exits after
    shutting down the fuse server. No race. No deadlock.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
Yan Song de7cfc4088 nydusify: upgrade acceleration-service v0.2.14
To bring the fixup: https://github.com/goharbor/acceleration-service/pull/290

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-06-06 10:18:45 +08:00
Yadong Ding 79a7015496 chore: upgrade components version in test env
1. Upgrade cni to v1.5.0 and try to fix the error in TestCommit.
2. Upgrade nerdctl to v1.7.6.
3. Upgrade nydus-snapshotter to v0.13.13 and fix a path error.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-06 09:56:26 +08:00
BruceAko 3b9b0d4588 fix: correct some typos and grammatical problems
Signed-off-by: chongzhi <chongzhi@hust.edu.cn>
2024-06-06 09:55:11 +08:00
Yadong Ding 7ea510b237 docs: fix incorrect file path
https://github.com/containerd/nydus-snapshotter/blob/main/misc/snapshotter/config.toml#L27
In the snapshotter config, the nydusd config file path is /etc/nydus/nydusd-config.fusedev.json.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-06 09:50:40 +08:00
dependabot[bot] 34ab06b6b3 build(deps): bump golang.org/x/net in /contrib/ctr-remote
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 16:32:26 +08:00
dependabot[bot] 9483286863 build(deps): bump golang.org/x/net in /contrib/nydusify
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 15:56:24 +08:00
Yadong Ding 13a9aa625b fix: downgrade to codecov/codecov-action@v4.0.0
codecov/codecov-action@v4 is unstable.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-04 15:59:46 +08:00
Yadong Ding 305a418b31 fix: upload-coverage failed in master
When the action doesn't run on a pull request, Codecov GitHub Action v4 needs a token.
Reference:
1. https://github.com/codecov/codecov-action?tab=readme-ov-file#breaking-changes
2. https://docs.codecov.com/docs/codecov-uploader#supporting-token-less-uploads-for-forks-of-open-source-repos-using-codecov

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-04 15:18:48 +08:00
Qinqi Qu 4a16402120 action: bump codecov-action to v4
To solve the problem of CI failure.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-17 16:39:48 +08:00
Qinqi Qu 1d1691692c deps: update indexmap from v1 to v2
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu d1dfe7bd65 backend-proxy: refactor to support latest versions of crates
Also fix some security alerts of Dependabot:
1. https://github.com/advisories/GHSA-q6cp-qfwq-4gcv
2. https://github.com/advisories/GHSA-8r5v-vm4m-4g25
3. https://github.com/advisories/GHSA-g98v-hv3f-hcfr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu 3b2a0c0bcc deps: remove dependency on atty
The atty crate is not maintained, so flexi_logger and clap are updated
to remove the dependency on atty.

Fix: https://github.com/advisories/GHSA-g98v-hv3f-hcfr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu 9826b2cc3f bats test: add a backup image to avoid network errors
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-09 17:32:28 +08:00
dependabot[bot] 260a044c6e build(deps): bump h2 from 0.3.24 to 0.3.26
Bumps [h2](https://github.com/hyperium/h2) from 0.3.24 to 0.3.26.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.26/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.24...v0.3.26)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 15:27:13 +08:00
dependabot[bot] e926d2ff9c build(deps): bump google.golang.org/protobuf in /contrib/nydusify
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-31 11:36:18 +08:00
dependabot[bot] fc52ebc7a1 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.3+incompatible to 25.0.5+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.3...v25.0.5)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-29 17:05:58 +08:00
YuQiang af914dd1a5 fix: modify benchmark prepare bash path
1. correct the performance test prepare bash file path

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-26 10:02:52 +08:00
Adolfo Ochagavía 2308efa6f7 Add compression method support to zran docs
Signed-off-by: Adolfo Ochagavía <github@adolfo.ochagavia.nl>
2024-03-25 17:38:44 +08:00
Wei Zhang 9ae8e3a7b5 overlay: add overlay implementation
With the help of the newly introduced overlay filesystem in the `fuse-backend-rs`
library, we can now create a writable rootfs in Nydus. The implementation of
the writable rootfs is based on a passthrough FS (as the upper layer) over a
readonly rafs (as the lower layer).

To do so, the configuration is extended with some overlay options.

Signed-off-by: Wei Zhang <weizhang555.zw@gmail.com>
2024-03-15 14:15:54 +08:00
YuQiang 3dfa9e9776 docs: add doc for nydus-image check command
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 11:10:46 +08:00
YuQiang f10782c79d docs: add doc for nydusify commit command
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:33:02 +08:00
YuQiang ae842f9b8b action: merge and move prepare.sh
remove misc/performance/prepare.sh and merge it into misc/prepare.sh

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang 26b1d7db5a feat: add smoke test for nydusify commit
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang c14790cb21 feat: add nydusify commit command
add nydusify commit command to commit a nydus container into a nydus image

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang 19daa7df6f feat: port write overlay upperdir capability
Port the capability to get and write the diff between overlayfs upper and lower layers.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
dependabot[bot] a0ec880182 build(deps): bump github.com/go-jose/go-jose/v3 in /contrib/nydusify
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/v3.0.3/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.1...v3.0.3)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-08 19:01:13 +08:00
dependabot[bot] c57e7c038c build(deps): bump mio in /contrib/nydus-backend-proxy
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.5 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.5...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-08 19:00:57 +08:00
dependabot[bot] eba6afe5b8 build(deps): bump mio from 0.8.10 to 0.8.11
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.10 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.10...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-07 14:46:07 +08:00
YuQiang aaab560aa9 feat: add fs_version and compressor output to nydus-image check
1. Add rafs_version value, output like 5 or 6.
2. Add compressor algorithm value, like zstd.
Add rafs_version and compressor JSON output to nydus-image check, so that more info can be obtained when necessary.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-02-29 14:15:39 +08:00
Yadong Ding 7b3cc503a2 action: add contrib-lint in smoke test
1. Use the official GitHub action for golangci-lint from its authors.
2. Fix golang lint errors with v1.56.
3. Separate test and golang lint: sometimes we need tests without golang lint, and sometimes we just want to run golang lint.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-02-21 11:44:33 +08:00
dependabot[bot] 5fb809605d build(deps): bump github.com/opencontainers/runc in /contrib/ctr-remote
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-20 13:11:38 +08:00
Yan Song abaf9caa16 docs: update outdated dingtalk QR code
And remove the outdated technical meeting schedule.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-02-20 10:17:19 +08:00
dependabot[bot] d7ea50e621 build(deps): bump github.com/opencontainers/runc in /contrib/nydusify
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-18 17:11:09 +08:00
Yadong Ding d12634f998 action: bump nodejs20 github action
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-02-06 09:36:54 +08:00
loheagn 9a1c47bd00 docs: add doc for nydusd failover and hot upgrade
Signed-off-by: loheagn <loheagn@icloud.com>
2024-01-23 20:01:48 +08:00
Yadong Ding 3f47f1ec6d fix: upload-artifact v4 breaking changes
upload-artifact v4 can't upload artifacts with the same name

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-19 11:01:50 +08:00
Yadong Ding 5f26f8ee1c fix: upgrade h2 to 0.3.24 to fix RUSTSEC-2024-0003
ID: RUSTSEC-2024-0003
Advisory: https://rustsec.org/advisories/RUSTSEC-2024-0003
An attacker with an HTTP/2 connection to an affected endpoint can send a steady stream of invalid frames to force the
generation of reset frames on the victim endpoint.
By closing their recv window, the attacker could then force these resets to be queued in an unbounded fashion,
resulting in Out Of Memory (OOM) and high CPU usage.

This fix is corrected in [hyperium/h2#737](https://github.com/hyperium/h2/pull/737), which limits the total number of
internal error resets emitted by default before the connection is closed.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-18 16:58:44 +08:00
Yadong Ding eae9ed7e45 fix: upload-artifact@v4 breaks in release
Error:
Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-18 16:58:44 +08:00
Yadong Ding a3922b8e0d action: bump upload-artifact/download-artifact v4
Since https://github.com/actions/download-artifact/issues/249 are fixed,
we can use the v4 version.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-17 10:04:49 +08:00
Wenhao Ren 9dae4eccee storage: fix the tiny prefetch request for batch chunks
By passing the chunk continuity check and correctly sorting batch chunks,
prefetch requests will no longer be interrupted by batch chunks.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren d7190d9fee action: add convert test for batch chunk
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 8bb53a873a storage: add validation and unit test for batch chunks
1. Add the validation for batch chunks.
2. Add unit test for `BatchInflateContext`.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 7f799ec8bb storage: introduce `BlobCCI` for reading batch chunk info
`BlobCompressionContextInfo` is needed to read batch chunk info.
`BlobCCI` is introduced to simplify the code,
and to decrease the number of times this context is fetched, by lazy loading.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren c557f99d08 storage: fix the read amplification for batch chunks.
Read amplification for batch chunks is not correctly implemented and may crash.
The read amplification logic is rewritten to fix this bug.
A unit test for read amplification is also added to cover this code.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 676acd0a6f storage: fix the Error type to log the error correctly
Currently, many errors are output as `os error 22`, losing the customized log info.
So we change the Error type to correctly output and log the error info
as expected.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
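An illustrative contrast, assuming the libc crate: a raw errno loses the message, while an `io::Error` built from a custom payload keeps it in logs:

```
use std::io;

fn einval_raw() -> io::Error {
    // Displays as plain "os error 22".
    io::Error::from_raw_os_error(libc::EINVAL)
}

fn einval_with_context(msg: &str) -> io::Error {
    // The custom message survives into the log output.
    io::Error::new(
        io::ErrorKind::InvalidInput,
        format!("invalid region: {}", msg),
    )
}
```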
Wenhao Ren fa72c98ffc rafs: add `is_batch()` for `BlobChunkInfo`
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren b4fe28aad6 rafs: move `compressed_offset` from `BatchInflateContext` to chunk info for batch chunks.
1. `compressed_offset` is used for build-time and runtime sorting of chunk info,
so we move `compressed_offset` from `BatchInflateContext` to the chunk info for batch chunks.

2. The `compressed_size` for blobs in batch mode is not correctly set.
We thus fix it by setting it to the value of `dumped_size`.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
dependabot[bot] 596492b932 build(deps): bump github.com/go-jose/go-jose/v3 in /contrib/nydusify
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.0 to 3.0.1.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.0...v3.0.1)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-04 18:52:59 +08:00
Yadong Ding 2743f163b9 deps: update to the latest versions and sync
Bump containerd v1.7.11 and golang.org/x/crypto v0.17.0.
Resolve GHSA-45x7-px36-x8w8 and GHSA-7ww5-4wqc-m92c.
Update dependencies to the latest versions and sync them across multiple modules.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-04 14:11:36 +08:00
loheagn 04b4552e03 tests: add smoke test for hot upgrade
Signed-off-by: loheagn <loheagn@icloud.com>
2024-01-04 14:10:31 +08:00
Qinqi Qu 5ecda8c057 bats test: upgrade golang version to 1.21.5
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-01-03 11:54:30 +08:00
Qinqi Qu 8e1799e5df bats test: change rust docker image to Debian 11 bullseye version
The rust:1.72.1 image is based on Debian 12 bookworm and requires an
excessively high version of glibc, which makes it impossible to find a
matching glibc to run the compiled nydus program on some old
operating systems.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-01-03 11:54:30 +08:00
Yadong Ding f08587928b rust: bump 1.72.1 and fix errors
https://rust-lang.github.io/rust-clippy/master/index.html#non_minimal_cfg
https://rust-lang.github.io/rust-clippy/master/index.html#unwrap_or_default
https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrows_for_generic_args
https://rust-lang.github.io/rust-clippy/master/index.html#reserve_after_initialization
https://rust-lang.github.io/rust-clippy/master/index.html#arc_with_non_send_sync
https://rust-lang.github.io/rust-clippy/master/index.html#useless_vec

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-29 08:58:02 +08:00
Xin Yin cf76edbc52 dep: upgrade tokio to 1.35.1
Fix panic after all prefetch workers exit in fscache mode.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-27 20:36:23 +08:00
loheagn 7f27b7ae78 tests: add smoke test for nydusd failover
Signed-off-by: loheagn <loheagn@icloud.com>
2023-12-25 16:35:14 +08:00
Yadong Ding 17c373fc29 nydusify: fix error in go vet
`sudo` in the action will change the Go env, so remove sudo.
As the runner user, we can create files in unpacktargz-test instead of temp/unpacktargz-test,
so don't use os.CreateTemp in archive_test.go.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding d5242901f9 action: delete useless env
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding 39daa97bac nydusify: fix unit test failure in utils
utils_test.go:248:
                Error Trace:    /root/nydus/contrib/nydusify/pkg/utils/utils_test.go:248
                Error:          Should be true
                Test:           TestRetryWithHTTP

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding 2cd8ba25bd nydusify: add unit test for nydusify
We removed the e2e test files in nydusify, so we need to add unit tests
to improve test coverage.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 3164f19ab7 makefile: remove build in test
Use `make test` to run unit tests; it doesn't need a build.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 6675da3186 action: use upload-artifact/download-artifact v3
master branch is unstable, change to v3.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 7772082411 action: use sudo in contrib-unit-test-coverage
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 65046b0533 refactor: use ErrSchemeMismatch and ECONNREFUSED
ref: https://github.com/golang/go/issues/44855

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:03:20 +08:00
Yadong Ding b5e88a4f4e chore: upgrade go version to 1.21
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:03:20 +08:00
Yadong Ding 18ba2eda63 action: fix failed to compile `cross v0.2.4`
error: failed to compile `cross v0.2.4`, intermediate artifacts can be found at `/tmp/cargo-installG1Scm4`

Caused by:
  package `home v0.5.9` cannot be built because it requires rustc 1.70.0 or newer, while the currently active rustc version is 1.68.2
  Try re-running cargo install with `--locked`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding ab06841c39 revert build(deps): bump openssl from 0.10.55 to 0.10.60
Revert https://github.com/dragonflyoss/nydus/pull/1513.
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding e9d63f5d3b chore: upgrade dbs-snapshot to 1.5.1
v1.5.1 brings support of ppc64le and riscv64.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding 1a1e8fdb98 action: test build with more architectures
Test build with more architectures, but only use `amd64` in next jobs.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding a4ec9b8061 tests: add go module unit coverage to Codecov
resolve dragonflyoss#1518.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 09:48:48 +08:00
Yadong Ding 54a3395434 action: add contrib-test and build
Use the contrib-test job to test the golang modules in contrib.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 09:48:48 +08:00
Yadong Ding 0458817278 chore: modify repo to dragonflyoss/nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-18 17:41:00 +08:00
Yadong Ding 763786f316 chore: change go module name to nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-18 17:41:00 +08:00
dependabot[bot] d6da88a8f1 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 23.0.3+incompatible to 24.0.7+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v23.0.3...v24.0.7)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-18 13:38:23 +08:00
Yadong Ding 06755fe74b tests: remove useless test files
Since https://github.com/dragonflyoss/nydus/pull/983 introduced the new smoke test, we can remove the
old smoke test files, including nydusify and nydus.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:14:05 +08:00
Yadong Ding 2bca6f216a smoke: use golangci-lint to improve code quality
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding 0e81f2605d nydusify: fix errors found by golangci-lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding f98b6e8332 action: upgrade golangci-lint to v1.54.2
We have some golang lint errors in nydusify.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding 1d289e25f9 rust: update to edition2021
Since we are using cargo 1.68.2, we don't need to require edition 2018 any more.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:10:50 +08:00
Yadong Ding 194641a624 chore: remove go test cover
In the golang smoke test, go test doesn't need coverage analysis or creating a coverage profile.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-13 15:54:42 +08:00
Yiqun Leng 45331d5e18 bats test: move the logic of generating dockerfile into common lib
Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-12-13 15:25:15 +08:00
dependabot[bot] 55a999b9e6 build(deps): bump openssl from 0.10.55 to 0.10.60
Bumps [openssl](https://github.com/sfackler/rust-openssl) from 0.10.55 to 0.10.60.
- [Release notes](https://github.com/sfackler/rust-openssl/releases)
- [Commits](https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.55...openssl-v0.10.60)

---
updated-dependencies:
- dependency-name: openssl
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-13 13:09:44 +08:00
Yan Song 87e3db7186 nydusify: upgrade containerd package
To import some fixups from https://github.com/containerd/containerd/pull/9405.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-13 09:57:20 +08:00
Qinqi Qu a84400d165 misc: update rust-toolchain file to TOML format
1. Move rust-toolchain to rust-toolchain.toml
2. Update the parsing process of rust-toolchain in the test script.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-12-12 20:27:12 +08:00
Yadong Ding d793aee881 action: delete clean-cache
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-11 09:47:54 +08:00
Yadong Ding a3e60c0801 action: benchmark add conversion_elapsed
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-08 09:33:03 +08:00
Yadong Ding 794f7f7293 smoke: add image conversion time in benchmark
ConversionElapsed reflects the performance of accelerated image conversion.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-08 09:33:03 +08:00
Xin Yin e12416ef09 upgrade: change to use dbs_snapshot crate
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Xin Yin 7b25d8a059 service: add unit test for upgrade manager
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Xin Yin e0ad430486 feat: support takeover for fscache
Refine the UpgradeManager so it can also store status for the
fscache daemon, and make the takeover feature apply to both fuse and
fscache modes.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Nan Li 16f5ac3d14 feat: implement `takeover` for nydusd fusedev daemon
This patch implements the `save` and `restore` functions in the `fusedev_upgrade` module in the service crate.
To do this,
- This patch adds a new crate named `nydus-upgrade` to the workspace. The `nydus-upgrade` crate has some util functions that help with serialization and deserialization of Rust structs using the versionize and snapshot crates. The crate also has a trait named `StorageBackend`, which can be used to store and restore fuse session fds and state data for the upgrade action; there is also an implementation named `UdsStorageBackend` which uses a unix domain socket to do this.
- As we have to use the same fuse session connection, backend filesystem mount commands, and Vfs to re-mount the rafs for the new daemon (created for "hot upgrade" or failover), this patch adds a new struct named `FusedevState` to hold this information. The `FusedevState` is serialized and stored into the `UdsStorageBackend` (in the `save` function of the `fusedev_upgrade` module) before the new daemon is created, and is deserialized and restored from the `UdsStorageBackend` (in the `restore` function of the `fusedev_upgrade` module) when the new daemon is triggered by `takeover`.

Signed-off-by: Nan Li <loheagn@icloud.com>
Signed-off-by: linan.loheagn3 <linan.loheagn3@bytedance.com>
2023-12-07 20:10:13 +08:00
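A rough shape of the `StorageBackend` trait described above; the exact signatures in the nydus-upgrade crate differ:

```
use std::os::unix::io::RawFd;

trait StorageBackend {
    // Persist the fuse session fds plus the serialized daemon state
    // (e.g. a FusedevState) before the new daemon is created.
    fn save(&mut self, fds: &[RawFd], data: &[u8]) -> std::io::Result<usize>;

    // Called by the new daemon on `takeover` to get them back.
    fn restore(&mut self) -> std::io::Result<(Vec<RawFd>, Vec<u8>)>;
}
```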
Yadong Ding e4cf98b125 action: add oci in benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-07 10:19:53 +08:00
Yadong Ding b87814b557 smoke: support different snapshotters in bench
We can use overlayfs to test OCI V1 image.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-07 10:19:53 +08:00
Jiang Liu 50b8988751 storage: use connection pool for sqlite
SQLite connections are not thread safe, so use a connection pool to
support multi-threading.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
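A sketch of pooled SQLite access, assuming the r2d2 and r2d2_sqlite crates; each thread checks out its own connection instead of sharing one:

```
use r2d2::Pool;
use r2d2_sqlite::SqliteConnectionManager;

fn open_cas_pool(path: &str) -> Result<Pool<SqliteConnectionManager>, r2d2::Error> {
    let manager = SqliteConnectionManager::file(path);
    // Workers call pool.get() for a dedicated connection, sidestepping
    // SQLite's lack of connection-level thread safety.
    Pool::builder().max_size(8).build(manager)
}
```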
Jiang Liu 1c293cfefd storage: move cas db from util into storage
Move cas db from util into storage.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
Jiang Liu bfc171a933 util: refine database structure for CAS
Refine the sqlite database structure for storing CAS information.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
xwb1136021767 6ca3ca7dc0 utils: introduce sqlite to store CAS related information
Introduce sqlite to store CAS related information.

Signed-off-by: Huang Jianan <jnhuang95@gmail.com>
Signed-off-by: xwb1136021767 <1136021767@qq.com>
2023-12-06 15:54:09 +08:00
Yadong Ding 93ef71db79 action: use more images in benchmark
Include:
- python:3.10.7
- golang:1.19.3
- ruby:3.1.3
- amazoncorretto:8-al2022-jdk

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:14:17 +08:00
Yadong Ding ba8d3102ab smoke: support more images in container
Support: python, golang, ruby, amazoncorretto.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:14:17 +08:00
Yadong Ding eeddfff9a0 nydusify: fix deprecations
1. replace `github.com/docker/distribution` with `github.com/distribution/reference`
2. replace `EndpointResolver` with `BaseEndpoint`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:12:45 +08:00
Yadong Ding 11592893ea nydusify: update dependencies version
github.com/aliyun/aliyun-oss-go-sdk: `v2.2.6+incompatible` -> `v3.0.1+incompatible`
github.com/aws/aws-sdk-go-v2 `v1.17.6` -> `v1.23.5`
github.com/aws/aws-sdk-go-v2/config `v1.18.16` -> `v1.25.11`
github.com/aws/aws-sdk-go-v2/credentials `v1.13.16` -> `v1.16.9`
github.com/aws/aws-sdk-go-v2/feature/s3/manager `v1.11.56` -> `v1.15.4`
github.com/aws/aws-sdk-go-v2/service/s3 `v1.30.6` -> `v1.47.2`
github.com/containerd/nydus-snapshotter `v0.13.2` -> `v0.13.3`
github.com/docker/cli `v24.0.6+incompatible` -> `v24.0.7+incompatible`
github.com/docker/distribution `v2.8.2+incompatible` -> `v2.8.3+incompatible`
github.com/google/uuid `v1.3.1` -> `v1.4.0`
github.com/hashicorp/go-hclog `v1.3.1` -> `v1.5.0`
github.com/hashicorp/go-plugin `v1.4.5` -> `v1.6.0`
github.com/opencontainers/image-spec `v1.1.0-rc4` -> `v1.1.0-rc5`
github.com/prometheus/client_golang `v1.16.0` -> `v1.17.0`
github.com/sirupsen/logrus `v1.9.0` -> `v1.9.3`
github.com/stretchr/testify `v1.8.3` -> `v1.8.4`
golang.org/x/sync `v0.3.0` -> `v0.5.0`
golang.org/x/sys `v0.13.0` -> `v0.15.0`
lukechampine.com/blake3 `v1.1.5` -> `v1.2.1`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:12:45 +08:00
Yadong Ding 3f999a70c5 action: add `node:19.8` in benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 14:54:31 +08:00
Yadong Ding e0041ec9cb smoke: benchmark supports node
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 14:54:31 +08:00
Yadong Ding d266599128 docs: add benchmark badge with schedule event
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 11:38:31 +08:00
Yan Song e0fc6a1106 contrib: fix golangci lint for ctr-remote
Fix the lint check error by updating containerd package:

```
golangci-lint run
Error: commands/rpull.go:89:2: SA1019: log.G is deprecated: use [log.G]. (staticcheck)
	log.G(pCtx).WithField("image", ref).Debug("fetching")
	^
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-01 10:59:28 +08:00
Yan Song 838593fed3 nydusify: support --push-chunk-size option
Reference: https://github.com/containerd/containerd/pull/9405

Will replace containerd dep to upstream version if the PR can be merged.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-01 10:59:28 +08:00
Yadong Ding f1de095905 action: use same golang cache
setup-go@v4 uses the cache name `setup-go-Linux-ubuntu22-go-1.20.11-${hash}`.
`actions/cache@v3` restores the same content, so just restore the same cache.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 10:42:24 +08:00
Yadong Ding a1ad70a46c action: update setup-go to v4 and enabled caching
After updating setup-go to v4, it can cache by itself, and it selects the Go
version via `go.work`.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 08:39:32 +08:00
Yadong Ding 40489c7365 action: update rust cache version and share caches
1. update Swatinem/rust-cache to v2.7.0.
2. share caches between jobs in release, smoke, convert and benchmark.
3. save the rust cache only on the master branch in smoke tests.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 08:39:32 +08:00
wuheng 3f5c2c8bb9 docs: nydus-sandbox.yaml add uid
Signed-off-by: wuheng <wuheng@kylinos.cn>
2023-11-30 15:05:07 +08:00
Yadong Ding f5001bbdc3 misc: delete python version benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 0e10dbcaae action: use smoke BenchmarkTest in Benchmark
We should deprecate the Python version of the benchmark.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 822c935c77 smoke: add benchmark test
1. refactor performance_test, move clearContainer to tools.
2. add benchmark test.
The benchmark test runs an image in a container and saves metrics to a JSON file.
For example:
```json
{
  "e2e_time": 2747131,
  "image_size": 2107412,
  "read_amount": 121345,
  "read_count": 121
}
```

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 8ad7ae541d fix: smoke test-performance env var setup failure
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-29 17:12:43 +08:00
zyfjeff 96f402bfee Let targz type conversions support multi-stream gzip
code reference https://github.com/madler/zlib/blob/master/examples/zran.c

At present, zran and normal targz conversion do not consider support for
multi-stream gzip when decompressing, so there will be problems
when encountering this kind of image; this PR adds support for
multi-stream gzip.
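For illustration of the pitfall (not the zran indexing code itself), the `flate2` crate distinguishes the two cases: `GzDecoder` stops after the first gzip member, while `MultiGzDecoder` continues across concatenated members:

```rust
use flate2::read::MultiGzDecoder;
use std::io::Read;

// GzDecoder would silently stop after the first gzip member;
// MultiGzDecoder decodes all concatenated members, which is what
// multi-stream targz images require.
fn decode_all(gz: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut out = Vec::new();
    MultiGzDecoder::new(gz).read_to_end(&mut out)?;
    Ok(out)
}
```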

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba-inc.com>
2023-11-29 12:57:37 +08:00
zyfjeff 8247fe7b01 Update libz-sys & flate2 crates to latest versions
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba-inc.com>
2023-11-29 12:57:37 +08:00
Qinqi Qu 091697918c action: disable codecov patch check
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-11-27 09:00:33 +08:00
Yadong Ding f21fe67a81 action: use performance test in smoke test
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Yadong Ding c51ecd0e42 smoke: add performance test
Add a performance test to make sure there is no performance regression.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Yadong Ding 4c33d4e605 action: remove benchmark test in smoke
We will rewrite it as performance_test in Go.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Wenhao Ren 71dfc6ff7e builder: align file dump order with prefetch list, fix #1488
1. The dump order for prefetch files does not match the order specified in the prefetch list,
so let's fix it.
2. The construction of `Prefetch` was slow due to inefficient matching of prefetch patterns;
by adopting a more efficient data structure, this process has been accelerated (see the sketch below).
3. Unit tests for prefetch are added.
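A hypothetical sketch of the idea in item 2 (names are illustrative, not the builder's actual types): keep the prefetch list's original order for dumping while allowing constant-time pattern lookups:

```rust
use std::collections::HashMap;
use std::path::PathBuf;

struct PrefetchPatterns {
    order: Vec<PathBuf>,            // preserves the prefetch-list order for dumping
    index: HashMap<PathBuf, usize>, // O(1) membership/position checks while matching
}

impl PrefetchPatterns {
    fn new(list: Vec<PathBuf>) -> Self {
        let index = list.iter().cloned().zip(0..).collect();
        Self { order: list, index }
    }
}
```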

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-27 08:58:52 +08:00
Yadong Ding e2b131e4c6 go mod: sync deps by go mod tidy
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-22 14:50:20 +08:00
Yadong Ding 6f9551a328 git: add go.work.sum to .gitignore
`go.work.sum` changes too often and grows too large. We only need it to work well locally.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-22 14:50:20 +08:00
Yan Song 767adcf03a nydusify: fix unnecessary manifest index when copy one platform image
When using the command to copy an image with one specified platform:

```
nydusify copy --platform linux/amd64 --source nginx --target localhost:5000/nginx
```

We found the target image is in manifest index format, like:

```
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee",
      "size": 1778,
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    }
  ]
}
```

This is a bit strange; in fact just the manifest is enough, so the patch improves this.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-10 16:50:41 +08:00
Wenhao Ren c9fbce8ccf nydusd: add the config support of `amplify_io`
Add support for `amplify_io` in the nydusd config file
to configure read amplification.
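Illustrative only: how a config section might carry this field (the surrounding struct and the default value are assumptions, not taken from nydusd):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct RafsConfig {
    // Read amplification window in bytes; the default is assumed here
    // purely for illustration.
    #[serde(default = "default_amplify_io")]
    amplify_io: u32,
}

fn default_amplify_io() -> u32 {
    1024 * 1024
}
```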

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-09 14:15:18 +08:00
Wenhao Ren 468eeaa2cf rafs: rename variable names about prefetch configuration
Variable names related to prefetch are currently confusing,
so we merge variable names that have the same meaning,
while NOT affecting the field names read from the configuration file.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-09 14:15:18 +08:00
Peng Tao 46dca1785f rafs/builder: fix build on macos
These are u16 on macos.

Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
Peng Tao e06c1ca85f ut: stop testing some unit tests on macos
We are only testing blob cache and fscache in unit tests, and we are
testing the Linux device id. None of these work on macOS at all.
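One way to express that gating in Rust (illustrative test name): compile the Linux-only case out entirely on macOS instead of letting it fail there.

```rust
// Skipped at compile time on macOS; only built and run on Linux.
#[cfg(target_os = "linux")]
#[test]
fn test_linux_device_id() {
    // exercises Linux device-id behavior with no macOS equivalent
}
```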

Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
Peng Tao 3061050e20 smoke: add macos build test
Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
Yan Song 1c24213802 docs: update multiple snapshotter switch troubleshooting
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 10:28:10 +08:00
weizhen.zt b572a0f24e utils: bugfix for unit test case.
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt c608ef6231 storage: move toml to dev-dependencies
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 19185ed0d2 builder: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt cc5a8c5035 api: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 60db5334ff rafs: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt f75e0da3ad storage: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 9021871596 utils: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
Yan Song 360b59fa98 docs: unify object_prefix field for oss/s3 backend
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 09:47:48 +08:00
Yan Song ea5db01442 docs: some improvements for usage
1. buildkit upstream follow-up is slow, update to nydusaccelerator/buildkit;
2. runtime-level snapshotter usage needs an extra containerd patch;
3. add an s3 storage backend example to the nydusd doc page;

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 09:47:48 +08:00
hijackthe2 002b2f2c8a builder: fix assertion error by explicitly specifying type when building nydus in macos arm64 environment.
Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-07 13:42:04 +08:00
hijackthe2 89882a4002 storage: add some unit test cases
Some unit test cases are added for device.rs, meta/batch.rs, meta/chunk_info_v2.rs, meta/mod.rs, and meta/toc.rs in storage/src to increase code coverage.

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-07 09:13:12 +08:00
Yadong Ding 2fb293411d action: get latest tag by Github API
Use https://api.github.com/repos/Dragonflyoss/nydus/releases/latest to get the
latest tag of nydus, and use it in smoke/integration-test.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-07 09:08:14 +08:00
Junduo Dong 8b81a99108 contrib: correct parameter name
Signed-off-by: Junduo Dong <andj4cn@gmail.com>
2023-11-06 09:04:31 +08:00
hijackthe2 240af3e336 builder: add some unit test cases
Some unit test cases are added for compact.rs, lib.rs, merge.rs, stargz.rs, core/context.rs, and core/node.rs in builder/src to increase code coverage.

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 16:56:14 +08:00
hijackthe2 689900cc18 ci: add configurations to setup fscache
Since using `/dev/cachefiles` requires sudo mode, some environment variables are defined and we use `sudo -E` to pass them to sudo operations.

The script file for enabling fscache is misc/fscache/setup.sh

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
hijackthe2 cdc41de069 docs: add fscache configuation
Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
hijackthe2 3c57fc608c tests: add unit test case for blob_cache.rs, block_device.rs, fs_cache.rs, singleton.rs under service/src
1. In blob_cache.rs, two simple lines of code have been added to cover previously missed cases.
2. In block_device.rs, some test cases are added to cover the functions export(), block_size(), blocks_to_size(), and size_to_blocks().
3. In fs_cache.rs, some test cases are added to cover try_from() for the structs FsCacheMsgOpen and FsCacheMsgRead.
4. In singleton.rs, some test cases are added to cover initialize_blob_cache() and initialize_fscache_service(). In addition, fscache must be correctly enabled first, as the device file `/dev/cachefiles` will be used by initialize_fscache_service().

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
Yadong Ding 4d4ebe66c0 go work: support go workspace mode and sync deps
We have multiple Go modules in the repo, and Go has supported workspaces,
see https://go.dev/blog/get-familiar-with-workspaces.
Use `go work sync` to synchronize versions of the same dependencies across different modules.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-02 22:28:39 +08:00
Yan Song ac55d7f932 smoke: add basic nydusify copy test
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 16:50:48 +08:00
Yan Song a478fb6e76 nydusify: fix copy race issue
1. Fix lost namespace on containerd image pull context:

```
pull source image: namespace is required: failed precondition
```

2. Fix possible semaphore Acquire race on the same one context:

```
panic: semaphore: released more than held
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 16:50:48 +08:00
Yan Song ace7c3633d smoke: fix stable version for compatibility test
And let's make the stable version name an env var.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 10:35:00 +08:00
dependabot[bot] 75c87e9e42 build(deps): bump rustix in /contrib/nydus-backend-proxy
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.36.8 to 0.36.17.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.36.8...v0.36.17)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-02 08:47:06 +08:00
Peng Tao d638eb26e1 smoke: test v2.2.3 by default
Let's make stable v2.2.y a LTS one.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-11-01 11:44:25 +08:00
Yan Song 34a09d87ce api: fix unsupported dummy cache type
The dummy cache type is not handled in config validation:

```
ERROR [/src/fusedev.rs:595] service mount error: RAFS failed to handle request, Failed to load config: failed to parse configuration information`
ERROR [/src/error.rs:18] Stack:
   0: backtrace::backtrace::trace
   1: backtrace::capture::Backtrace::new

ERROR [/src/error.rs:19] Error:
        Rafs(LoadConfig(Custom { kind: InvalidInput, error: "failed to parse configuration information" }))
        at service/src/fusedev.rs:596
ERROR [src/bin/nydusd/main.rs:525] Failed in starting daemon:
Error: Custom { kind: Other, error: "" }
```
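A minimal sketch of the kind of missing branch involved (the set of cache-type strings is illustrative, not necessarily exactly what nydus accepts):

```rust
use std::io::{Error, ErrorKind, Result};

// Accept the dummy cache type alongside the other cache types during
// config validation instead of rejecting it as unparseable.
fn validate_cache_type(t: &str) -> Result<()> {
    match t {
        "blobcache" | "fscache" | "dummycache" | "" => Ok(()),
        other => Err(Error::new(
            ErrorKind::InvalidInput,
            format!("unsupported cache type: {}", other),
        )),
    }
}
```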

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 18:00:45 +08:00
Yadong Ding e64b912a10 action: rename images-service to nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-10-31 14:10:16 +08:00
Yadong Ding 44149519d1 docs: replace images-service to nydus in links
Since https://github.com/dragonflyoss/nydus/issues/1405, we had changed repo name to nydus.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-10-31 14:10:16 +08:00
Yan Song 55bba9d80b tests: remove useless rust smoke test
The rust integration test has been replaced with the go integration
test in smoke/tests, let's remove it.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 12:14:56 +08:00
Yan Song 47b62d978c contrib: remove unmaintained python integration test
The python integration test has gone too long without maintenance; it should
be replaced with the go integration test in smoke/tests.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 12:14:56 +08:00
Qinqi Qu f55d2c948f deps: bump google.golang.org/grpc to 1.59.0
1. Fix gRPC-Go HTTP/2 Rapid Reset vulnerability

Please refer to:
https://github.com/advisories/GHSA-m425-mq94-257g

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 16:13:49 +08:00
Qinqi Qu 69ddef9f4c smoke: replaces the io/ioutil API which was deprecated in go 1.19
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 15:19:30 +08:00
Qinqi Qu cb458bdea4 contrib: upgrade to go 1.20
Keep consistent with other components in the container ecosystem;
for example, containerd is using Go 1.20.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 15:19:30 +08:00
YuQiang 46fc7249b4 update: integrate acceld cache module
Signed-off-by: YuQiang <y_q_email@163.com>
2023-10-27 14:14:51 +08:00
linchuan 6dc9144193 enhance error handling with thiserror
Signed-off-by: linchuan <linchuan.jh@antgroup.com>
2023-10-27 10:27:24 +08:00
hijackthe2 3bb124ba77 tests: add unit test case for service/src/upgrade.rs
test type transformation between struct FailoverPolicy and String/&str
2023-10-24 18:48:51 +08:00
liyaojie acb689f19b CI: fix the failed fsck patch apply in CI
Signed-off-by: liyaojie <lyj199907@outlook.com>
2023-10-24 15:40:42 +08:00
Yan Song 9632d18e0b api: fix the log message print in macro
Regardless of whether debug compilation is enabled, we should
always print error messages. Otherwise, some error logs may be
lost, making it difficult to debug the code.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-20 10:46:42 +08:00
Yan Song 0cad49a6bd storage: fix compatibility on fetching token for registry backend
The registry backend received an unauthorized error from the Harbor registry
when fetching the registry token by HTTP GET method; the bug was introduced
by https://github.com/dragonflyoss/image-service/pull/1425/files#diff-f7ce8f265a570c66eae48c85e0f5b6f29fdaec9cf2ee2eded95810fe320d80e1L263.

We should insert the basic auth header to ensure the compatibility of
fetching token by HTTP GET method.

This refers to containerd implementation: dc7dba9c20/remotes/docker/auth/fetch.go (L187)

The change has been tested for Harbor v2.9.
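A hedged sketch of the behavior using reqwest (nydus has its own HTTP stack; all names here are illustrative): keep the basic auth header on the GET token request so registries like Harbor accept it.

```rust
fn fetch_token(
    client: &reqwest::blocking::Client,
    realm: &str,
    service: &str,
    scope: &str,
    user: &str,
    pass: &str,
) -> reqwest::Result<String> {
    client
        .get(realm)
        .query(&[("service", service), ("scope", scope)])
        .basic_auth(user, Some(pass)) // the header this commit restores
        .send()?
        .text()
}
```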

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-20 10:46:42 +08:00
Qinqi Qu 5c63ba924e deps: bump golang.org/x/net to v0.17.0
Fix the following 2 issues:
1. HTTP/2 rapid reset can cause excessive work in net/http
2. Improper rendering of text nodes in golang.org/x/net/html

Please refer to:
https://github.com/dragonflyoss/image-service/security/dependabot/95
https://github.com/dragonflyoss/image-service/security/dependabot/96
https://github.com/dragonflyoss/image-service/security/dependabot/97
https://github.com/dragonflyoss/image-service/security/dependabot/98
https://github.com/dragonflyoss/image-service/security/dependabot/99
https://github.com/dragonflyoss/image-service/security/dependabot/100

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-13 03:59:27 -05:00
zyfjeff 9ab1ec1297 Add --blob-cache-dir arg use to generate raw blob cache and meta
Generate the blob cache and blob meta through the --blob-cache-dir parameter,
so that nydusd can be started directly from these two files without
going to the backend to download. This can improve the performance
of data loading in localfs mode.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-10-10 05:19:53 -05:00
Yan Song 6ea22ccd8a docs: update containerd integration tutorial
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-08 20:50:30 -05:00
Yan Song a9678d2c97 misc: remove outdated example doc
These docs and configs are poorly maintained, and they can be
replaced by the doc https://github.com/dragonflyoss/image-service/blob/master/docs/containerd-env-setup.md.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-08 20:50:30 -05:00
lihuahua123 d7b1851f42 storage: fix auth compatibility for registry backend
Signed-off-by: lihuahua123 <771725652@qq.com>
2023-09-27 10:49:32 +08:00
YuQiang aa9c95ab42 feat: notify inconsistent fs version problem with exit code
If acceld converts with a different fs version cache, it leads to an inconsistent fs version problem when merging into the bootstrap layer. So we need to notify acceld that an inconsistent version occurred and handle this error.

Signed-off-by: YuQiang <y_q_email@163.com>
2023-09-25 21:08:40 +08:00
zyfjeff b777564f45 Always use blob id as the name of the filecache when using separate blobs
Before, we only had one blob, called a data blob, so when generating a filecache
we always used the id of this blob as the name of the filecache.
Later, after supporting separate blobs, we have two blobs, one a data blob,
the other a meta blob; in order to maintain compatibility,
we should always use the data blob id as the filecache name, not the meta blob id.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-09-25 11:45:46 +08:00
Junduo Dong 0cdf4725ac nydus-image: Fix blobs unpack bug
Signed-off-by: Junduo Dong <dongjunduo.djd@antgroup.com>
2023-09-25 10:40:41 +08:00
dependabot[bot] 35cd712d96 build(deps): bump github.com/cyphar/filepath-securejoin
Bumps [github.com/cyphar/filepath-securejoin](https://github.com/cyphar/filepath-securejoin) from 0.2.3 to 0.2.4.
- [Release notes](https://github.com/cyphar/filepath-securejoin/releases)
- [Commits](https://github.com/cyphar/filepath-securejoin/compare/v0.2.3...v0.2.4)

---
updated-dependencies:
- dependency-name: github.com/cyphar/filepath-securejoin
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-25 10:39:29 +08:00
dependabot[bot] 8b598f0060 build(deps): bump github.com/cyphar/filepath-securejoin
Bumps [github.com/cyphar/filepath-securejoin](https://github.com/cyphar/filepath-securejoin) from 0.2.3 to 0.2.4.
- [Release notes](https://github.com/cyphar/filepath-securejoin/releases)
- [Commits](https://github.com/cyphar/filepath-securejoin/compare/v0.2.3...v0.2.4)

---
updated-dependencies:
- dependency-name: github.com/cyphar/filepath-securejoin
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-25 10:39:19 +08:00
Junduo Dong 148cf96782 Fix no export subcmd panic on mac
Signed-off-by: Junduo Dong <dongjunduo.djd@antgroup.com>
2023-09-25 10:37:31 +08:00
Lin Wang 278915b4eb nydus-image: Optimize Chunkdict Save
Refactor the Deduplicate implementation to only
initialize config when inserting chunk data.
Simplify code for better maintainability.

Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2023-09-22 16:33:53 +08:00
Yan Song d2fcfcd56d action: update test branch for integration
We are focusing on v2.2 maintenance, so let's change the test branch
from `stable/v2.1` to `stable/v2.2`.

It also fixes the broken integration test:
https://github.com/dragonflyoss/image-service/actions/runs/6153232407

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-09-13 17:53:55 +08:00
Yadong Ding a35e634202 misc: rename vault from library to hashicorp
Upcoming in Vault 1.14, HashiCorp will stop publishing official Docker Hub images and publish only their Verified Publisher images.
Users of Docker images should pull from hashicorp/vault instead of vault.
Verified Publisher images can be found at https://hub.docker.com/r/hashicorp/vault.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-09-08 09:45:59 +08:00
Jiang Liu 919e8ac534 nydus-overlayfs: filter option "io.katacontainers.volume"
Filter mount option "io.katacontainers.volume", which is a superset
of "extraoptions".

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-09-06 14:25:13 +08:00
Yan Song 1d93f129c9 storage: fix chunk map compatibility
The blob cache files of nydusd v2.2 and <=v2.1 are in different
formats, which are not compatible. We should use different chunk map
files for them, in order to upgrade or downgrade smoothly.

For the nydusd <=v2.1, the files in blob cache directory:

```
$blob_id
$blob_id.chunk_map
```

For the nydusd v2.2, the files in blob cache directory:

```
$blob_id.blob.data
$blob_id.chunk_map
```

NOTE: nydusd (v2.2) may use the chunk map file of nydusd (<=v2.1),
which will cause corrupted blob cache data to be read.

For the nydusd of current patch, the files in blob cache directory:

```
$blob_id.blob.data
$blob_id.blob.data.chunk_map
```

NOTE: this will discard the old blob cache data and chunk map files.
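A sketch of the naming scheme described above (an illustrative helper, not the nydus code):

```rust
use std::path::{Path, PathBuf};

// The chunk map name is now derived from the full data-file name, so it
// is distinct from the <=v2.1 "<blob_id>.chunk_map" name.
fn cache_paths(dir: &Path, blob_id: &str) -> (PathBuf, PathBuf) {
    let data = dir.join(format!("{}.blob.data", blob_id));
    let chunk_map = dir.join(format!("{}.blob.data.chunk_map", blob_id));
    (data, chunk_map)
}
```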

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-09-05 13:19:42 +08:00
zyfjeff 631db29759 Add a seekable method for TarReader, used to determine whether the current reader supports seek
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 15:38:40 +08:00
zyfjeff 55d8ac12f1 Add a seekable method for TarReader, used to determine whether the current reader supports seek
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 15:38:40 +08:00
zyfjeff d54c43f59a add --original-blob-ids args for merge
By default, the merge command gets the name of the original
blob from the bootstrap name; add a CLI arg to specify it instead.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 14:07:17 +08:00
zyfjeff 0e2d72c59b bugfix: do not fill 0 buffer, and skip validate features
1. Resetting the buffer to 0 will cause a race under concurrency.

2. Previously, the second validate_header did not actually take effect. Now
it is repaired, and it was found that the features of the blob info do not
set the --inline-bootstrap position to true, so the check of features is
temporarily skipped. Essentially this needs to be fixed in nydus-image
upstream.

Signed-off-by: zhaoshang <zhaoshangsjtu@linux.alibaba.com>
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 12:02:07 +08:00
zyfjeff 49cc3f9c73 Support using /dev/stdin as the SOURCE path for image build
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 11:56:17 +08:00
zyfjeff 1abf0aeb84 Change /contrib/**/.vscode to **/.vscode
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-25 16:43:16 +08:00
zyfjeff 7455cdd233 Update cargo.lock to latest
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-25 16:43:16 +08:00
zyfjeff e5798eb228 Add vscode to gitignore for all contrib subdir
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-25 16:43:16 +08:00
Yan Song d9f8fa9c99 docs: add nydusify copy usage
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-08-25 14:07:28 +08:00
Yan Song e4339ee2a2 nydusify: introduce copy subcommand
`nydusify copy` copies an image from a source registry to a target
registry; it also supports specifying a source backend storage.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-25 14:07:28 +08:00
David Baird 156ba6a8a3 Fix image-create with ACLs. Fixes #1394.
Signed-off-by: David Baird <dhbaird@gmail.com>
2023-08-17 10:28:22 +08:00
Qinqi Qu f3cdd071b0 deps: change tar-rs to upstream version
Upstream tar-rs merged our fix for reading large uids/gids from
the PAX extension, so change tar-rs back to the upstream version.

Update tar-rs dependency xattr to 1.0.1 as well.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-08 23:05:18 +08:00
Qinqi Qu 32143077d6 cargo: update rafs/storage/api/utils in cargo.lock
This change will be automatically generated during make.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-08 23:05:18 +08:00
Zhao Yuan 8b59e192bb nydusify chunkdict generate --sources
Add the 'nydus-image chunkdict save' command
with the "--sources" option followed by nydus images in a registry
(e.g., 'registry.com/busybox:nydus-v1,registry.com/busybox:nydus-v2')

Signed-off-by: Zhao Yuan <1627990440@qq.com>
2023-08-08 15:34:07 +08:00
Lin Wang 8a9302402d nydus-image: Store chunk and blob metadata
Add functionality to store chunk and blob metadata
from nydus source images.
Use the 'nydus-image chunkdict save' command
with the '--bootstrap' option followed by the
path to the nydus bootstrap file (e.g., '~/output/nydus_bootstrap')

Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2023-08-08 15:32:02 +08:00
Lin Wang 4d0c0c08ff cargo: Add rusqlite package to dependencies
Updates the 'Cargo.toml' and 'Cargo.lock' files to include the 'rusqlite' package,
enabling interaction with SQLite databases in the project.

Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2023-08-08 15:32:02 +08:00
Peng Tao 34ee8255de cargo: bump rafs/storage/api/utils crate version
To publish them on crates.io.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-08-07 22:11:16 +08:00
Wei Zhang f41934993f service: print more error message
Some error messages were swallowed, which confuses users. For
example, for RAFSv6, we need to set the blobcache config in `localfs.json`
(following the docs tutorial); before this modification, the error message
indicated nothing:
```
ERROR [src/bin/nydusd/main.rs:525] Failed in starting daemon: Invalid
argument (os error 22)
```

After this modification, we get clearer error message:

```
ERROR [/src/fusedev.rs:595] service mount error: RAFS failed to handle
request, Configure("Rafs v6 must have local blobcache configured")
```

Signed-off-by: Wei Zhang <weizhang555.zw@gmail.com>
2023-08-04 16:31:37 +08:00
Xuewei Niu 43c737d816 deps: Bump dependent crate versions
This pull request is mainly for updating vm-memory and vmm-sys-util.

The affected crates include:

- vm-memory: from 0.9.0 to 0.10.0;
- vmm-sys-util: from 0.10.0 to 0.11.0;
- vhost: from 0.5.0 to 0.6.0;
- virtio-queue: from 0.6.0 to 0.7.0
- fuse-backend-rs: from 0.10.4 to 0.10.5
- vhost-user-backend: from 0.7.0 to 0.8.0

Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
2023-08-04 14:06:50 +08:00
Qinqi Qu a295c5429b deps: update tar-rs to handle very large uid/gid in image unpack
Update tar-rs to support reading large uid/gid values from PAX extensions, to
fix very large UIDs/GIDs (>=2097151, the limit of USTAR tar) being lost in
PAX-style tars during unpack.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-03 16:39:33 +08:00
Yan Song 48762896e5 nydusify: support --with-referrer option
With this option, we can track all nydus images associated with
an OCI image. For example, in Harbor we can cascade to show nydus
images linked to an OCI image, deleting the OCI image can also delete
the corresponding nydus images. At runtime, nydus snapshotter can also
automatically upgrade an OCI image run to nydus image.

Prior to this PR, we had enabled this feature by default. However,
it is now known that Docker Hub does not yet support Referrer.

Therefore, this option is added with the feature disabled by default,
to ensure broad compatibility with various image registries.

Fix https://github.com/dragonflyoss/image-service/issues/1363.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-01 13:26:18 +08:00
Yan Song 69e6874d2c dep: upgrade nydus-snapshotter & acceleration-service package
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-01 13:26:18 +08:00
Jiang Liu ddb4627b7a builder: optimize tarfs building speed by skipping file content
The tarfs crate provides a seekable reader to iterate entries in a tar
file, so optimize tarfs building speed by skipping file content.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-31 10:58:08 +09:00
Bin Tang 82ebd11ab8
parse image pull auth from env (#1382)
* nydusd: parse image pull auth from env

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>

* docs: introduce IMAGE_PULL_AUTH env

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>

* fs: add test for filling auth

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>

---------

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-07-27 11:39:28 +08:00
Zhang Tianci 0c01dacf2e builder: add a trace log for building v5 image
Signed-off-by: Zhang Tianci <zhangtianci.1997@bytedance.com>
2023-07-25 14:42:05 +08:00
Zhang Tianci 471a7370cc nydusctl: fixup umount argument usage
Signed-off-by: Zhang Tianci <zhangtianci.1997@bytedance.com>
2023-07-25 14:42:05 +08:00
Yan Song 58842cbfd1 storage: adjust token refresh interval automatically
- Make registry mirror log pretty;
- Adjust token refresh interval automatically;

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-25 14:40:42 +08:00
Yan Song 01c58e00b3 storage: remove auth_through option for registry mirror
The auth_through option adds user burden to configure the mirror
and understand its meaning, and since we have optimized handling
of concurrent token requests, this option can now be removed.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-25 14:40:42 +08:00
Yan Song 4eda4266dc storage: implement simpler first token request
Nydusd uses a registry backend which generates a surge of blob requests without
auth tokens on initial startup. This caused mirror backends (e.g. dragonfly)
to process very slowly; this commit fixes the problem.

It implements waiting for the first blob request to complete before making other
blob requests, this ensures the first request caches a valid registry auth token,
and subsequent concurrent blob requests can reuse the cached token.

This change is worthwhile to reduce concurrent token requests, it also makes the
behavior consistent with containerd, which first requests the image manifest and
caches the token before concurrently requesting blobs.
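A minimal sketch of the serialization idea (not the actual nydus implementation): hold a lock across the first fetch so concurrent callers wait for, then reuse, the cached token.

```rust
use std::sync::Mutex;

struct TokenCache {
    token: Mutex<Option<String>>,
}

impl TokenCache {
    fn get_or_fetch<F: FnOnce() -> String>(&self, fetch: F) -> String {
        // Holding the lock across the first fetch blocks concurrent
        // callers, so only one token request hits the registry on cold
        // start; everyone else reuses the cached token.
        let mut guard = self.token.lock().unwrap();
        if let Some(t) = guard.as_ref() {
            return t.clone();
        }
        let t = fetch();
        *guard = Some(t.clone());
        t
    }
}
```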

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-25 14:40:42 +08:00
Jiang Liu be52ebd28b storage: support manually add blob object to localdisk backend driver
Enhance the localdisk storage backend, so we can manually add blob
objects in the disk, in addition to discovering blob objects by
scanning GPT partition table.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-18 10:07:20 +08:00
Jiang Liu d5cdc78d8e storage: use File instead of RawFd to avoid possible race conditions
Use File instead of RawFd in struct LocalDiskBlob to avoid possible
race conditions.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-18 10:07:20 +08:00
Jiang Liu d834fba87a storage: introduce feature `backend-localdisk-gpt`
Introduce feature `backend-localdisk-gpt` for localdisk storage backend,
so it can be optionally disabled.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-18 10:07:20 +08:00
Yiqun Leng 4e3c954702 change to a new nydus image for ci test
The network is not stable when pulling the old image, which may result in
ci test failure, so use the new image instead.

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-07-13 10:19:57 +08:00
Peng Tao 3de2025495 Makefile: allow to build debug version
We still build release version by default, but make sure that `make build`
only generates a debug version nydusd.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-07-12 15:26:10 +08:00
ccx1024cc c8a39c876a
fix: amplify io is too large to hold in fuse buffer (#1311)
* fix: amplify io is too large to hold in fuse buffer

The FUSE request buffer is fixed at `FUSE_KERN_BUF_SIZE * pagesize() + FUSE_HEADER_SIZE`. When amplify_io is larger than that, FuseDevWriter suffers from a smaller buffer. As a result, an invalid data error is returned.

Reproduction:
    run nydusd with 3MB amplify_io
    error from random io:
        reply error header OutHeader { len: 16, error: -5, unique: 108 }, error Custom { kind: InvalidData, error: "data out of range, available 1052656 requested 1250066" }

Details:
    size of fuse buffer = 1052656 + 16 (size of inner header) = 256(page number) * 4096(page size) + 4096(fuse header)
    let amplify_io = min(user_specified, fuseWriter.available_bytes())

Resolution:
    This PR is not the best implementation, but it is independent of modifications to [fuse-backend-rs]("https://github.com/cloud-hypervisor/fuse-backend-rs").
    In the future, evaluation of amplify_io will be replaced with [ZeroCopyWriter.available_bytes()]("https://github.com/cloud-hypervisor/fuse-backend-rs/pull/135").
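A sketch of the resolution (names illustrative): clamp the configured amplification to what the writer can actually hold.

```rust
// Per the details above, the writer can hold
// 256 pages * 4096 + 4096 (fuse header) - 16 (inner header) = 1052656 bytes,
// so a 3MB amplify_io must be clamped down to that.
fn effective_amplify_io(user_specified: usize, writer_available: usize) -> usize {
    user_specified.min(writer_available)
}
```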

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>

* feat: e2e for amplify io larger than fuse buffer

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>

---------

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
Co-authored-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-12 08:59:50 +08:00
泰友 31f2170bb9 fix: large files broke prefetch
Files larger than 4G lead to a prefetch panic, because the max blob io
range is smaller than 4G. This PR changes the blob io max size from u32 to
u64.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 19:09:41 +08:00
泰友 9bb51517be fix: deprecated docker field leads to failure of nydusify check
`NydusImage.Config.Config.ArgsEscaped` is present only for legacy compatibility
with Docker and should not be used by new image builders. Nydusify (1.6 and
above) ignores it, which is expected behavior.

This PR ignores comparison of it in nydusify checking, where it previously led to failure.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 09:49:19 +08:00
xwb1136021767 0c225cae10 nydus-image: add unit test for setting default compression algorithm
Signed-off-by: xwb1136021767 <weibinxue@foxmail.com>
2023-07-10 22:24:56 +08:00
Yiqun Leng 1ae9800512 fix incidental bugs in ci test
1. sleep for a while after restarting containerd
2. only show detailed logs when a test fails

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-07-10 16:52:51 +08:00
kangkexi ba3c8fae62 Update docs
Signed-off-by: kangkexi <kangkexi@megvii.com>
2023-07-07 14:49:02 +08:00
kangkexi ad92996726 update docs about using runtime-level snapshotter
Signed-off-by: kangkexi <kangkexi@megvii.com>
2023-07-07 14:49:02 +08:00
kangkexi b2e507350d docs: add containerd runtime-level snapshotter usage for nydus
Signed-off-by: kangkexi <kangkexi@megvii.com>
2023-07-07 14:49:02 +08:00
taohong 98834dd4ef tests: add encrypt integration test
Add an image encryption integration test case to the smoke test.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-07-06 13:52:50 +08:00
taohong 94c6378ed1 feat: nydus support encrypted images
Extend native nydus v6 to support handling encrypted
container images:
* An encrypted nydus image is composed of encrypted
bootstrap and chunk-level encrypted data blobs. The
bootstrap is encrypted by the Ocicrypt and the data
blobs are encrypted by aes-128-xts with randomly
generated key and iv at chunk-level.
* For every data blob, all the chunk data, compression
context table and compression context table header
are encrypted.
* The chunk encryption key and iv are stored in the blob
info reusing some items of the structure to save reserved
space.
* Encrypted chunk data will be decrypted and then be
decompressed while be fetched by the storage backend.
* Encrypted or unencrypted blobs can be merged together.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-07-06 13:52:50 +08:00
Qinqi Qu 62643677d0 action: reduce the number of times the codecov tool sends comments
This patch alleviates the problem of codecov frequently sending
emails to users when a PR is updated.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-07-06 12:19:00 +08:00
泰友 5db3f0ac33 fix: merge io from same blob panic
When merging io from the same blob with different ids, an assertion breaks.
Images without blob deduplication suffer from it.

This PR removes the assertion that requires merging within the same blob index.
By design this makes sense, because different blob layers may share the same
blob file. A continuous read from the same blob across different layers is
helpful for performance.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-06 12:12:28 +08:00
Qinqi Qu 7d5cb1adfd docs: update the OpenAnolis kernel installation guide in fscache doc.
OpenAnolis has supported fscache mode since kernel version
4.19.91-27 or 5.10.134-12.

Fix: #1342

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-07-04 20:53:53 +08:00
Yan Song c1247fdce1 nydusify: bump github.com/goharbor/acceleration-service v0.2.5
To bring some internal changes and features:

https://github.com/goharbor/acceleration-service/releases/tag/v0.2.5

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-03 18:37:25 +08:00
Jiang Liu 662117a065 rafs: add special handling of invalid zero blob index
The rafs v6 format reserves blob index 0 for meta blobs, so ensure
invalid zero blob index doesn't cause abnormal behavior.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-27 17:24:17 +08:00
lihuahua123 65cf530f64 nydusify: update the doc of nydusify about the subcommand mount
Signed-off-by: lihuahua123 <771725652@qq.com>
2023-06-25 09:38:39 +08:00
Jiang Liu ee433ab1d7 dep: upgrade base64 to v0.21
Upgrade base64 to v0.21, to avoid multiple versions of the base64
crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 11:56:32 +08:00
Jiang Liu 0f628fb804 storage: introduce feature flag `prefetch-rate-limit`
Introduce feature flag `prefetch-rate-limit` to reduce dependencies
of the nydus-service crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 11:56:32 +08:00
Jiang Liu 7ec8fd75b1 api: introduce feature `error-backtrace`
Introduce feature `error-backtrace` to reduce the dependencies of the
nydus-service crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 11:56:32 +08:00
Jiang Liu ea32ee4408 dep: upgrade openssl to 0.10.55 to fix cve warnings
error[vulnerability]: `openssl` `X509VerifyParamRef::set_host` buffer over-read
    ┌─ /github/workspace/Cargo.lock:122:1
    │
122 │ openssl 0.10.48 registry+https://github.com/rust-lang/crates.io-index
    │ --------------------------------------------------------------------- security vulnerability detected
    │
    = ID: RUSTSEC-2023-0044
    = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0044
    = When this function was passed an empty string, `openssl` would attempt to call `strlen` on it, reading arbitrary memory until it reached a NUL byte.
    = Announcement: https://github.com/sfackler/rust-openssl/issues/1965
    = Solution: Upgrade to >=0.10.55

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 10:07:00 +08:00
Jiang Liu f8b561aacc rafs: enhance rafs to support inspecting rafs v6 raw block image
The rafs core assumes metadata is 4k-aligned, so it fails to inspect
raw block images generated from tarfs images, which are 512-byte
aligned.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 10:07:00 +08:00
Jiang Liu 2b6d6ea2db service: refine block device implementation
Refine block device implementation by:
1) limit number of blocks to u32::MAX
2) rename BlockDevice::new() to new_with_cache_manager()
3) introduce another implementation of BlockDevice::new()

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 10:07:00 +08:00
Yadong Ding 1339af4996 gha: add some descriptions for convert ci
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-06-20 16:39:36 +08:00
lihuahua123 66761f2ddd Nydusify: fix some bug about the subcommand mount of nydusify
- The `nydusify mount` subcommand doesn't require the `--backend-type` and `--backend-config` options when the backend is a registry.
    - The way to resolve this is to get the `--backend-type` and `--backend-config` options from the docker configuration.
    - Also, we have refactored the code of the checker module in order to reuse it

Signed-off-by: lihuahua123 <771725652@qq.com>
2023-06-16 15:27:34 +08:00
killagu 3b71868e08 ci(release): fix macos nydusd rust target
Cannot use `declare -A` in the macOS shell.

Signed-off-by: killagu <killa123@126.com>
2023-06-16 15:23:05 +08:00
Jiang Liu 9d89b8d193 service: prepare for publishing v0.2.1
Prepare for publishing v0.2.1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu 9a1524b6be rafs: publish nydus-rafs v0.3.1
Publish nydus-rafs v0.3.1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu aa2305beb1 storage: publish nydus-storage v0.6.3
Publish nydus-storage v0.6.3.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu fdd99e3962 utils: publish nydus-utils v0.4.2
Publish nydus-utils v0.4.2.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu caa7d055c4 api: publish v0.3.0
Publish nydus-api v0.3.0.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu cce78d4663 builder: split out builder into a dedicated crate
Split out builder into a dedicated nydus-builder crate, to reduce
dependencies of the nydus-rafs crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 11:47:45 +08:00
Wenhao Ren 3ab3a759b1 smoke: add integration test for batch chunk mode
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
Wenhao Ren 2ec92e1513 storage: add runtime prefetch support for batch chunk
Add prefetch range calculation for batch chunk.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
Wenhao Ren 827d953b84 storage: add runtime support for batch chunk
Add region calculation and batch chunk decompression capability for nydusd.
Do not support prefetch for batch chunk.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
Wenhao Ren c089873d61 storage: add basic runtime support for batch chunk
1. Add function utils for batch chunk for the help to get batch information.
2. Implement `Default` for `BlobCompressionContext` for simplification.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
Wenhao Ren 830bfacdff rafs: Terminate the build of a buffered batch chunk if a large chunk is encountered
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
killagu b4c76cf2dd ci(release): add macos arm64 artifact
Signed-off-by: killagu <killa123@126.com>
2023-06-12 15:28:40 +08:00
Liu Bo 5b15922ea0 Rafs: Add missing prefix of hex
Without hex prefix, it confuses me a little bit when debugging flags problems.

Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
2023-06-02 13:57:42 +08:00
Huang Jianan a88a2e88aa builder: set the default compression algorithm for meta ci to lz4
We set the compression algorithm of meta ci to zstd by default, but there
is no option for nydus-image to configure it.

This could cause compatibility problems on the nydus version that does
not support zstd. Let's reset it to lz4 by default.

Signed-off-by: Huang Jianan <jnhuang95@gmail.com>
2023-06-02 10:12:17 +08:00
Jiang Liu 08c3d0fa83 builder: fix a compilation failure on macos
Fix a compilation failure on macos.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-28 19:58:01 +08:00
Jiang Liu 92e6340a6f builder: correctly generate nid for v6 inodes
The `nid` is not actually used yet, but we should still generate it
with correct value.

Fixes: https://github.com/dragonflyoss/image-service/issues/1301

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-28 19:58:01 +08:00
Jiang Liu 23c61104a9 smoke: use v2.1.6 instead of v2.1.4
Update smoke tests to use the latest v2.1.6 instead of v2.1.4.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-25 18:29:39 +08:00
Jiang Liu 8517120a47 error: merge crate nydus-error into nydus-utils and nydus-api
Merge crate nydus-error into nydus-utils and nydus-api, to reduce
number of crates.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-24 21:35:43 -07:00
Jiang Liu 43e651802d builder: delay free data structure to reduce image build time
According to perf flame graph, it takes a long time to free objects
used by image builder. In most common cases, the builder will only
run once and exit, so it's unnecessary to free those used objects.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-25 08:50:33 +08:00
Jiang Liu 8a413345ac rafs: enhance blobfs to support read() operation
Enhance blobfs to support read() operation, in addition to DAX.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-24 17:34:58 -07:00
Jiang Liu 07437542c3 rafs: cache blobfs inode information
Cache blobfs inode information to avoid opening the file on every dax
window operation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-24 17:34:58 -07:00
Jiang Liu 17573f8610 rafs: use rwlock instead of mutex for blobfs
Use rwlock instead of mutex for blobfs, to avoid serialization.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-24 17:34:58 -07:00
Jiang Liu 809f8d9727 rafs: optimize the way to build RAFS filesystem
The current way to build RAFS filesystem is:
- build the lower tree from parent bootstrap
- convert the lower tree into an array
- build the upper tree from source
- merge the upper tree into the lower tree
- convert the merged tree into another array
- dump nodes from the array

Now we optimize it as:
- build the lower tree from parent bootstrap
- build the upper tree from source
- merge the upper tree into the lower tree
- dump the merged tree

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-23 10:47:07 +08:00
Jiang Liu 80bd7dca34 rafs: rename set_4k_aligned() to set_aligned()
Rename set_4k_aligned() to set_aligned(), for tarfs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-23 10:47:07 +08:00
Qinqi Qu 17fd41c9a0 action: fix failing test test_large_file for v5 image temporarily
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-22 20:45:01 +08:00
Qinqi Qu 5be4be9ab5 action: fix pytest failing to install in integration tests.
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-22 20:45:01 +08:00
Jiang Liu 5879c91864 blobfs: merge crate blobfs into crate rafs
Merge crate blobfs into crate rafs, to reduce number of crates.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-22 11:13:21 +08:00
Qinqi Qu 197795d02a cargo: fix fuse-backend-rs dependency in cargo.toml
The previous change #1283 only upgraded the versions in Cargo.lock;
we should also upgrade Cargo.toml.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-18 22:30:45 -07:00
Qinqi Qu 8ebccf2e69 docs: add pull request and issue templates
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-19 11:26:35 +08:00
Jiang Liu d751679923 api: merge crate nydus-app into crate nydus
Merge crate nydus-app into crate nydus, to reduce number of crates.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-18 01:46:37 -07:00
Yadong Ding faa41163b7 action: benchmark add description
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-17 22:52:08 +08:00
Peng Tao 29260393ed cargo: update fuse-backend-rs dependency
To fetch several critical fixes.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-05-17 17:27:33 +08:00
Yadong Ding 29b5d5dfc4 action: clean cache after branch closes
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-17 17:24:01 +08:00
Yadong Ding 5f76e8bd9c action: convert ci show metrics during conversion
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-17 11:21:11 +08:00
Yadong Ding 49c2a9f100 fix: nydusify save metric to specify the file path
We should not save metrics in the work directory, otherwise they will be cleared.
We need the user to specify the file path, so just change output-json to a string opt.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-17 11:21:11 +08:00
Peng Tao 0a88fda5a2 smoke: no need to run vet and lint
There is no need to run vet and lint on the test code.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-05-16 17:27:01 +08:00
Jiang Liu 293b032d7f rafs: add root inode into inode map when building RAFS
Add root inode into inode map when building RAFS filesystem,
so RAFS v5 gets correct inode number counts.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-16 16:03:37 +08:00
Jiang Liu 251730990e rafs: avoid a debug_assert related to v5 amplify io
In function RafsSuper::amplify_io(), if the next inode `ni` is
zero-sized, the debug assertion in function calculate_bio_chunk_index()
(rafs/src/metadata/layout/v5.rs) will get triggered. So zero-sized
files should be skipped by amplify_io().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-16 16:03:37 +08:00
Yadong Ding c7c9fad14a nydusify: add new option output-json
During conversion, we can collect metrics such as image size and conversion time.
Nydusify can dump the metrics to a local file if the user needs it.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-15 22:52:13 +08:00
Yadong Ding 3c31e133ac nydusify: update acceleration-service version
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-15 22:52:13 +08:00
Yadong Ding db0cc412bb action: benchmark more images on schedule
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-15 15:00:18 +08:00
Yadong Ding 6139399f5d misc: benchmark delete tmp file
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-15 15:00:18 +08:00
dependabot[bot] 7eaea415f7 build(deps): bump github.com/docker/distribution in /contrib/nydusify
Bumps [github.com/docker/distribution](https://github.com/docker/distribution) from 2.8.1+incompatible to 2.8.2+incompatible.
- [Release notes](https://github.com/docker/distribution/releases)
- [Commits](https://github.com/docker/distribution/compare/v2.8.1...v2.8.2)

---
updated-dependencies:
- dependency-name: github.com/docker/distribution
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-12 10:52:59 +08:00
Qinqi Qu f09b579bfe misc: reorganize the configuration file of nydusd
1. Move configuration files from docs/samples to misc/configs
2. Fix incomplete configuration in docs/nydusd.md
3. Update outdated nydusd-config.json from nydus-snapshotter repo

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-12 09:11:36 +08:00
Yadong Ding 9d87631171 misc: benchmark metrics support more images
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 17:19:30 +08:00
Yadong Ding ad8f870344 misc: benchmark support more images
add support for golang, java(amazoncorretto), ruby, and python.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 17:19:30 +08:00
Yadong Ding 44f3b16c22 action: use benchmark runtime image arg
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 17:19:30 +08:00
Yadong Ding c2c79a21ec misc: move benchmark image from config to runtime
to support running more images in the benchmark.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 17:19:30 +08:00
Yadong Ding 2f743c8a53 action: update actions version to use Node.js 16
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 09:57:59 +08:00
Yadong Ding 5a179bedf2 action: remove target-dir input in rust cache
Since we updated the rust cache version from v1 to v2.2.0, the target-dir
input is useless, and ./target is cached by default.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 09:57:59 +08:00
Yadong Ding c2637eead0 action: benchmark will be triggered by pr
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-06 13:44:21 +08:00
Yadong Ding 348ac74554 misc: fix benchmark-on-schedule panic when there is no cache
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-06 13:44:21 +08:00
Yadong Ding 6d1d56e3d6 action: use the same version rust-cache@v2.2.0
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-06 11:24:34 +08:00
Yadong Ding 0e4135cba7 action: reuse rust cache by shared-key
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-06 11:24:34 +08:00
Yadong Ding 7a48992bce misc: support benchmark on schedule
support the benchmark on schedule via a new mode in benchmark_summary.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-05 14:17:52 +08:00
Yadong Ding 5bfb155780 action: add benchmark on schedule
we will run the benchmark twice per week, and compare results with the last run via cache.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-05 14:17:52 +08:00
Qinqi Qu 35aa3a2b08 nydusify: add some unit tests for pkg/utils and cmd/nydusify
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-28 17:50:32 +08:00
Qinqi Qu c5fdfda77c nydusify: add unit test coverage output
Introduce `make coverage` to print coverage in the console.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-28 17:50:32 +08:00
Huang Jianan 8743e81b3b contrib: support nydus-overlayfs and ctr-remote on different platforms
Otherwise, the binary we compiled cannot run on other platforms such as
arm.

Signed-off-by: Huang Jianan <jnhuang@linux.alibaba.com>
2023-04-28 17:49:45 +08:00
Jiang Liu f11391d476 storage: refine the way to define compression algorithms
Reserve 4 bits to store toc compression algorithms, and use enumeration
instead of bitmask for algorithms.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-27 10:15:44 +08:00
Desiki-high 944cc69a3e misc: make benchmark summary more clear
All the data is rounded to two decimal places. When the gap between the
current PR and master exceeds five percent of master, we add ↑ or ↓.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-27 10:12:36 +08:00
Qinqi Qu 44b70fa07b action: replace cargo test with cargo nextest in CI
Improve test speed and present test results concisely.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-25 17:55:25 +08:00
Desiki-high 7edea8a0e3 misc: add image-size for benchmark
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-25 15:48:08 +08:00
Desiki-high 730c9bfe85 action: add benchmark-compare for PR
compare the benchmark result between the PR and master when the smoke test is triggered by a pull request

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-25 15:48:08 +08:00
Desiki-high 849e4f3abd refactor: rewrite benchmark_summary.sh in Python
1. Refactor the summary script in Python.
2. Add the arg mode to adapt the two benchmark summary modes.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-25 15:48:08 +08:00
Desiki-high a78db8ba4a feat: support the arg batch-size for merging small chunks in nydusify
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-24 10:26:50 +08:00
Qinqi Qu ca8dab805a ctr-remote: update containerd to v1.7.0 and fix lint error
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-23 13:54:17 +08:00
Desiki-high 05fff6e939 misc: add read-amount and read-count for benchmark
We should add read-amount and read-count to the nydus benchmark to
compare nydus with zran, or different batch sizes.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-23 10:39:59 +08:00
Jiang Liu 27fa97393d nydus-image: optimize the way to generate tarfs
Optimize the way to generate tarfs from tar file, to reduce memory
and time consumption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-23 10:05:20 +08:00
Jiang Liu d9fe6d8c19 dep: update dependency to fix a CVE warning
error[vulnerability]: Resource exhaustion vulnerability in h2 may lead to Denial of Service (DoS)
   ┌─ /github/workspace/Cargo.lock:68:1
   │
68 │ h2 0.3.13 registry+https://github.com/rust-lang/crates.io-index
   │ --------------------------------------------------------------- security vulnerability detected
   │
   = ID: RUSTSEC-2023-0034
   = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0034
   = If an attacker is able to flood the network with pairs of `HEADERS`/`RST_STREAM` frames, such that the `h2` application is not able to accept them faster than the bytes are received, the pending accept queue can grow in memory usage. Being able to do this consistently can result in excessive memory use, and eventually trigger Out Of Memory.

     This flaw is corrected in [hyperium/h2#668](https://github.com/hyperium/h2/pull/668), which restricts remote reset stream count by default.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-23 10:05:20 +08:00
Desiki-high 1088f47394 action: add zran-no-prefetch in benchmark
1. add the zran without prefetch benchmark.
2. move the shared steps to prepare_env.sh.
3. move the benchmark summary script to benchmark_summary.sh.
4. change the benchmark-result order and enable it on push and schedule.
5. pin the wordpress tag to the stable 6.1.1.
6. delete the artifacts after benchmark-result downloads all artifacts.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-22 16:21:32 +08:00
Desiki-high 6cd8781459 misc: create shell for benchmark
1. prepare_env.sh to prepare the container environment.
2. benchmark_summary.sh for the benchmark-result job to summarize results.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-22 16:21:32 +08:00
Desiki-high faa10b7e8c misc: delete unused benchmark code
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-22 16:21:32 +08:00
Desiki-high b712c6e528 docs: add codecov
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-22 12:04:44 +08:00
Yan Song 0d2958e6a8 docs: update the perf graph
Keep it simple and clean.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-21 20:22:41 +08:00
YanSong b937989f56 action: fix checkout on pull_request_target
The `pull_request_target` trigger checks out the master branch
code by default, but we need to use the new PR code in the smoke test.

See: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-21 13:07:03 +08:00
Jiang Liu ca9f7a8087 rafs: minor optimization for tarfs builder
Minor optimization for tarfs builder.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-20 18:33:50 +08:00
Yan Song 79f4a685c9 action: fix smoke test for branch pattern
To match `master` and `stable/*` branches at least.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-20 16:21:35 +08:00
Yan Song 39eed8cd19 action: allow running smoke test for stable/* branch
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-20 15:27:18 +08:00
Desiki-high 36d8a5b4eb change the comment content
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-20 09:58:19 +08:00
Desiki-high 9b699fa6d9 add the benchmark test for nydus image
1. add the benchmark scripts in misc/benchmark
2. add five benchmark jobs in smoke test and the benchmark-result job for show the benchmark result in the PR comment

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-20 09:58:19 +08:00
泰友 1a934b6f77 feat: add more types of file to smoke
Including:
    * regular file with chinese name
    * regular file with long name
    * symbolic link of deleted file
    * large regular file of 13MB
    * regular file with hole at both head and tail
    * empty regular file

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-04-20 09:30:35 +08:00
Qinqi Qu 34710b5837 action: add unit test coverage check workflow
1. Introduce `make coverage` to print coverage in the console.
2. GitHub CI uses `make coverage-codecov` to get coverage info.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-19 22:26:48 +08:00
Eryu Guan 8671b0aa11
misc: update toolchain to 1.68.2 and fix clippy warnings (#1227)
Signed-off-by: Eryu Guan <eguan@linux.alibaba.com>
2023-04-19 17:15:56 +08:00
Wenhao Ren e8ba11ae40 rafs: enhance builder to support batch chunk
Add `--batch-size` subcommand on nydus-image.
Add build time support of batch chunk.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-04-18 19:25:32 +08:00
Wenhao Ren 180f6d2c9a storage: introduce BatchInflateContext to support batch chunk
Enhance chunk info to support batch chunk.
Introduce BatchInflateContext and generator.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-04-18 19:25:32 +08:00
Wenhao Ren d0ae0d574e rafs: reuse chunk data compress and write procedure
Refactor `Node::dump_file_chunk()` to reuse data compress and write procedure.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-04-18 19:25:32 +08:00
Wenhao Ren fb9560b5d0 storage: check `zran` flag before setting `zran` values
Check the `zran` flag before setting `zran` values.
Refine comments.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-04-18 19:25:32 +08:00
Jiang Liu 6b78bd1be0 rafs: optimize Node::name() to reduce image build time
According to perf flamegraph, Node::name() costs too much time
when generating nydus images from tar files. So optimize it.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-18 14:52:47 +08:00
Desiki-high 52d563999d docs: add the tip for nydus-zran
We should tell users that the nydus zran image must be in the same namespace as the OCI image.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-18 14:51:56 +08:00
imeoer 0dc95f8fda
Merge pull request #1192 from jiangliu/encrypt
Enhance file cache to encrypt data written to the cache file
2023-04-17 14:29:56 +08:00
Jiang Liu 2a23e99589 storage: encrypt data in local cache file
Encrypt data before writing data to local cache file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu 37273bfbcf api: add encryption configuration to file cache
Add encryption configuration to file cache, so we can encrypt data
written to the local cache file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu f82cf6d144 storage: introduce struct CipherContext
Introduce struct CipherContext for data encryption/decryption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu 5f1fc40ac4 storage: add fields for chunk encryption
Add data fields to BlobInfo and CacheFile for chunk encryption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu d31c3b31c9 storage: add flag to indicate encrypted data chunk
Add method and flag to indicate that a data chunk is encrypted or not.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
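A minimal sketch of the flag idea, assuming a hypothetical bit value (the real constant lives in the nydus-storage crate):

```rust
// Hypothetical bit position, for illustration only.
const CHUNK_FLAG_ENCRYPTED: u32 = 0x4;

// A chunk is treated as encrypted iff its flag bit is set in the mask.
fn is_chunk_encrypted(flags: u32) -> bool {
    flags & CHUNK_FLAG_ENCRYPTED != 0
}
```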
Jiang Liu a3eb243d66
Merge pull request #1218 from taoohong/mushu/fuse_backend
service: add a function to help create fuse vfs backend
2023-04-15 16:03:47 +08:00
taohong f31e930f88 service: add a function to help create fuse vfs backend
Add a function to help create a fuse vfs backend and
reduce explicit references to the fuse_backend_rs crate.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-04-15 14:34:06 +08:00
Jiang Liu 80ede7528e
Merge pull request #1213 from jiangliu/dir-entry-name
rafs: fix a regression caused by commit 2616fb2c05
2023-04-14 17:43:36 +08:00
Jiang Liu 56c48bcccb rafs: fix a regression caused by commit 2616fb2c05
Fix a regression caused by commit 2616fb2c05.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-14 17:03:20 +08:00
Jiang Liu a57a97b1f2
Merge pull request #1208 from jiangliu/v6-dir-size
rafs: fix a possible bug in v6_dirent_size()
2023-04-14 15:44:54 +08:00
Jiang Liu b29e4aa7f6
Merge pull request #1212 from dragonflyoss/dependabot/cargo/contrib/nydus-backend-proxy/h2-0.3.17
build(deps): bump h2 from 0.3.13 to 0.3.17 in /contrib/nydus-backend-proxy
2023-04-14 14:03:26 +08:00
dependabot[bot] 99a75addc7
build(deps): bump h2 in /contrib/nydus-backend-proxy
Bumps [h2](https://github.com/hyperium/h2) from 0.3.13 to 0.3.17.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.13...v0.3.17)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-14 03:46:59 +00:00
Jiang Liu 5a83128561 rafs: fix a possible bug in v6_dirent_size()
Function Node::v6_dirent_size() may return wrong result when "." and
".." are not at the first and second entries in the sorted dirent array.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-13 17:41:25 +08:00
Bin Liu a5603f2ede
Merge pull request #1207 from imeoer/add-coreweave-adopter
add CoreWeave to the adopter list
2023-04-12 14:29:17 +08:00
Yan Song a9e5852d79 add CoreWeave to the adopter list
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-12 06:14:53 +00:00
Jiang Liu a8c6a2328d
Merge pull request #1205 from adamqqqplay/add-helm-docs
docs: add helm quickstart link to deploy Dragonfly+Nydus
2023-04-10 22:53:32 +08:00
Qinqi Qu 7dff3c39b9 docs: add helm quickstart link to deploy Dragonfly+Nydus
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-10 16:33:13 +08:00
dependabot[bot] 38e388bf53
Merge pull request #1197 from dragonflyoss/dependabot/go_modules/contrib/nydusify/github.com/docker/docker-23.0.3incompatible
2023-04-10 07:07:55 +00:00
Jiang Liu f767b66ce3
Merge pull request #1200 from taoohong/mushu/cc-feature
service: add coco feature in Cargo.toml
2023-04-10 14:27:44 +08:00
dependabot[bot] 28634faa35
build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 23.0.1+incompatible to 23.0.3+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v23.0.1...v23.0.3)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 06:13:41 +00:00
taohong 2b808caa30 service: add coco feature in Cargo.toml
Add feature coco to Cargo.toml, so that confidential containers
can apply this feature to use nydus to download images.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-04-10 11:06:32 +08:00
Jiang Liu 5a6551328b
Merge pull request #1203 from imeoer/upgrade-golangci-lint
action: upgrade golangci-lint to v1.51.2
2023-04-10 10:57:43 +08:00
Yan Song 1282914e77 action: upgrade golangci-lint to v1.51.2
To resolve the panic when run golangci-lint:

```
panic: load embedded ruleguard rules: rules/rules.go:13: can't load fmt
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-10 02:38:12 +00:00
Jiang Liu 902fd71819
Merge pull request #1193 from dragonflyoss/dependabot/cargo/contrib/nydus-backend-proxy/spin-0.9.8
build(deps): bump spin from 0.9.3 to 0.9.8 in /contrib/nydus-backend-proxy
2023-04-04 15:11:33 +08:00
dependabot[bot] 60b02c0335
build(deps): bump spin in /contrib/nydus-backend-proxy
Bumps [spin](https://github.com/mvdnes/spin-rs) from 0.9.3 to 0.9.8.
- [Release notes](https://github.com/mvdnes/spin-rs/releases)
- [Changelog](https://github.com/mvdnes/spin-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/mvdnes/spin-rs/commits)

---
updated-dependencies:
- dependency-name: spin
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-04 06:09:56 +00:00
imeoer 32cc7df139
Merge pull request #1153 from changweige/update-docs
doc: update descriptions about nydus-snapshotter
2023-04-03 10:05:43 +08:00
Changwei Ge 8cc04f15c2 doc: update descriptions about nydus-snapshotter
To match the latest nydus-snapshotter UI

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-04-03 09:29:52 +08:00
Jiang Liu 06d2292d9d
Merge pull request #1189 from jiangliu/macos
macos: fix a build failure
2023-03-31 17:17:15 +08:00
Jiang Liu ff21a87531 macos: fix a build failure
Fix a build failure for macos caused by block device related code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-31 16:40:23 +08:00
imeoer 86d36f704a
Merge pull request #1188 from adamqqqplay/upgrade-contrib-dependency
contrib: upgrade runc to v1.1.5
2023-03-31 15:38:46 +08:00
Qinqi Qu 2ecd25ea1d contrib: upgrade runc to v1.1.5
Runc v1.1.5 fixes three CVEs; we should upgrade it.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-31 14:23:11 +08:00
Jiang Liu b79e90bc27
Merge pull request #1176 from jiangliu/export-block-verity
Add verity digests for exported block device
2023-03-31 14:11:06 +08:00
imeoer a5297847c7
Merge pull request #1183 from adamqqqplay/refine-readme
docs: polish and simplify README.md
2023-03-31 11:48:42 +08:00
Qinqi Qu ba4d2f9c98 docs: polish and simplify README.md
1. Add FAQ, Website and Quickstart link.
2. Reorganize document structure.
3. Remove some redundant descriptions.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-31 11:41:04 +08:00
imeoer 140a0d7c9d
Merge pull request #1184 from jiangliu/v6-mapped-blkaddr
rafs: fix an incorrect mapped_blkaddr for multi-layer images
2023-03-31 10:58:59 +08:00
Jiang Liu cd3d2444c6
Merge pull request #1185 from dragonflyoss/dependabot/go_modules/contrib/ctr-remote/github.com/opencontainers/runc-1.1.5
build(deps): bump github.com/opencontainers/runc from 1.1.4 to 1.1.5 in /contrib/ctr-remote
2023-03-31 10:38:51 +08:00
Jiang Liu fb8db88944
Merge pull request #1181 from jiangliu/nydus-image-doc
nydus-image: update documentation docs/nydus-image.md
2023-03-30 23:39:28 +08:00
Jiang Liu 646f320665 nydus-image: update documentation docs/nydus-image.md
Update documentation docs/nydus-image.md.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-30 23:31:24 +08:00
Jiang Liu 6594edb719
Merge pull request #1186 from imeoer/fix-https-fallback
storage: fix http fallback handle
2023-03-30 23:24:58 +08:00
Yan Song 74677615d2 storage: fix http fallback handle
If we attempt to establish a TLS connection with the HTTP registry server,
we are likely to encounter these types of error:

- Error `wrong version number` from openssl library;
- Error `connection refused` from standard library;

Before this, only the first type of error was handled. This commit handles
the second type of error, which was reproduced by running a local insecure
harbor registry.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-30 08:41:05 +00:00
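A sketch of the detection logic described above, with a hypothetical helper name (not the actual nydus-storage code):

```rust
// "wrong version number" is how openssl reports a TLS handshake against a
// plain-HTTP server; a local insecure registry may instead surface a plain
// "connection refused". Either one triggers the HTTP fallback.
fn should_fallback_to_http(err: &dyn std::error::Error) -> bool {
    let msg = err.to_string().to_lowercase();
    msg.contains("wrong version number") || msg.contains("connection refused")
}
```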
dependabot[bot] a6d7a1ee89
build(deps): bump github.com/opencontainers/runc in /contrib/ctr-remote
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.4 to 1.1.5.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.5/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.4...v1.1.5)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-30 06:52:43 +00:00
Jiang Liu 4dd44255ff rafs: change alignment for v6 mapped_blkaddr from 2M to 512K
Change alignment for v6 mapped_blkaddr from 2M to 512K; 512K is enough
to support dm-verity.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-30 14:12:28 +08:00
Jiang Liu b7f8af04f6 rafs: fix an incorrect mapped_blkaddr for multi-layer images
When generating a RAFS filesystem with multiple data blobs, the
mapped_blkaddr for the second and following blobs is incorrect.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-30 13:51:52 +08:00
Jiang Liu 01e59a6149 nydus-image: generate dm-verity data for block device
Add `--verity` option to `nydus-image export --block` to generate
dm-verity data for block devices.

```
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# tar -cvf src.tar src
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# sha256sum src.tar
0e2dbe8b6e0f55f42c75034ed9dfc582ad0a94098cfc248c968522e7ef02e00a  src.tar
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# cp src.tar images/0e2dbe8b6e0f55f42c75034ed9dfc582ad0a94098cfc248c968522e7ef02e00a
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# target/debug/nydus-image create -t tar-tarfs -D images/ images/0e2dbe8b6e0f55f42c75034ed9dfc582ad0a94098cfc248c968522e7ef02e00a
[2023-03-27 16:32:00.068730 +08:00] INFO successfully built RAFS filesystem:
meta blob path: images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47
data blob size: 0x3c000
data blobs: ["0e2dbe8b6e0f55f42c75034ed9dfc582ad0a94098cfc248c968522e7ef02e00a"]
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# target/debug/nydus-image export --block --verity -D images/ -B images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47
[2023-03-27 23:49:14.450762 +08:00] INFO RAFS features: COMPRESSION_NONE | HASH_SHA256 | EXPLICIT_UID_GID | TARTFS_MODE
dm-verity options: --no-superblock --format=1 -s "" --hash=sha256 --data-block-size=4096 --hash-block-size=4096 --data-blocks 572 --hash-offset 2342912 ab7b417fc284c3b58a72044a996ec55e2c68a8b9dcf10bc469f4e640e5d98e6a
losetup -r /dev/loop1 images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47.disk
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# veritysetup open -v --no-superblock --format=1 -s "" --hash=sha256 --data-block-size=4096 --hash-block-size=4096 --data-blocks 572 --hash-offset 2342912 /dev/loop1 verity /dev/loop1 ab7b417fc284c3b58a72044a996ec55e2c68a8b9dcf10bc469f4e640e5d98e6a
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# veritysetup status verity
/dev/mapper/verity is active.
  type:        VERITY
  status:      verified
  hash type:   1
  data block:  4096
  hash block:  4096
  hash name:   sha256
  salt:        -
  data device: /dev/loop1
  data loop:   /root/image-service/images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47.disk
  size:        4576 sectors
  mode:        readonly
  hash device: /dev/loop1
  hash loop:   /root/image-service/images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47.disk
  hash offset: 4576 sectors
  root hash:   ab7b417fc284c3b58a72044a996ec55e2c68a8b9dcf10bc469f4e640e5d98e6a
```

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 17:43:01 +08:00
Jiang Liu c6d2065c0c utils: introduce mechanism to generate Merkle tree for verity
Introduce mechanism to generate Merkle tree for verity.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 15:29:29 +08:00
imeoer 819ccafda5
Merge pull request #1159 from jiangliu/tarfs
Add `export` subcommand to `nydus-image`
2023-03-29 15:16:15 +08:00
imeoer d71957392f
Merge pull request #1177 from jiangliu/is-present
nydus: fix a possible panic caused by SubCmdArgs::is_present()
2023-03-29 10:03:43 +08:00
imeoer 35416b0697
Merge pull request #1178 from jiangliu/mapped-blkaddr
nydus-image: print mapped block address when inspecting blob info
2023-03-29 09:58:58 +08:00
Jiang Liu fc3979e46a nydus-image: enable multi-threading when exporting block images
Enable multi-threading when exporting block images, to reduce exporting
time.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 09:53:02 +08:00
Jiang Liu d5ef141219 nydus-image: introduce new subcommand export
Introduce new subcommand `export` to nydus-image, which will be used
to export RAFS filesystems as raw block device images or tar files.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 09:53:01 +08:00
Jiang Liu 0917afb411 nydus-image: syntax changes for commandline option preparation
Syntax only changes for commandline option preparation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 09:53:01 +08:00
Jiang Liu 183625a513 nydus-image: print mapped block address when inspecting blob info
Print mapped block address when inspecting blob info.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-28 23:52:10 +08:00
Jiang Liu fc814a2991 nydus: fix a possible panic caused by SubCmdArgs::is_present()
Fix a possible panic caused by SubCmdArgs::is_present().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-28 13:50:29 +08:00
Jiang Liu 667189b7d8
Merge pull request #1175 from jiangliu/deny
deny: fix cargo deny warnings related to openssl
2023-03-27 16:48:49 +08:00
Jiang Liu 7e3baeeb1e
Merge pull request #1121 from taoohong/master
service: Add a README.md to nydus-service
2023-03-26 23:31:41 +08:00
Jiang Liu f2dd8e63a7 deny: fix cargo deny warnings related to openssl
Fix cargo deny warnings related to openssl.

https://github.com/dragonflyoss/image-service/actions/runs/4522515576/jobs/7965040490

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-26 23:23:41 +08:00
Tao Hong 14f45afc5d
Merge branch 'dragonflyoss:master' into master
2023-03-24 10:14:17 +08:00
imeoer 14c709d080
Merge pull request #1169 from jiangliu/service-macos-clippy
service: clean clippy warnings for macos
2023-03-23 18:23:44 +08:00
Jiang Liu cd4cb44f39
Merge pull request #1173 from ccx1024cc/morgan/fix_ci
fix: master branch does not run ci
2023-03-22 23:59:02 +08:00
泰友 6ecef3fe37 fix: master branch does not run ci
Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-22 19:09:50 +08:00
Jiang Liu b9b4f23816
Merge pull request #1172 from ccx1024cc/morgan/trigger_ci
fix: stable/XXX branch does not run ci
2023-03-22 16:58:32 +08:00
泰友 7eda36afe2 fix: ci: actions are not triggered for stable/v2.2
Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-22 15:01:35 +08:00
Jiang Liu 527ce73a78
Merge pull request #1170 from dragonflyoss/dependabot/go_modules/contrib/nydusify/google.golang.org/protobuf-1.29.1
build(deps): bump google.golang.org/protobuf from 1.29.0 to 1.29.1 in /contrib/nydusify
2023-03-22 13:40:14 +08:00
Jiang Liu c6e5bd8e75
Merge pull request #1168 from jiangliu/tarfs-merge
rafs: fix incorrect blob id in merged TARFS
2023-03-22 12:30:56 +08:00
Jiang Liu 9904f6d1b2 service: clean clippy warnings for macos
Clean clippy warnings for macos.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-22 12:26:40 +08:00
dependabot[bot] 2dfee1cc5f
build(deps): bump google.golang.org/protobuf in /contrib/nydusify
Bumps [google.golang.org/protobuf](https://github.com/protocolbuffers/protobuf-go) from 1.29.0 to 1.29.1.
- [Release notes](https://github.com/protocolbuffers/protobuf-go/releases)
- [Changelog](https://github.com/protocolbuffers/protobuf-go/blob/master/release.bash)
- [Commits](https://github.com/protocolbuffers/protobuf-go/compare/v1.29.0...v1.29.1)

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-22 02:22:26 +00:00
Jiang Liu e6c7871aca rafs: fix incorrect blob id in merged TARFS
When merging multiple RAFS filesystems in TARFS mode into one, the
generated data blob id is incorrect: it is actually the meta blob id
instead of the data blob id.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-21 21:45:37 +08:00
imeoer c202e918d4
Merge pull request #1167 from jiangliu/service-macos
service: fix compilation failures on macos
2023-03-21 17:49:05 +08:00
Jiang Liu b94307c86c service: fix compilation failures on macos
Fix compilation failures on macos caused by the nydus-service crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-21 16:56:44 +08:00
imeoer 0cfd7f6023
Merge pull request #1166 from jiangliu/inode-wrapper-unimplemented
rafs: get rid of several unimplemented()
2023-03-21 16:34:38 +08:00
Jiang Liu 818fe47243 rafs: get rid of several unimplemented()
The nydus-image check for v5 uses some unimplemented methods of
InodeWrapper, which causes panicking at runtime.

Fixes: https://github.com/dragonflyoss/image-service/issues/1160

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-21 15:45:22 +08:00
imeoer 0c1fee409a
Merge pull request #1165 from jiangliu/fix-prefetch
rafs: fix an assertion failure in prefetch list generation
2023-03-21 15:04:33 +08:00
Jiang Liu 49fc71e1e1 rafs: fix an assertion failure in prefetch list generation
Fix an assertion failure in prefetch list generation.

Fixes: https://github.com/dragonflyoss/image-service/issues/1154

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-21 14:50:29 +08:00
Jiang Liu 5160def413
Merge pull request #1164 from imeoer/nydusify-fix-workdir
nydusify: cleanup work directory when conversion finish
2023-03-21 12:24:40 +08:00
Jiang Liu 82f3ee97b6
Merge pull request #1163 from imeoer/nydusify-fix-oci-handle
nydusify: fix oci media type handle
2023-03-21 12:23:35 +08:00
Yan Song 5708cb2e56 nydusify: cleanup work directory when conversion finish
Remove the work directory to clean up the temporary image
blob data after the conversion is finished.

We should only clean up when the work directory did not exist
before; otherwise we may delete user data by mistake.

Fix: https://github.com/dragonflyoss/image-service/issues/1162

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-21 03:58:39 +00:00
Yan Song dac61cc9f6 nydusify: fix oci media type handle
Bump nydus snapshotter v0.7.3 and bring some fixups:

1. If the original image is already an OCI type, we should forcibly set the bootstrap layer to the OCI type.
2. We need to append history item for bootstrap layer, to ensure the history consistency, see: e5d5810851/manifest/schema1/config_builder.go (L136)

Related PR: https://github.com/containerd/nydus-snapshotter/pull/427, https://github.com/goharbor/acceleration-service/pull/119

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-21 03:45:44 +00:00
Jiang Liu 6ab15d85bf
Merge pull request #1161 from yqleng1987/fix-compile-snapshotter
ci test: fix bug of compiling nydus-snapshotter
2023-03-20 23:35:08 +08:00
Yiqun Leng cd9f1278b9 ci test: fix bug of compiling nydus-snapshotter
Since developers changed "make clear" to "make clean" in the Makefile
in nydus-snapshotter, it also needs to be updated in the CI test.

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-03-20 23:04:48 +08:00
Jiang Liu bca47e3dd7
Merge pull request #1158 from jiangliu/fuse-tarfs
Enhance FUSE implementation to support RAFS in TARFS mode
2023-03-20 21:27:53 +08:00
Jiang Liu 54723319d5
Merge pull request #1155 from imeoer/disable-validation-by-default
rafs: only enable digest validate based on configuration
2023-03-20 13:53:58 +08:00
Jiang Liu 81592f60df rafs: enhance RAFS FUSE implementation to support TARFS
Enhance the RAFS FUSE implementation to support RAFS filesystems in
TARFS mode.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-20 13:50:50 +08:00
Jiang Liu d8d67a841d rafs: rename TarfsChunkInfo to PlainChunkInfoV6
Rename TarfsChunkInfo to PlainChunkInfoV6, so it can be used for
EROFS plain inode later.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-20 13:50:40 +08:00
Yan Song ac2d786dde rafs: only enable digest validate based on configuration
We found that when using the "nydus-image check --bootstrap /path/to/bootstrap"
command, it takes about 15s to check a 35MB bootstrap file (rafs v5) due to the
default digest validation. This is very slow and disabling it can reduce the time to 3s.

We should make this option configurable at runtime.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-20 03:06:18 +00:00
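A minimal sketch of the runtime gate, reusing the `digest_validate` field that appears in the nydusd JSON configuration elsewhere in this log (`validate_chunk` is a hypothetical helper):

```rust
struct RafsConfig {
    digest_validate: bool,
}

// Hypothetical helper: skip the expensive per-chunk hashing entirely
// unless the configuration explicitly enables digest validation.
fn validate_chunk(cfg: &RafsConfig, verify: impl Fn() -> bool) -> bool {
    if !cfg.digest_validate {
        return true;
    }
    verify()
}
```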
imeoer 7ea753dcba
Merge pull request #1151 from jiangliu/tarfs-merge
Enhance `nydus-image merge` to support tarfs
2023-03-20 11:04:26 +08:00
imeoer 65127afe75
Merge pull request #1156 from jiangliu/rafs-v6-inode
rafs: define dedicated RafsV6Inode to reduce memory consumption
2023-03-20 11:00:31 +08:00
Jiang Liu d2fa7d52df rafs: define dedicated RafsV6Inode to reduce memory consumption
There are several unused fields in RafsV5Inode when used for v6,
so define dedicated RafsV6Inode to reduce memory consumption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-18 09:56:01 +08:00
Jiang Liu 93bf61bc96 rafs: minor improvement to builder/merge
Minor improvement to builder/merge to avoid building unnecessary
chunk dictionary.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-17 15:00:26 +08:00
Jiang Liu 893ab021c9 rafs: avoid unnecessary memory copy by using VecDeque
Vec::insert(0, node) will cause unnecessary memory copy, so use
VecDeque instead.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-17 14:46:36 +08:00
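A short illustration of the cost difference (plain standard-library behavior, not the builder code itself):

```rust
use std::collections::VecDeque;

fn main() {
    let nodes = vec![1u32, 2, 3];

    // Vec::insert(0, x) shifts every existing element right: O(n) per call.
    let mut v = Vec::new();
    for n in &nodes {
        v.insert(0, *n);
    }

    // VecDeque::push_front is amortized O(1): no copy of existing elements.
    let mut d = VecDeque::new();
    for n in &nodes {
        d.push_front(*n);
    }

    assert_eq!(v, d.into_iter().collect::<Vec<_>>());
}
```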
Jiang Liu 578fe72549 rafs: enhance builder/merger to support RAFS in TARFS mode
Enhance builder/merger to support RAFS in TARFS mode, so we can merge
multiple RAFS filesystems in TARFS mode into one.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-17 14:28:35 +08:00
Jiang Liu 2a55d3ef88 rafs: move image merger into rafs/builder
Move image merger into rafs/builder.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-16 17:31:38 +08:00
imeoer c9d9b435ef
Merge pull request #1147 from jiangliu/tarfs
Introduce new tarfs mode to Nydus
2023-03-16 17:26:35 +08:00
Jiang Liu 3891a51465 service: enhance block device to support RAFS filesystem in TARFS mode
Enhance block device to support block size of 512, in addition to 4096,
so we can expose RAFS filesystems in TARFS mode as block device.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-16 15:17:13 +08:00
Jiang Liu 6522750a67 storage: enhance filecache to support RAFS filesystem in TARFS mode
A RAFS filesystem in TARFS mode directly uses tar files/streams as data
blobs. A RAFS filesystem in TARFS mode contains a RAFS meta blob and
one or more tar files. There's no blob meta, such as compression info
array, chunk digest, TOC etc, in the tar files. So there's no support
for lazy loading, chunk dedup, chunk validation, etc.

So assume that the snapshotter will prepare meta blob and tar files
before mounting the RAFS filesystem. Enhance the filecache module to
support tar files without lazy-loading, chunk dedup and chunk
validation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-16 15:17:12 +08:00
Jiang Liu 45d0f8e6cf rafs: enhance builder to support TARFS mode
When using containerd overlayfs snapshotter to handle OCIv1 images,
it works as below:
- download compressed blobs from registry
- uncompress compressed blobs into tar streams/files
- unpack tar streams to directories on local filesystem
- mount multiple directories into a container rootfs by overlayfs

Here we introduce a new work mode to nydus, called TARFS, which
works as below:
- download compressed blobs from registry
- uncompress compressed blobs into tar streams/files
- build RAFS/EROFS meta blob from tar streams/files
- optionally merge multiple RAFS/EROFS meta blobs into one
- mount the generated RAFS filesystem by mount.erofs

By introducing TARFS mode to RAFS, it helps to avoid generating a bunch
of small files by `untar`, which speeds up image preparation and garbage
collection. It may also help to reduce levels of overlayfs by merging
multiple image layers into one final RAFS filesystem.

The TARFS mode of the RAFS filesystem has several special behaviors,
compared to current RAFS, as below:
1) Instead of generating a RAFS data blob, it directly uses tar files as
   RAFS data blobs.
2) Tar files are uncompressed, so data blobs for TARFS mode are
   uncompressed.
3) Tar files will also be directly used as local cache files.
4) There's no chunk compression info, chunk digest, TOC etc, generated
   for TARFS mode.
5) Block size is 512 bytes instead of 4K, because tar files are 512
   bytes aligned.

Now we have three ways to make use of OCIv1 images:

Mode                    TAR-TARFS       TARGZ-REF           TARGZ-RAFS
Generate meta blob?     Y               Y                   Y
Generate chunk data?    N               N                   Y
Generate blob.meta?     N               Y                   Y
Generate data blobs?    N               Y (for blob.meta)   Y
Data in data blobs?     Not generated   blob.meta           chunk data & blob.meta
Chunk alignment?        512             4096                4096
Chunk dedup?            N               Y                   Y
Lazy loading?           N               Y                   Y

Note, RAFS in TARFS mode is designed to be used locally only. In other
words, it's a way to implement a snapshotter, instead of an image format
for sharing.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-16 15:17:09 +08:00
Jiang Liu d203985ba9 utils: add option to enable/disable hash calc for BufReaderInfo
Add method to enable/disable hash value computation for BufReaderInfo.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-15 17:51:36 +08:00
Jiang Liu bf5c24617d rafs: introduce fake TarfsChunkInfo to provide ChunkInfo for TARFS
Introduce fake TarfsChunkInfo to provide ChunkInfo for TARFS.
The TarfsChunkInfo acts as follows:
1) all TARFS chunks are uncompressed, because the tar file is in
   plaintext.
2) chunk digests of TarfsChunkInfo are all zero, so they are fake.

Also add constants and helpers to support 512-bytes block.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-15 17:51:35 +08:00
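A sketch of the 512-byte block arithmetic, using the `EROFS_BLOCK_SIZE_512` constant name introduced later in this series (the helper itself is hypothetical):

```rust
// Tar archives are organized in 512-byte records, so TARFS uses 512-byte
// blocks instead of the usual 4096.
const EROFS_BLOCK_SIZE_512: u64 = 512;

// Hypothetical helper: convert a byte offset in a tar blob to a block address.
fn block_addr(offset: u64) -> u64 {
    debug_assert_eq!(offset % EROFS_BLOCK_SIZE_512, 0);
    offset / EROFS_BLOCK_SIZE_512
}
```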
Jiang Liu 6638eee247
Merge pull request #1148 from jiangliu/utils-crypt
utils: introduce methods and structures for encryption and decryption
2023-03-15 17:26:06 +08:00
Jiang Liu eb042ca2b1
Merge pull request #1146 from imeoer/nydusify-fix-pull
nydusify: fix pulling all platforms of source image
2023-03-15 14:52:12 +08:00
Jiang Liu 5c19dfb8b1
Merge pull request #1150 from ccx1024cc/morgan/upmaster
rafs: fix amplify can not be skipped.
2023-03-15 11:46:54 +08:00
Yan Song 8458bcc7d2 nydusify: forcibly enable `--oci` option when `--oci-ref` is enabled
We need to forcibly enable the `--oci` option to allow appending the
related annotation for the zran image, otherwise an error is thrown:

```
merge nydus layers: invalid label containerd.io/snapshot/nydus-ref=: invalid checksum digest format
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-15 03:46:51 +00:00
Yan Song 6e4ceee291 nydusify: fix unnecessary golang-lint error
```
golangci-lint run
Error: pkg/converter/provider/ported.go:47:64: SA1019: rCtx.ConvertSchema1 is deprecated: use Schema 2 or OCI images. (staticcheck)
	if desc.MediaType == images.MediaTypeDockerSchema1Manifest && rCtx.ConvertSchema1 {
	                                                              ^
Error: pkg/converter/provider/ported.go:20:2: SA1019: "github.com/containerd/containerd/remotes/docker/schema1" is deprecated: use images formatted in Docker Image Manifest v2, Schema 2, or OCI Image Spec v1. (staticcheck)
	"github.com/containerd/containerd/remotes/docker/schema1"
	^
```

Disable the check; it's unnecessary to check the ported code.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-15 02:55:16 +00:00
Yan Song 851fc6de29 nydusify: fix `--oci` option for convert subcommand
The `--oci` option was not working; its behavior was reversed before.
This patch fixes it and keeps compatibility with the old option
`--docker-v2-format`.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-15 02:15:28 +00:00
Yan Song 0288c9d44f nydusify: fix pulling all platforms of source image
We should only handle the specific platform for pulling via
`platforms.MatchComparer`; otherwise nydusify will pull
the layer data of all platforms for a source image.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-15 02:13:10 +00:00
imeoer a61debc97a
Merge pull request #1149 from jongwu/fuse-back
upgrade fuse-backend-rs to 0.10.2
2023-03-15 10:11:35 +08:00
泰友 0fefbb4898 rafs: fix amplify can not be skipped
``` json
{
    "device":{
        "backend":{
            "type":"registry",
            "config":{
                "readahead":false,
                "host":"dockerhub.kubekey.local",
                "repo":"dfns/alpine",
                "auth":"YWRtaw46SGFyYm9VMTIZNDU=",
                "scheme":"https",
                "skip_verify":true,
                "proxy":{
                    "fallback":false
                }
            }
        },
        "cache":{
            "type":"",
            "config":{
                "work_dir":"/var/lib/containerd-nydus/cache",
                "disable_indexed_map":false
            }
        }
    },
    "mode":"direct",
    "digest_validate":false,
    "jostats_files":true,
    "enable_xattr":true,
    "access_pattern":true,
    "latest_read_files":true,
    "batch_size":0,
    "amplify_io":0,
    "fs_prefetch":{
        "enable":false,
        "prefetch_all":false,
        "threads_count":10,
        "merging_size":131072,
        "bandwidth_rate":1048576,
        "batch_size":0,
        "amplify_io":0
    }
}
```
`{.fs_prefetch.merging_size}` is used, instead of `{.amplify_io}`

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-15 10:00:45 +08:00
Jianyong Wu c4a97f16fc upgrade fuse-backend-rs to 0.10.2
There is a bug in fuse-backend-rs 0.10.1 which causes nydusd to quit with a segmentation fault.
Luckily, it has been fixed in 0.10.2. See [1].

[1] 2f2b242ed2

Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-03-15 09:35:19 +08:00
Jiang Liu a6cecb980a utils: introduce methods and structures for encryption and decryption
Introduce methods and structures for encryption and decryption, and
implement `aes128xts` and `aes256xts`.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-13 22:19:10 +08:00
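A rough sketch of the shape such an abstraction could take (hypothetical trait and names; the real implementation lives in the nydus-utils crate):

```rust
// Hypothetical abstraction: the algorithm is chosen at construction time,
// and one trait covers both directions.
pub enum Algorithm {
    None,
    Aes128Xts, // AES-128 in XTS mode
    Aes256Xts, // AES-256 in XTS mode
}

pub trait Cipher {
    fn encrypt(&self, key: &[u8], iv: &[u8], data: &[u8]) -> std::io::Result<Vec<u8>>;
    fn decrypt(&self, key: &[u8], iv: &[u8], data: &[u8]) -> std::io::Result<Vec<u8>>;
}
```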
imeoer 3213eb718b
Merge pull request #1144 from jiangliu/prepare-tarfs
Refine builder and rafs to prepare for tarfs
2023-03-13 11:04:50 +08:00
Jiang Liu bd837d0086 rafs: only invoke v5 related code for v5 builds
Only invoke v5 related code for v5 builds, and enforce strict
validation when creating missing directories for tar-based builds.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-13 10:53:00 +08:00
Jiang Liu c00c5784cc rafs: replace with_context() by context() when possible
Replace with_context() by context() when possible to avoid function
call.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-12 21:00:15 +08:00
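The difference in a nutshell, shown with standard `anyhow` usage (the paths and messages are illustrative):

```rust
use anyhow::{Context, Result};

fn read_meta(path: &str) -> Result<String> {
    // context() takes a ready-made value; for a &'static str there is no
    // closure to invoke, which is the saving this commit refers to.
    std::fs::read_to_string(path).context("failed to read RAFS metadata")
}

fn read_blob(path: &str) -> Result<Vec<u8>> {
    // with_context() defers building the message until an error actually
    // occurs; keep it when constructing the message allocates.
    std::fs::read(path).with_context(|| format!("failed to read blob {}", path))
}
```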
Jiang Liu 3f336139a5
Merge pull request #1137 from imeoer/converter-parent-bootstrap
builder: support `--parent-bootstrap` for merge
2023-03-09 17:10:40 +08:00
Yan Song a99a41fcdb rafs: do not fix blob id for old bootstrap
In fact, there is no way to tell if a separate old bootstrap file
was inlined into the blob. For example, for an old merged bootstrap,
we can't set the blob id it references as the filename, otherwise
it will break the blob table when loading rafs.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-09 07:01:22 +00:00
Yan Song bee62d6a9f smoke: add `--parent-bootstrap` for merge test
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-09 07:01:22 +00:00
Yan Song 2423f4366c builder: support `--parent-bootstrap` for merge
This option allows merging multiple bootstraps of upper layer with
the bootstrap of a parent image, so that we can implement container
commit operation for nydus image.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-09 07:01:20 +00:00
Jiang Liu d0b07e1c13 rafs: rename EROFS_BLOCK_SIZE to EROFS_BLOCK_SIZE_4096
Rename EROFS_BLOCK_SIZE to EROFS_BLOCK_SIZE_4096, we are going to
support EROFS_BLOCK_SIZE_512.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-09 13:54:11 +08:00
Jiang Liu c2e08c46ba
Merge pull request #1143 from jiangliu/api-fix
api: fix a build error
2023-03-09 00:33:30 +08:00
Jiang Liu f82803d23e api: fix a build error
Fix a build error caused by a missing `warn!`.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-08 18:37:58 +08:00
Jiang Liu 1344b9c108
Merge pull request #1141 from jiangliu/builder
Move RAFS filesystem builder into nydus-rafs crate
2023-03-08 17:55:47 +08:00
imeoer 3ef84892a6
Merge pull request #1139 from jiangliu/block-nbd
Export Nydus images as block devices by using NBD
2023-03-07 11:21:14 +08:00
Jiang Liu 2b3fcc0244 rafs: refine prefetch and chunk dictionary in builder
Refine prefetch and chunk dictionary in builder for maintenance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 21:29:30 +08:00
Jiang Liu 7a226ce9f9 rafs: refine builder Bootstrap implementation
Refine builder Bootstrap implementation for maintenance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 20:20:49 +08:00
Jiang Liu 4372f96cfb rafs: refine RAFS v6 builder implementation
Refine RAFS v6 builder implementation by:
- introduce helper Node::v6_dump_inode() to reduce duplicated code
- introduce helper BuildContext::v6_block_addr()

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:47:22 +08:00
Jiang Liu ab344d69c1 rafs: refine builder/Node related code
Refine builder/Node related code for maintenance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:47:20 +08:00
Jiang Liu 009625d19f rafs: move RAFSv6 builder related code into a dedicated file
Move RAFSv6 builder related code into a dedicated file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:47:10 +08:00
Jiang Liu fa574fb0c3 rafs: move overlay related code into builder/core/overlay.rs
Move overlay related code into builder/core/overlay.rs, for better
maintenance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:38:40 +08:00
Jiang Liu 4b90c87c58 rafs: refine Node structure to reduce memory consumption and copy
Organize immutable fields of Node into a new struct NodeInfo, to
reduce memory consumption and copy operations.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:38:39 +08:00
Jiang Liu 3fc59da93c rafs: move builder from nydus-image into rafs
Move builder from nydus-image into rafs, so it can be reused.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:38:37 +08:00
Jiang Liu b971551a14 rafs: optimize InodeWrapper to reduce memory consumption
Optimize InodeWrapper to reduce memory consumption by only
instantiating the inode object when needed.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:36:40 +08:00
Jiang Liu dc54beea4d rafs: optimize ChunkWrapper to reduce memory consumption
Optimize ChunkWrapper to reduce memory consumption by only
instantiating the chunk info object when needed.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:36:39 +08:00
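A minimal sketch of the lazy-instantiation pattern, with hypothetical types standing in for the real InodeWrapper/ChunkWrapper internals:

```rust
#[derive(Default)]
struct InodeData {
    ino: u64,
    size: u64,
}

// Keep a cheap reference into the on-disk metadata and only materialize a
// mutable object when a caller actually needs to modify the inode.
enum LazyInode {
    Ref { offset: u64 },
    Owned(Box<InodeData>),
}

impl LazyInode {
    fn make_mut(&mut self) -> &mut InodeData {
        if let LazyInode::Ref { offset } = *self {
            // First mutation: decode the on-disk inode (elided here).
            *self = LazyInode::Owned(Box::new(InodeData {
                ino: offset,
                ..Default::default()
            }));
        }
        match self {
            LazyInode::Owned(data) => data,
            LazyInode::Ref { .. } => unreachable!(),
        }
    }
}
```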
imeoer 6b1998f927
Merge pull request #1135 from jiangliu/nydus-image-simplify
Minor improvements to nydus-image
2023-03-06 14:33:09 +08:00
imeoer 5946738cfe
Merge pull request #1140 from changweige/add-optimizer-doc
readme: add a very brief section to introduce image optimizer
2023-03-06 14:28:46 +08:00
Changwei Ge 0218ff172d readme: add a very brief section to introduce image optimizer
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-03-06 11:47:34 +08:00
Jiang Liu f9b051ed40 api: add method to load BlobCacheConfigV2 from file
Add method to load BlobCacheConfigV2 from configuration file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 01:39:25 +08:00
Jiang Liu eceeefd74c nydusd: add subcommand nbd to export nydus images as block devices
Add subcommand nbd to export nydus images as block devices through
NBD.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:10 +08:00
Jiang Liu 1e9b2f3995 service: add nbd service to export RAFSv6 images as block devices
Implement NbdService which cooperates with the Linux nbd driver to
expose RAFSv6 images as block devices. To simplify the implementation,
the NbdService will directly talk with the nbd driver, instead of
following a typical nbd-server and nbd-client architecture.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:09 +08:00
Jiang Liu 10a2fef0cb service: compose a block device from a RAFSv6 image
Compose a block device from a RAFSv6 image, so all metadata/data
content can be accessed by block address. The EROFS fs driver can be
used to directly mount the block device.

It depends on the blob_cache subsystem and can be used to implement
nbd/ublk/virtio-blk/vhost-user-blk servers.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:05 +08:00
Jiang Liu e4dc7f8764 service: add common code to compose a block device from a RAFSv6 image
Add common code to compose a block device from a RAFS image,
which can then be exposed through nbd/ublk/virtio-blk/vhost-user-blk,
etc.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:04 +08:00
Jiang Liu c8b13ebef5 rafs: load mapped-blkaddr for each data blob
Load the mapped_blkaddr field for each data blob; later it will
be used to compose a RAFS v6 image into a block device.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:04 +08:00
Jiang Liu 748c12e578 rafs: refine v6 related code
Refine v6 related code and add two fields to meta info.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:02 +08:00
Jiang Liu b217101701 nydus-image: minor improvement to nydus-image
Minor improvement to nydus-image:
- better handling of `chunk-size` argument
- avoid assert at runtime by returning error code

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-04 10:12:24 +08:00
Jiang Liu dd68b191b6 nydus-image: simplify ArtifactWriter::new() to remove the `fifo` arg
Simplify ArtifactWriter::new() to remove the argument `fifo`. We can
detect whether a file is a FIFO or not, so there is no need to pass a flag for it.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-04 10:01:58 +08:00
imeoer 04e4349cc2
Merge pull request #1125 from dragonflyoss/dev/v2.3
Prepare for exposing nydus images as block devices
2023-03-03 10:19:56 +08:00
imeoer bca1b8a072
Merge pull request #1130 from jiangliu/fix-get-compressed-size
nydus-image: fix an underflow issue in get_compressed_size()
2023-03-03 10:08:32 +08:00
Jiang Liu 8a4bc8ba26 nydus-image: fix an underflow issue in get_compressed_size()
Fix an underflow issue in get_compressed_size() by skipping generation
of useless Tar/TOC headers.

Fixes: https://github.com/dragonflyoss/image-service/issues/1129

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-03 09:57:11 +08:00
imeoer 88e3fe0aad
Merge pull request #1127 from jiangliu/nydus-exclude
nydus: exclude some components when publishing crate
2023-03-03 09:47:58 +08:00
Jiang Liu 2e3acd1aa0 nydus: exclude some components when publishing crate
Exclude some components when publishing crate, otherwise the package
gets too big and can't be published to crates.io due to the maximum
size (10MB) limitation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-02 14:11:21 +08:00
Jiang Liu 1a1f1ca801
Merge pull request #1123 from adamqqqplay/update-tempfile
deps: bump tempfile version to 3.4.0 to fix some security vulnerabilities
2023-03-01 23:39:26 +08:00
imeoer 449f37816d
Merge pull request #1126 from jiangliu/api-v0.2.2
api: prepare for publishing v0.2.2
2023-03-01 21:56:23 +08:00
Jiang Liu 0c1e5724b7 api: prepare for publishing v0.2.2
Prepare for publishing nydus-api v0.2.2.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-01 21:00:45 +08:00
taohong 3a09d0773f service: add README for nydus-service
Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-03-01 16:09:32 +08:00
Qinqi Qu 766dbd43af deps: bump tempfile version to 3.4.0
Update tempfile related crates to fix https://github.com/advisories/GHSA-mc8h-8q98-g5hr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-01 16:05:39 +08:00
Jiang Liu 02d1df36e7
Merge pull request #1108 from jiangliu/mapped-blkaddr
Correctly generate mapped-blkaddr for RAFS devslot array
2023-02-24 22:40:13 +08:00
Jiang Liu 0291d6e486 api: define helpers to detect cache type
Define helper functions to detect cache types.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 17:22:40 +08:00
Jiang Liu 753890bb04 nydus-image: correctly set mapped-blkaddr for devslot
Correctly set mapped-blkaddr for RAFS v6 device slots.
It will be used to represent a Nydus image as a block device.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 17:22:40 +08:00
Jiang Liu e2fe47d2ad nydus-image: refine dump_v6_bootstrap()
Refine dump_v6_bootstrap() to prepare for fixing a bug.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 17:22:39 +08:00
Jiang Liu 73b57c9f25 nydus-image: only support maximum 255 layers for RAFS v6
Only support a maximum of 255 layers for RAFS v6, because it can only
encode 255 blob indices.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 17:22:38 +08:00
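A small sketch of the constraint (hypothetical constant name; the 255 limit suggests the blob index is encoded in a single byte):

```rust
// RAFS v6 can only encode 255 distinct blob indices per image.
const RAFS_V6_MAX_BLOBS: usize = u8::MAX as usize; // 255

fn check_layer_count(blob_count: usize) -> Result<(), String> {
    if blob_count > RAFS_V6_MAX_BLOBS {
        return Err(format!(
            "RAFS v6 supports at most {} data blobs, got {}",
            RAFS_V6_MAX_BLOBS, blob_count
        ));
    }
    Ok(())
}
```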
463 changed files with 50990 additions and 31725 deletions

.github/ISSUE_TEMPLATE.md (new file)

@@ -0,0 +1,44 @@
## Additional Information
_The following information is very important in order to help us to help you. If you omit these details, your support request may be delayed or receive no attention at all._
### Version of nydus being used (nydusd --version)
<!-- Example:
Version: v2.2.0
Git Commit: a38f6b8d6257af90d59880265335dd55fab07668
Build Time: 2023-03-01T10:05:57.267573846Z
Profile: release
Rustc: rustc 1.66.1 (90743e729 2023-01-10)
-->
### Version of nydus-snapshotter being used (containerd-nydus-grpc --version)
<!-- Example:
Version: v0.5.1
Revision: a4b21d7e93481b713ed5c620694e77abac637abb
Go version: go1.18.6
Build time: 2023-01-28T06:05:42
-->
### Kernel information (uname -r)
_command result: uname -r_
### GNU/Linux Distribution, if applicable (cat /etc/os-release)
_command result: cat /etc/os-release_
### containerd-nydus-grpc command line used, if applicable (ps aux | grep containerd-nydus-grpc)
```
```
### client command line used, if applicable (such as: nerdctl, docker, kubectl, ctr)
```
```
### Screenshots (if applicable)
## Details about issue

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,21 @@
## Relevant Issue (if applicable)
_If there are Issues related to this PullRequest, please list them._
## Details
_Please describe the details of PullRequest._
## Types of changes
_What types of changes does your PullRequest introduce? Put an `x` in all the boxes that apply:_
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation Update (if none of the other choices apply)
## Checklist
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have updated the documentation accordingly.
- [ ] I have added tests to cover my changes.

.github/codecov.yml (new file)

@@ -0,0 +1,23 @@
coverage:
  status:
    project:
      default:
        enabled: yes
        target: auto # auto compares coverage to the previous base commit
        # adjust accordingly based on how flaky your tests are
        # this allows a 0.2% drop from the previous base commit coverage
        threshold: 0.2%
    patch: false

comment:
  layout: "reach, diff, flags, files"
  behavior: default
  require_changes: true # if true: only post the comment if coverage changes

codecov:
  require_ci_to_pass: false
  notify:
    wait_for_ci: true

# When modifying this file, please validate using
# curl -X POST --data-binary @codecov.yml https://codecov.io/validate

.github/copilot-instructions.md (new file)

@@ -0,0 +1,250 @@
# GitHub Copilot Instructions for Nydus
## Project Overview
Nydus is a high-performance container image service that implements a content-addressable file system on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
### Key Components
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
- **nydusctl**: CLI client for managing and querying nydusd daemon
- **nydus-service**: Library crate for integrating Nydus services into other projects
## Architecture Guidelines
### Crate Structure
```
- api/ # Nydus Image Service APIs and data structures
- builder/ # Image building and conversion logic
- rafs/ # RAFS filesystem implementation
- service/ # Daemon and service management framework
- storage/ # Core storage subsystem with backends and caching
- utils/ # Common utilities and helper functions
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
```
### Key Technologies
- **Language**: Rust with memory safety focus
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
- **Compression**: LZ4, Gzip, Zstd
- **Async Runtime**: Tokio (current thread for io-uring compatibility)
## Code Style and Patterns
### Rust Conventions
- Use `#![deny(warnings)]` in all binary crates
- Follow standard Rust naming conventions (snake_case, PascalCase)
- Prefer `anyhow::Result` for error handling in applications
- Use custom error types with `thiserror` for libraries
- Apply `#[macro_use]` for frequently used external crates like `log`
- Always format the code with `cargo fmt`
- Use `clippy` for linting and follow its suggestions
### Error Handling
```rust
// Prefer anyhow for applications
use anyhow::{bail, Context, Result};
// Use custom error types for libraries
use thiserror::Error;
#[derive(Error, Debug)]
pub enum NydusError {
#[error("Invalid arguments: {0}")]
InvalidArguments(String),
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
}
```
### Logging Patterns
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
- Include context in error messages: `.with_context(|| "description")`
- Use `info!`, `warn!`, `error!` macros consistently
### Configuration Management
- Use `serde` for JSON configuration serialization/deserialization
- Support both file-based and environment variable configuration
- Validate configurations at startup with clear error messages
- Follow the `ConfigV2` pattern for versioned configurations
## Development Guidelines
### Storage Backend Development
When implementing new storage backends:
- Implement the `BlobBackend` trait
- Support timeout, retry, and connection management
- Add configuration in the backend config structure
- Consider proxy support for high availability
- Implement proper error handling and logging
### Daemon Service Development
- Use the `NydusDaemon` trait for service implementations
- Support save/restore for hot upgrade functionality
- Implement proper state machine transitions
- Use `DaemonController` for lifecycle management
### RAFS Filesystem Features
- Support both RAFS v5 and v6 formats
- Implement chunk-level deduplication
- Handle prefetch optimization for container startup
- Support overlay filesystem operations
- Maintain POSIX compatibility
### API Development
- Use versioned APIs (v1, v2) with backward compatibility
- Implement HTTP endpoints with proper error handling
- Support both Unix socket and TCP communication
- Follow OpenAPI specification patterns
## Testing Patterns
### Unit Tests
- Test individual functions and modules in isolation
- Use `#[cfg(test)]` modules within source files
- Mock external dependencies when necessary
- Focus on error conditions and edge cases
### Integration Tests
- Place integration tests in `tests/` directory
- Test complete workflows and component interactions
- Use temporary directories for filesystem operations
- Clean up resources properly in test teardown
### Smoke Tests
- Located in `smoke/` directory using Go
- Test real-world scenarios with actual images
- Verify performance and functionality
- Use Bats framework for shell-based testing
## Performance Considerations
### I/O Optimization
- Use async I/O patterns with Tokio
- Implement prefetching for predictable access patterns
- Optimize chunk size (default 1MB) for workload characteristics
- Consider io-uring for high-performance scenarios
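A sketch of building the current-thread Tokio runtime mentioned above:
```rust
// Current-thread runtime, matching the project's choice for io-uring
// compatibility; `enable_all` turns on the I/O and time drivers.
fn run_io_task() -> std::io::Result<()> {
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()?;
    rt.block_on(async {
        // async I/O work goes here
    });
    Ok(())
}
```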
### Memory Management
- Use `Arc<T>` for shared ownership of large objects
- Implement lazy loading for metadata structures
- Consider memory mapping for large files
- Profile memory usage in performance-critical paths
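A minimal lazy-loading sketch combining `Arc` sharing with `std::sync::OnceLock` (type and field names are illustrative):
```rust
use std::sync::{Arc, OnceLock};

// Illustrative metadata table, shared across threads via Arc.
struct MetadataTable {
    entries: Vec<u64>,
}

static TABLE: OnceLock<Arc<MetadataTable>> = OnceLock::new();

fn metadata() -> Arc<MetadataTable> {
    TABLE
        .get_or_init(|| {
            // Loaded on first access instead of at startup.
            Arc::new(MetadataTable { entries: Vec::new() })
        })
        .clone()
}
```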
### Caching Strategy
- Implement blob caching with configurable backends
- Support compression in cache to save space
- Use chunk-level caching with efficient eviction policies
- Consider cache warming strategies for frequently accessed data
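A toy chunk cache showing the put-with-eviction shape; real caches in nydus-storage track usage for eviction and may store compressed data:
```rust
use std::collections::HashMap;

// Toy chunk cache with a crude capacity-based eviction policy.
struct ChunkCache {
    max_entries: usize,
    map: HashMap<u64, Vec<u8>>, // chunk id -> chunk data
}

impl ChunkCache {
    fn put(&mut self, id: u64, data: Vec<u8>) {
        if self.map.len() >= self.max_entries {
            // Evict an arbitrary entry; an LRU policy would track access order.
            let victim = self.map.keys().next().copied();
            if let Some(victim) = victim {
                self.map.remove(&victim);
            }
        }
        self.map.insert(id, data);
    }
}
```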
## Security Guidelines
### Data Integrity
- Implement end-to-end digest validation
- Support multiple hash algorithms (SHA256, Blake3)
- Verify chunk integrity on read operations
- Detect and prevent supply chain attacks
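A sketch of chunk verification with the `sha2` and `hex` crates (both already in the dependency tree):
```rust
use sha2::{Digest, Sha256};

// Compare a chunk's SHA256 against its expected (lowercase hex) digest.
fn verify_chunk(data: &[u8], expected_hex: &str) -> bool {
    hex::encode(Sha256::digest(data)) == expected_hex
}
```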
### Authentication
- Support registry authentication (basic auth, bearer tokens)
- Handle credential rotation and refresh
- Implement secure credential storage
- Support mutual TLS for backend connections
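For basic auth, the header value is just base64 over `user:pass`; a sketch using the base64 0.13-style API pinned in Cargo.toml (bearer-token flows instead negotiate a token through the registry's auth challenge):
```rust
// Build a registry Basic auth header value.
fn basic_auth_header(user: &str, pass: &str) -> String {
    format!("Basic {}", base64::encode(format!("{}:{}", user, pass)))
}
```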
## Specific Code Patterns
### Configuration Loading
```rust
// Standard pattern for configuration loading; the binding must be `mut`
// because the environment override below mutates it.
let mut config = match config_path {
    Some(path) => ConfigV2::from_file(path)?,
    None => ConfigV2::default(),
};

// Environment variable override
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
    config.update_registry_auth_info(&auth);
}
```
### Daemon Lifecycle
```rust
// Standard daemon initialization pattern
let daemon = create_daemon(config, build_info)?;
DAEMON_CONTROLLER.set_daemon(daemon);

// Event loop management
if DAEMON_CONTROLLER.is_active() {
    DAEMON_CONTROLLER.run_loop();
}

// Graceful shutdown
DAEMON_CONTROLLER.shutdown();
```
### Blob Access Pattern
```rust
// Standard blob read pattern
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
let blob_device = factory.get_device(&blob_info)?;
blob_device.read(&mut bio)?;
```
## Documentation Standards
### Code Documentation
- Document all public APIs with `///` comments
- Include examples in documentation
- Document safety requirements for unsafe code
- Explain complex algorithms and data structures
### Architecture Documentation
- Maintain design documents in `docs/` directory
- Update documentation when adding new features
- Include diagrams for complex interactions
- Document configuration options comprehensively
### Release Notes
- Document breaking changes clearly
- Include migration guides for major versions
- Highlight performance improvements
- List new features and bug fixes
## Container and Cloud Native Patterns
### OCI Compatibility
- Maintain compatibility with OCI image spec
- Support standard container runtimes (runc, Kata)
- Implement proper layer handling and manifest generation
- Support multi-architecture images
### Kubernetes Integration
- Design for Kubernetes CRI integration
- Support containerd snapshotter pattern
- Handle pod lifecycle events appropriately
- Implement proper resource cleanup
### Cloud Storage Integration
- Support major cloud providers (AWS S3, Alibaba OSS)
- Implement proper credential management
- Handle network interruptions gracefully
- Support cross-region replication patterns
## Build and Release
### Build Configuration
- Use `Cargo.toml` workspace configuration
- Support cross-compilation for multiple architectures
- Implement proper feature flags for optional components
- Use consistent dependency versioning
### Release Process
- Tag releases with semantic versioning
- Generate release binaries for supported platforms
- Update documentation with release notes
- Validate release artifacts before publishing
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.

.github/workflows/Dockerfile.cross (new file)

@ -0,0 +1,40 @@
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ARG RUST_VERSION=1.84.0
RUN apt-get update && apt-get install -y \
software-properties-common \
build-essential \
curl \
git \
libssl-dev \
pkg-config \
cmake \
gcc-riscv64-linux-gnu \
g++-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update && apt-get install -y \
gcc-14 \
g++-14 \
gcc-14-riscv64-linux-gnu \
g++-14-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /root
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
riscv64gc-unknown-linux-gnu
RUN mkdir -p ~/.cargo && echo '\
[target.riscv64gc-unknown-linux-gnu]\n\
linker = "riscv64-linux-gnu-gcc-14"' > ~/.cargo/config.toml
CMD ["/bin/bash"]

.github/workflows/benchmark.yml (new file)

@ -0,0 +1,329 @@
name: Benchmark
on:
schedule:
# Run at 03:00 clock UTC on Monday and Wednesday
- cron: "0 03 * * 1,3"
pull_request:
paths:
- '.github/workflows/benchmark.yml'
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
nydus-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
benchmark-description:
runs-on: ubuntu-latest
steps:
- name: Description
run: |
echo "## Benchmark Environment" > $GITHUB_STEP_SUMMARY
echo "| operating system | cpu | memory " >> $GITHUB_STEP_SUMMARY
echo "|:----------------:|:---:|:------ " >> $GITHUB_STEP_SUMMARY
echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |" >> $GITHUB_STEP_SUMMARY
benchmark-oci:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=oci
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-oci.json
export SNAPSHOTTER=overlayfs
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: smoke/${{ matrix.image }}-oci.json
benchmark-fsversion-v5:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-5
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v5.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-fsversion-v5-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v5.json
benchmark-fsversion-v6:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-6
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v6.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-fsversion-v6-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v6.json
benchmark-zran:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=zran
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-zran.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-zran-${{ matrix.image }}
path: smoke/${{ matrix.image }}-zran.json
benchmark-result:
runs-on: ubuntu-latest
needs: [benchmark-oci, benchmark-fsversion-v5, benchmark-fsversion-v6, benchmark-zran]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download benchmark-oci
uses: actions/download-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v5
uses: actions/download-artifact@v4
with:
name: benchmark-fsversion-v5-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v6
uses: actions/download-artifact@v4
with:
name: benchmark-fsversion-v6-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-zran
uses: actions/download-artifact@v4
with:
name: benchmark-zran-${{ matrix.image }}
path: benchmark-result
- name: Benchmark Summary
run: |
case ${{matrix.image}} in
"wordpress")
echo "### workload: wait the 80 port response" > $GITHUB_STEP_SUMMARY
;;
"node")
echo "### workload: node index.js; wait the 80 port response" > $GITHUB_STEP_SUMMARY
;;
"python")
echo "### workload: python -c 'print("hello")'" > $GITHUB_STEP_SUMMARY
;;
"golang")
echo "### workload: go run main.go" > $GITHUB_STEP_SUMMARY
;;
"ruby")
echo "### workload: ruby -e "puts \"hello\""" > $GITHUB_STEP_SUMMARY
;;
"amazoncorretto")
echo "### workload: javac Main.java; java Main" > $GITHUB_STEP_SUMMARY
;;
esac
cd benchmark-result
metric_files=(
"${{ matrix.image }}-oci.json"
"${{ matrix.image }}-fsversion-v5.json"
"${{ matrix.image }}-fsversion-v6.json"
"${{ matrix.image }}-zran.json"
)
echo "| bench-result | e2e-time(s) | read-count | read-amount(MB) | image-size(MB) |convert-time(s)|" >> $GITHUB_STEP_SUMMARY
echo "|:-------------|:-----------:|:----------:|:---------------:|:--------------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for file in "${metric_files[@]}"; do
name=$(basename "$file" .json | sed 's/^[^-]*-\(.*\)$/\1/')
data=$(jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024)) \(.conversion_elapsed / 1e9)"' "$file" | \
awk '{ printf "%.2f | %.0f | %.2f | %.2f | %.2f", $1, $2, $3, $4, $5 }')
echo "| $name | $data |" >> $GITHUB_STEP_SUMMARY
done


@ -18,26 +18,18 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ~1.18
- name: Golang Cache
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.47.3
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.61.0
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
@ -46,17 +38,18 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
rustup component add rustfmt clippy
make
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
@ -67,15 +60,15 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Build fsck.erofs
run: |
sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev
git clone https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
cd erofs-utils && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
cd erofs-utils && git checkout v1.6 && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/
- name: Upload fsck.erofs
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: fsck-erofs-artifact
path: |
@ -86,25 +79,25 @@ jobs:
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
@ -113,6 +106,7 @@ jobs:
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-zran
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-oci-ref"
ghcr_repo=${{ env.REGISTRY }}/${{ env.ORGANIZATION }}
@ -136,7 +130,8 @@ jobs:
--oci-ref \
--source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref \
--platform linux/amd64,linux/arm64
--platform linux/amd64,linux/arm64 \
--output-json convert-zran/${I}.json
# check zran image and referenced oci image
sudo rm -rf ./tmp
@ -144,29 +139,34 @@ jobs:
--source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref
sudo fsck.erofs -d1 output/nydus_bootstrap
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
convert-native-v5:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build]
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
@ -174,6 +174,7 @@ jobs:
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v5
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v5"
# for pre-built images
@ -182,42 +183,49 @@ jobs:
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v5 \
--fs-version 5 \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v5 \
--fs-version 5 \
--platform linux/amd64,linux/arm64
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v5/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v5
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
convert-native-v6:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
@ -226,6 +234,7 @@ jobs:
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6"
# for pre-built images
@ -234,17 +243,147 @@ jobs:
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6 \
--fs-version 6 \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6 \
--fs-version 6 \
--platform linux/amd64,linux/arm64
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v6/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6
sudo fsck.erofs -d1 output/nydus_bootstrap
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
convert-native-v6-batch:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check RAFS v6 batch images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6-batch
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6-batch"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v6-batch/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
convert-metric:
runs-on: ubuntu-latest
needs: [convert-zran, convert-native-v5, convert-native-v6, convert-native-v6-batch]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download Zran Metric
uses: actions/download-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
- name: Download V5 Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
- name: Download V6 Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
- name: Download V6 Batch Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
- name: Summary
run: |
echo "## Image Size(MB)" > $GITHUB_STEP_SUMMARY
echo "> Compare the size of OCI image and Nydus image."
echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|oci/nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:--------:|:------------:|:----------:|:----------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-zran/${I}.json) / 1048576")")
zranTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-zran/${I}.json) / 1048576")")
v5SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v5/${I}.json) / 1048576")")
v5TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v5/${I}.json) / 1048576")")
v6SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6/${I}.json) / 1048576")")
v6TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6/${I}.json) / 1048576")")
batchSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
batchTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|${batchSourceImageSize}/${batchTargetImageSize}|" >> $GITHUB_STEP_SUMMARY
done
echo "## Conversion Time(ms)" >> $GITHUB_STEP_SUMMARY
echo "> Time elapsed to convert OCI image to Nydus image."
echo "|image name|nydus-zran|nydus-v5|nydus-v6|nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:---:|:--:|:-------:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-zran/${I}.json) / 1000000")")
v5ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v5/${I}.json) / 1000000")")
v6ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6/${I}.json) / 1000000")")
batchConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6-batch/${I}.json) / 1000000")")
echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|${batchConversionElapsed}|" >> $GITHUB_STEP_SUMMARY
done
- uses: geekyeggo/delete-artifact@v2
with:
name: '*'


@ -1,113 +0,0 @@
name: Integration Test
on:
schedule:
# Do conversion every day at 00:03 clock UTC
- cron: "3 0 * * *"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch: [amd64]
fs_version: [5, 6]
branch: [master, stable/v2.1]
steps:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.18
- name: Setup pytest
run: |
sudo apt install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3
sudo python3 -m pip install --upgrade pip
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
- name: containerd runc and crictl
run: |
sudo wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz
sudo tar zxvf ./crictl-v1.17.0-linux-amd64.tar.gz -C /usr/local/bin
sudo wget https://github.com/containerd/containerd/releases/download/v1.4.3/containerd-1.4.3-linux-amd64.tar.gz
mkdir containerd
sudo tar -zxf ./containerd-1.4.3-linux-amd64.tar.gz -C ./containerd
sudo mv ./containerd/bin/* /usr/bin/
sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64 -O /usr/bin/runc
sudo chmod +x /usr/bin/runc
- name: Set up ossutils
run: |
sudo wget https://gosspublic.alicdn.com/ossutil/1.7.13/ossutil64 -O /usr/bin/ossutil64
sudo chmod +x /usr/bin/ossutil64
- uses: actions/checkout@v3
with:
ref: ${{ matrix.branch }}
- name: Cache cargo
uses: Swatinem/rust-cache@v1
with:
target-dir: |
./target
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.1 cross
rustup component add rustfmt clippy
make -e RUST_TARGET=$RUST_TARGET -e CARGO=cross static-release
make release -C contrib/nydus-backend-proxy/
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
pwd
ls -lh target/$RUST_TARGET/release
- name: Set up anchor file
env:
OSS_AK_ID: ${{ secrets.OSS_TEST_AK_ID }}
OSS_AK_SEC: ${{ secrets.OSS_TEST_AK_SECRET }}
FS_VERSION: ${{ matrix.fs_version }}
run: |
sudo mkdir -p /home/runner/nydus-test-workspace
sudo mkdir -p /home/runner/nydus-test-workspace/proxy_blobs
sudo cat > /home/runner/work/image-service/image-service/contrib/nydus-test/anchor_conf.json << EOF
{
"workspace": "/home/runner/nydus-test-workspace",
"nydus_project": "/home/runner/work/image-service/image-service",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "localhost:5000",
"registry_namespace": "",
"registry_auth": "YOURAUTH==",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/home/runner/nydus-test-workspace/proxy_blobs"
},
"oss": {
"endpoint": "oss-cn-beijing.aliyuncs.com",
"ak_id": "$OSS_AK_ID",
"ak_secret": "$OSS_AK_SEC",
"bucket": "nydus-ci"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd",
"ossutil_bin": "/usr/bin/ossutil64"
},
"fs_version": "$FS_VERSION",
"logging_file": "stderr",
"target": "musl"
}
EOF
- name: run e2e tests
run: |
cd /home/runner/work/image-service/image-service/contrib/nydus-test
sudo mkdir -p /blobdir
sudo python3 nydus_test_config.py --dist fs_structure.yaml
sudo pytest -vs -x --durations=0 functional-test/test_api.py functional-test/test_nydus.py functional-test/test_layered_image.py

.github/workflows/miri.yml (new file)

@ -0,0 +1,45 @@
name: Miri Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 clock UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
nydus-unit-test-with-miri:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Install Miri
run: |
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
- name: Unit Test with Miri
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
grep -C 2 'Undefined Behavior' miri-ut.log


@ -19,28 +19,60 @@ jobs:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v1
uses: Swatinem/rust-cache@v2
with:
target-dir: |
./target
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build nydus-rs Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name : Build Nydus-rs RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.4 cross
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo mv target/$RUST_TARGET/release/nydus-image .
sudo mv target/$RUST_TARGET/release/nydusctl .
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: nydus-artifacts-linux-${{ matrix.arch }}
path: |
@ -50,27 +82,33 @@ jobs:
configs
nydus-macos:
runs-on: macos-11
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v1
uses: Swatinem/rust-cache@v2
with:
target-dir: |
./target
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
rustup component add rustfmt clippy
make -e INSTALL_DIR_PREFIX=. install
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: nydus-artifacts-darwin-${{ matrix.arch }}
path: |
@ -87,29 +125,22 @@ jobs:
env:
DOCKER: false
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
- uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version: '1.18'
- name: cache go mod
uses: actions/cache@v2
with:
path: /go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: build contrib go components
run: |
make -e GOARCH=${{ matrix.arch }} contrib-release
sudo mv contrib/ctr-remote/bin/ctr-remote .
sudo mv contrib/nydusify/cmd/nydusify .
sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs .
- name: store-artifacts
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: nydus-artifacts-linux-${{ matrix.arch }}
name: nydus-artifacts-linux-${{ matrix.arch }}-contrib
path: |
ctr-remote
nydusify
nydus-overlayfs
containerd-nydus-grpc
@ -123,9 +154,10 @@ jobs:
needs: [nydus-linux, contrib-linux]
steps:
- name: download artifacts
uses: actions/download-artifact@v2
uses: actions/download-artifact@v4
with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare release tarball
run: |
@ -139,9 +171,9 @@ jobs:
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: nydus-release-tarball
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
@ -151,12 +183,12 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64]
arch: [amd64, arm64]
os: [darwin]
needs: [nydus-macos]
steps:
- name: download artifacts
uses: actions/download-artifact@v2
uses: actions/download-artifact@v4
with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static
@ -172,9 +204,9 @@ jobs:
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: nydus-release-tarball
name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
@ -184,9 +216,10 @@ jobs:
needs: [prepare-tarball-linux, prepare-tarball-darwin]
steps:
- name: download artifacts
uses: actions/download-artifact@v2
uses: actions/download-artifact@v4
with:
name: nydus-release-tarball
pattern: nydus-release-tarball-*
merge-multiple: true
path: nydus-tarball
- name: prepare release env
run: |
@ -206,3 +239,87 @@ jobs:
generate_release_notes: true
files: |
${{ env.tarballs }}
goreleaser:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
arch: [amd64, arm64]
os: [linux]
needs: [nydus-linux, contrib-linux]
permissions:
contents: write
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
submodules: recursive
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare context
run: |
chmod +x nydus-static/*
export GOARCH=${{ matrix.arch }}
echo "GOARCH: $GOARCH"
sh ./goreleaser.sh
- name: Check GoReleaser config
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
with:
version: latest
args: check
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
id: run-goreleaser
with:
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate subject
id: hash
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
if test "$hashes" = ""; then # goreleaser < v1.13.0
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
hashes=$(cat $checksum_file | base64 -w0)
fi
echo "hashes=$hashes" >> $GITHUB_OUTPUT
- name: Set tag output
id: tag
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
provenance:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
needs: [goreleaser]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
draft-release: true


@ -2,10 +2,10 @@ name: Smoke Test
on:
push:
branches: ["*"]
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["*"]
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 clock UTC
@ -18,105 +18,208 @@ env:
jobs:
contrib-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ~1.18
- name: Golang Cache
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.47.3
make -e DOCKER=false nydusify-release
make -e DOCKER=false contrib-test
make -e DOCKER=false GOARCH=${{ matrix.arch }} contrib-release
- name: Upload Nydusify
uses: actions/upload-artifact@master
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
nydus-build:
contrib-lint:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- path: contrib/nydusify
- path: contrib/nydus-overlayfs
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache: false
- name: Lint
uses: golangci/golangci-lint-action@v6
with:
version: v1.64
working-directory: ${{ matrix.path }}
args: --timeout=10m --verbose
nydus-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
- name: Build Nydus
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
rustup component add rustfmt clippy
make
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build Nydus Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
sudo mv target/$RUST_TARGET/release/nydusd .
sudo mv target/$RUST_TARGET/release/nydus-image .
- name: Upload Nydus Binaries
uses: actions/upload-artifact@master
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
nydus-image
nydusd
nydusd-build-macos:
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
nydus-integration-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Docker Cache
uses: jpribyl/action-docker-layer-caching@v0.1.0
continue-on-error: true
- name: Download Nydus
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: |
target/release
- name: Download Nydusify
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Older Binaries
id: prepare-binaries
run: |
versions=(v0.1.0 v2.1.4)
version_archs=(v0.1.0-x86_64 v2.1.4-linux-amd64)
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
versions=(v0.1.0 ${NYDUS_STABLE_VERSION})
version_archs=(v0.1.0-x86_64 ${NYDUS_STABLE_VERSION}-linux-amd64)
for i in ${!versions[@]}; do
version=${versions[$i]}
version_arch=${version_archs[$i]}
wget -q https://github.com/dragonflyoss/image-service/releases/download/$version/nydus-static-$version_arch.tgz
wget -q https://github.com/dragonflyoss/nydus/releases/download/$version/nydus-static-$version_arch.tgz
sudo mkdir nydus-$version /usr/bin/nydus-$version
sudo tar xzf nydus-static-$version_arch.tgz -C nydus-$version
sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/
done
- name: Golang Cache
uses: actions/cache@v3
- name: Setup Golang
uses: actions/setup-go@v5
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Free Disk Space
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true
- name: Integration Test
run: |
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest
sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest
sudo bash misc/prepare.sh
versions=(v0.1.0 v2.1.4 latest)
version_exports=(v0_1_0 v2_1_4 latest)
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
export NYDUS_STABLE_VERSION_EXPORT="${NYDUS_STABLE_VERSION//./_}"
versions=(v0.1.0 ${NYDUS_STABLE_VERSION} latest)
version_exports=(v0_1_0 ${NYDUS_STABLE_VERSION_EXPORT} latest)
for i in ${!version_exports[@]}; do
version=${versions[$i]}
version_export=${version_exports[$i]}
@ -125,26 +228,159 @@ jobs:
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.47.3
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8
sudo -E make smoke-only
nydus-unit-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.0
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Unit Test
run: |
make ut
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
contrib-unit-test-coverage:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Unit Test
run: |
make -e DOCKER=false contrib-test
- name: Upload contrib coverage file
uses: actions/upload-artifact@v4
with:
name: contrib-test-coverage-artifact
path: |
contrib/nydusify/coverage.txt
nydus-unit-test-coverage:
runs-on: ubuntu-latest
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Generate code coverage
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
- name: Upload nydus coverage file
uses: actions/upload-artifact@v4
with:
name: nydus-test-coverage-artifact
path: |
codecov.json
upload-coverage-to-codecov:
runs-on: ubuntu-latest
needs: [contrib-unit-test-coverage, nydus-unit-test-coverage]
steps:
- uses: actions/checkout@v4
- name: Download nydus coverage file
uses: actions/download-artifact@v4
with:
name: nydus-test-coverage-artifact
- name: Download contrib coverage file
uses: actions/download-artifact@v4
with:
name: contrib-test-coverage-artifact
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
files: ./codecov.json,./coverage.txt
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
fail_ci_if_error: true
nydus-cargo-deny:
name: cargo-deny
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- uses: EmbarkStudios/cargo-deny-action@v1
- uses: actions/checkout@v4
- uses: EmbarkStudios/cargo-deny-action@v2
performance-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- mode: fs-version-5
- mode: fs-version-6
- mode: zran
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh
- name: Performance Test
run: |
export PERFORMANCE_TEST_MODE=${{ matrix.mode }}
sudo -E make smoke-performance
takeover-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh takeover_test
- name: Takeover Test
run: |
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover

.github/workflows/stale.yaml (new file)

@ -0,0 +1,31 @@
name: Close stale issues and PRs
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
id: stale
with:
delete-branch: true
days-before-close: 7
days-before-stale: 60
days-before-pr-close: 7
days-before-pr-stale: 60
stale-issue-label: "stale"
exempt-issue-labels: bug,wip
exempt-pr-labels: bug,wip
exempt-all-milestones: true
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'

.gitignore

@ -1,8 +1,14 @@
**/target*
**/*.rs.bk
/.vscode
**/.vscode
.idea
.cargo
**/.pyc
__pycache__
.DS_Store
go.work.sum
dist/
nydus-static/
.goreleaser.yml
metadata.db
tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a


@ -1,6 +1,6 @@
## CNCF Dragonfly Nydus Adopters
A non-exhaustive list of containerd adopters is provided below.
A non-exhaustive list of Nydus adopters is provided below.
Please kindly share your experience about Nydus with us and help us to improve Nydus ❤️.
**_[Alibaba Cloud](https://www.alibabacloud.com)_** - Aliyun serverless image pull time drops from 20 seconds to 0.8 seconds.
@ -12,3 +12,5 @@ Please kindly share your experience about Nydus with us and help us to improve N
**_[KuaiShou](https://www.kuaishou.com)_** - Starting to deploy millions of containers with Dragonfly and Nydus.
**_[Yue Miao](https://www.laiyuemiao.com)_** - The startup time of micro service has been greatly improved, and reduced the network consumption.
**_[CoreWeave](https://coreweave.com/)_** - Dramatically reduce the pull time of container image which embedded machine learning models.

Cargo.lock (generated; diff suppressed because it is too large)


@ -6,9 +6,11 @@ description = "Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
repository = "https://github.com/dragonflyoss/nydus"
exclude = ["contrib/", "smoke/", "tests/"]
edition = "2021"
resolver = "2"
build = "build.rs"
[profile.release]
panic = "abort"
@ -31,45 +33,57 @@ path = "src/lib.rs"
[dependencies]
anyhow = "1"
base64 = "0.13.0"
clap = { version = "4.0.18", features = ["derive", "cargo"] }
fuse-backend-rs = "0.10.1"
flexi_logger = { version = "0.25", features = ["compress"] }
fuse-backend-rs = "^0.12.0"
hex = "0.4.3"
hyper = "0.14.11"
hyperlocal = "0.8.0"
indexmap = "1"
lazy_static = "1"
libc = "0.2"
log = "0.4.8"
log-panics = { version = "2.1.0", features = ["with-backtrace"] }
mio = { version = "0.8", features = ["os-poll", "os-ext"] }
nix = "0.24.0"
rlimit = "0.9.0"
rusqlite = { version = "0.30.0", features = ["bundled"] }
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.51"
sha2 = "0.10.2"
tar = "0.4.38"
tokio = { version = "1.24", features = ["macros"] }
vmm-sys-util = "0.10.0"
xattr = "0.2.3"
tar = "0.4.40"
tokio = { version = "1.35.1", features = ["macros"] }
# Build static linked openssl library
openssl = { version = "0.10.45", features = ["vendored"] }
# pin openssl-src to bring in fix for https://rustsec.org/advisories/RUSTSEC-2022-0032
#openssl-src = { version = "111.22" }
openssl = { version = '0.10.72', features = ["vendored"] }
nydus-api = { version = "0.2.1", path = "api", features = ["handler"] }
nydus-app = { version = "0.3.2", path = "app" }
nydus-error = { version = "0.2.3", path = "error" }
nydus-rafs = { version = "0.2.2", path = "rafs" }
nydus-service = { version = "0.2.0", path = "service" }
nydus-storage = { version = "0.6.2", path = "storage" }
nydus-utils = { version = "0.4.1", path = "utils" }
nydus-api = { version = "0.4.0", path = "api", features = [
"error-backtrace",
"handler",
] }
nydus-builder = { version = "0.2.0", path = "builder" }
nydus-rafs = { version = "0.4.0", path = "rafs" }
nydus-service = { version = "0.4.0", path = "service", features = [
"block-device",
] }
nydus-storage = { version = "0.7.0", path = "storage", features = [
"prefetch-rate-limit",
] }
nydus-utils = { version = "0.5.0", path = "utils" }
vhost = { version = "0.5.0", features = ["vhost-user-slave"], optional = true }
vhost-user-backend = { version = "0.7.0", optional = true }
virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
virtio-queue = { version = "0.6.0", optional = true }
vm-memory = { version = "0.9.0", features = ["backend-mmap"], optional = true }
vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
vhost-user-backend = { version = "0.15.0", optional = true }
virtio-bindings = { version = "0.1", features = [
"virtio-v5_0_0",
], optional = true }
virtio-queue = { version = "0.12.0", optional = true }
vm-memory = { version = "0.14.1", features = ["backend-mmap","backend-atomic"], optional = true }
vmm-sys-util = { version = "0.12.1", optional = true }
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dev-dependencies]
xattr = "1.0.1"
vmm-sys-util = "0.12.1"
[features]
default = [
@ -79,6 +93,7 @@ default = [
"backend-s3",
"backend-http-proxy",
"backend-localdisk",
"dedup",
]
virtiofs = [
"nydus-service/virtiofs",
@ -87,13 +102,29 @@ virtiofs = [
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vmm-sys-util",
]
block-nbd = ["nydus-service/block-nbd"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = ["nydus-storage/backend-localdisk"]
backend-localdisk = [
"nydus-storage/backend-localdisk",
"nydus-storage/backend-localdisk-gpt",
]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]
dedup = ["nydus-storage/dedup"]
[workspace]
members = ["api", "app", "blobfs", "clib", "error", "rafs", "storage", "service", "utils"]
members = [
"api",
"builder",
"clib",
"rafs",
"storage",
"service",
"upgrade",
"utils",
]

MAINTAINERS.md Normal file

@@ -0,0 +1,15 @@
# Maintainers
<!-- markdownlint-disable -->
| GitHub ID | Name | Email | Company |
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
<!-- markdownlint-restore -->

Makefile

@@ -1,4 +1,4 @@
all: build
all: release
all-build: build contrib-build
@@ -15,9 +15,10 @@ INSTALL_DIR_PREFIX ?= "/usr/local/bin"
DOCKER ?= "true"
CARGO ?= $(shell which cargo)
RUSTUP ?= $(shell which rustup)
CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry
SUDO = $(shell which sudo)
CARGO_COMMON ?=
EXCLUDE_PACKAGES =
UNAME_M := $(shell uname -m)
@@ -43,7 +44,6 @@ endif
endif
RUST_TARGET_STATIC ?= $(STATIC_TARGET)
CTR-REMOTE_PATH = contrib/ctr-remote
NYDUSIFY_PATH = contrib/nydusify
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
@@ -51,12 +51,6 @@ current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
env_go_path := $(shell go env GOPATH 2> /dev/null)
go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
# Set the env DIND_CACHE_DIR to specify a cache directory for
# docker-in-docker container, used to cache data for docker pull,
# thus mitigating the impact of the docker hub rate limit, for example:
# env DIND_CACHE_DIR=/path/to/host/var-lib-docker make docker-nydusify-smoke
dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
# Functions
# Func: build golang target in docker
@@ -66,7 +60,7 @@ dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
define build_golang
echo "Building target $@ by invoking: $(2)"
if [ $(DOCKER) = "true" ]; then \
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.18 $(2) ;\
docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.21 $(2) ;\
else \
$(2) -C $(1); \
fi
@@ -90,7 +84,7 @@ endef
@${CARGO} clean --target ${RUST_TARGET_STATIC} --release -p libz-sys
# Targets that are exposed to developers and users.
build: .format .release_version
build: .format
${CARGO} build $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# Cargo will skip the check if it has already been performed
${CARGO} clippy --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --bins --tests -- -Dwarnings --allow clippy::unnecessary_cast --allow clippy::needless_borrow
@@ -108,60 +102,57 @@ install: release
@sudo install -m 755 target/release/nydus-image $(INSTALL_DIR_PREFIX)/nydus-image
@sudo install -m 755 target/release/nydusctl $(INSTALL_DIR_PREFIX)/nydusctl
# unit test
ut: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --no-fail-fast --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
# you need to install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
ut-nextest: .release_version
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# install miri first from https://github.com/rust-lang/miri/
miri-ut-nextest: .release_version
MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# install test dependencies
pre-coverage:
${CARGO} +stable install cargo-llvm-cov --locked
${RUSTUP} component add llvm-tools-preview
# print unit test coverage to console
coverage: pre-coverage
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
# write unit test coverage to codecov.json, used for GitHub CI
coverage-codecov:
TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
smoke-only:
make -C smoke test
smoke-performance:
make -C smoke test-performance
smoke-benchmark:
make -C smoke test-benchmark
smoke-takeover:
make -C smoke test-takeover
smoke: release smoke-only
docker-nydus-smoke:
docker build -t nydus-smoke --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/nydus-smoke
docker run --rm --privileged ${CARGO_BUILD_GEARS} \
-e TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) \
-v ~/.cargo:/root/.cargo \
-v $(TEST_WORKDIR_PREFIX) \
-v ${current_dir}:/nydus-rs \
nydus-smoke
contrib-build: nydusify nydus-overlayfs
# TODO: Nydusify smoke will remain time-consuming for a while since it relies on musl nydusd and nydus-image.
# So musl compilation must be involved.
# And docker-in-docker deployment involves image building?
docker-nydusify-smoke: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestSmoke
contrib-release: nydusify-release nydus-overlayfs-release
docker-nydusify-image-test: docker-static
$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
docker build -t nydusify-smoke misc/nydusify-smoke
docker run --rm --privileged \
-e BACKEND_TYPE=$(BACKEND_TYPE) \
-e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestDockerHubImage
contrib-test: nydusify-test nydus-overlayfs-test
# Run integration smoke test in docker-in-docker container. It requires some special settings,
# refer to `misc/example/README.md` for details.
docker-smoke: docker-nydus-smoke docker-nydusify-smoke
contrib-lint: nydusify-lint nydus-overlayfs-lint
contrib-build: nydusify ctr-remote nydus-overlayfs
contrib-release: nydusify-release ctr-remote-release \
nydus-overlayfs-release
contrib-test: nydusify-test ctr-remote-test \
nydus-overlayfs-test
contrib-clean: nydusify-clean ctr-remote-clean \
nydus-overlayfs-clean
contrib-clean: nydusify-clean nydus-overlayfs-clean
contrib-install:
@sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX)
@sudo install -m 755 contrib/ctr-remote/bin/ctr-remote $(INSTALL_DIR_PREFIX)/ctr-remote
@sudo install -m 755 contrib/nydus-overlayfs/bin/nydus-overlayfs $(INSTALL_DIR_PREFIX)/nydus-overlayfs
@sudo install -m 755 contrib/nydusify/cmd/nydusify $(INSTALL_DIR_PREFIX)/nydusify
@@ -177,17 +168,8 @@ nydusify-test:
nydusify-clean:
$(call build_golang,${NYDUSIFY_PATH},make clean)
ctr-remote:
$(call build_golang,${CTR-REMOTE_PATH},make)
ctr-remote-release:
$(call build_golang,${CTR-REMOTE_PATH},make release)
ctr-remote-test:
$(call build_golang,${CTR-REMOTE_PATH},make test)
ctr-remote-clean:
$(call build_golang,${CTR-REMOTE_PATH},make clean)
nydusify-lint:
$(call build_golang,${NYDUSIFY_PATH},make lint)
nydus-overlayfs:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make)
@@ -201,17 +183,9 @@ nydus-overlayfs-test:
nydus-overlayfs-clean:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make clean)
nydus-overlayfs-lint:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make lint)
docker-static:
docker build -t nydus-rs-static --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/musl-static
docker run --rm ${CARGO_BUILD_GEARS} -e RUST_TARGET=${RUST_TARGET_STATIC} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static
docker-example: all-static-release
cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydusd misc/example
cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydus-image misc/example
cp contrib/nydusify/cmd/nydusify misc/example
docker build -t nydus-rs-example misc/example
@cid=$(shell docker run --rm -t -d --privileged $(dind_cache_mount) nydus-rs-example)
@docker exec $$cid /run.sh
@EXIT_CODE=$$?
@docker rm -f $$cid
@exit $$EXIT_CODE

README.md

@@ -1,76 +1,82 @@
[**[⬇️ Download]**](https://github.com/dragonflyoss/nydus/releases)
[**[📖 Website]**](https://nydus.dev/)
[**[☸ Quick Start (Kubernetes)]**](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md)
[**[🤓 Quick Start (nerdctl)]**](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md)
[**[❓ FAQs & Troubleshooting]**](https://github.com/dragonflyoss/nydus/wiki/FAQ)
# Nydus: Dragonfly Container Image Service
<p><img src="misc/logo.svg" width="170"></p>
[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/image-service?style=flat)](https://github.com/dragonflyoss/image-service/releases)
[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/nydus?style=flat)](https://github.com/dragonflyoss/nydus/releases)
[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)
[![Smoke Test](https://github.com/dragonflyoss/image-service/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/ci.yml)
[![Image Conversion](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml)
[![Release Test Daily](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml)
[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/image-service?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/image-service)
[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus)
[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
[![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule)
[![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule)
[![Release Test Daily](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml?query=event%3Aschedule)
[![Benchmark](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml?query=event%3Aschedule)
[![Coverage](https://codecov.io/gh/dragonflyoss/nydus/branch/master/graph/badge.svg)](https://codecov.io/gh/dragonflyoss/nydus)
## Introduction
The nydus project implements a content-addressable filesystem on top of a RAFS format that improves the current OCI image specification in terms of container launching speed, image space and network bandwidth efficiency, as well as data integrity.
Nydus implements a content-addressable file system on the RAFS format, which enhances the current OCI image specification by improving container launch speed, image space and network bandwidth efficiency, and data integrity.
The following benchmarking result shows the performance improvement compared with the OCI image for the container cold startup elapsed time on containerd. As the OCI image size increases, the container startup time when using a Nydus image remains very short.
The following benchmarking results demonstrate that Nydus images significantly outperform OCI images in terms of container cold startup elapsed time on containerd, particularly as the OCI image size increases.
![Container Cold Startup](./misc/perf.jpg)
Nydus' key features include:
## Principles
- Container images can be downloaded on demand in chunks for lazy pulling to boost container startup
- Chunk-based content-addressable data de-duplication to minimize storage, transmission and memory footprints
- Merged filesystem tree to optionally remove all intermediate layers
- in-kernel EROFS or FUSE filesystem together with overlayfs to provide full POSIX compatibility
- E2E image data integrity check. So security issues like "Supply Chain Attack" can be avoided and detected at runtime
- Compatible with the OCI artifacts spec and distribution spec, so nydus image can be stored in a regular container registry
- Native [eStargz](https://github.com/containerd/stargz-snapshotter) image support with remote snapshotter plugin `nydus-snapshotter` for containerd runtime.
- Various container image storage backends are supported. For example, Registry, NAS, Aliyun/OSS, S3.
- Integrated with CNCF incubating project Dragonfly to distribute container images in P2P fashion and mitigate the pressure on container registries
- Capable of prefetching data blocks before user I/O hits them, thus reducing read latency
- Records file access patterns at runtime, gathering access traces/logs by which abnormal user behaviors are easily caught
- Access-trace-based prefetch table
- User I/O amplification to reduce the number of small requests to the storage backend.
***Provide Fast, Secure And Easy Access to Data Distribution***
Currently Nydus includes the following tools:
- **Performance**: Second-level container startup speed, millisecond-level function computation code package loading speed.
- **Low Cost**: Written in the memory-safe language `Rust`; numerous optimizations help reduce memory, CPU, and network consumption.
- **Flexible**: Supports container runtimes such as [runC](https://github.com/opencontainers/runc) and [Kata](https://github.com/kata-containers), and provides [Confidential Containers](https://github.com/confidential-containers) and vulnerability scanning capabilities.
- **Security**: End-to-end data integrity check; supply chain attacks can be detected and avoided at runtime.
## Key features
- **On-demand Load**: Container images/packages are downloaded on-demand in chunk units to boost startup.
- **Chunk Deduplication**: Chunk-level data de-duplication across layers or images to reduce storage, transport, and memory cost.
- **Compatible with Ecosystem**: Storage backend support with Registry, OSS, NAS, Shared Disk, and [P2P service](https://d7y.io/). Compatible with the [OCI images](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-zran.md), and provides native [eStargz images](https://github.com/containerd/stargz-snapshotter) support.
- **Data Analyzability**: Records accesses, data layout optimization, prefetch, I/O amplification, abnormal behavior detection.
- **POSIX Compatibility**: In-kernel EROFS or FUSE filesystems together with overlayfs provide full POSIX compatibility.
- **I/O Optimization**: Uses a merged filesystem tree, data prefetching, and user I/O amplification to reduce read latency and improve user I/O performance.
## Ecosystem
### Nydus tools
| Tool | Description |
| ---------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [nydusd](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusd.md) | Nydus user-space daemon, it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |
| [nydus-image](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Converts a single layer of an OCI format container image into a nydus format container image, generating a meta part file and a data part file respectively |
| [nydusify](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusify.md) | It pulls an OCI image down and unpacks it, invokes `nydus-image create` to convert the image, and then pushes the converted image back to the registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`), query daemon's working status/metrics and configure it |
| [ctr-remote](https://github.com/dragonflyoss/image-service/tree/master/contrib/ctr-remote) | An enhanced `containerd` CLI tool enabling nydus support with `containerd` ctr |
| [nydusd](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusd.md) | Nydus user-space daemon, it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |
| [nydus-image](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Converts a single layer of an OCI format container image into a nydus format container image, generating a meta part file and a data part file respectively |
| [nydusify](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusify.md) | It pulls an OCI image down and unpacks it, invokes `nydus-image create` to convert the image, and then pushes the converted image back to the registry and data storage |
| [nydusctl](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`), query daemon's working status/metrics and configure it |
| [nydus-docker-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Works as a `docker` remote graph driver to control how images and containers are stored and managed |
| [nydus-overlayfs](https://github.com/dragonflyoss/image-service/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount, tweaking mount options a bit so that nydus prerequisites can be passed to VM-based runtimes |
| [nydus-overlayfs](https://github.com/dragonflyoss/nydus/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount, tweaking mount options a bit so that nydus prerequisites can be passed to VM-based runtimes |
| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve local directory as a blob backend for nydusd |
Currently Nydus supports the following platforms in the container ecosystem:
### Supported platforms
| Type | Platform | Description | Status |
| ------------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ |
| Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, GitHub GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage services | ✅ |
| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support acceleration image conversion based on kinds of accelerators like Nydus, eStargz, etc. | ✅ |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/moby/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from Dockerfile | ✅ |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/dragonfly) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from Dockerfile | ✅ |
| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run (requires nydus snapshotter) Nydus image | ✅ |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
| Runtime | [Kubernetes](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md) | Run Nydus image using CRI interface | ✅ |
| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Nydus Snapshotter, a containerd remote plugin to run Nydus image | ✅ |
| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus image with CRI-O or Podman | 🚧 |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/image-service/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run (requires nydus snapshotter) Nydus image | ✅ |
| Runtime | [KataContainers](https://github.com/kata-containers/kata-containers/blob/main/docs/design/kata-nydus-design.md) | Run Nydus image in KataContainers as a native solution | ✅ |
| Runtime | [EROFS](https://www.kernel.org/doc/html/latest/filesystems/erofs.html) | Run Nydus image directly in-kernel EROFS for even greater performance improvement | ✅ |
To try nydus image service:
1. Convert an original OCI image to nydus image and store it somewhere like Docker/Registry, NAS, Aliyun/OSS or S3. This can be directly done by `nydusify`. Normal users don't have to get involved with `nydus-image`.
2. Get `nydus-snapshotter` (`containerd-nydus-grpc`) installed locally and configured properly. Or install the `nydus-docker-graphdriver` plugin.
3. Operate containers in the usual ways, for example with `docker`, `nerdctl`, `crictl` and `ctr`.
## Build Binary
## Build
### Build Binary
```shell
# build debug binary
make
@@ -80,30 +86,36 @@ make release
make docker-static
```
## Quick Start with Kubernetes and Containerd
For more details on how to lazily start a container with `nydus-snapshotter` and nydus image on Kubernetes nodes or locally use `nerdctl` rather than CRI, please refer to [Nydus Setup](./docs/containerd-env-setup.md)
## Build Nydus Image
Build Nydus image from directory source: [Nydus Image Builder](./docs/nydus-image.md).
### Build Nydus Image
Convert OCIv1 image to Nydus image: [Nydusify](./docs/nydusify.md), [Acceld](https://github.com/goharbor/acceleration-service) or [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md#build-nydus-image-using-nerdctl-image-convert).
## Nydus Snapshotter
Build Nydus image from Dockerfile directly: [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md).
Build Nydus layer from various sources: [Nydus Image Builder](./docs/nydus-image.md).
#### Image prefetch optimization
To further reduce container startup time, a nydus image with a prefetch list can be built using the NRI plugin (containerd >=1.7): [Container Image Optimizer](https://github.com/containerd/nydus-snapshotter/blob/main/docs/optimize_nydus_image.md)
## Run
### Quick Start
For more details on how to lazily start a container with `nydus-snapshotter` and nydus image on Kubernetes nodes or locally use `nerdctl` rather than CRI, please refer to [Nydus Setup](./docs/containerd-env-setup.md)
### Run Nydus Snapshotter
Nydus-snapshotter is a non-core sub-project of containerd.
Check out its code and tutorial from [Nydus-snapshotter repository](https://github.com/containerd/nydus-snapshotter).
It works as a `containerd` remote snapshotter to help set up container rootfs with nydus images, handling the nydus image format when necessary. When running without nydus images, it is identical to containerd's built-in overlayfs snapshotter.
## Run Nydusd Daemon
### Run Nydusd Daemon
Normally, users do not need to start `nydusd` by hand. It is started by `nydus-snapshotter` when a container rootfs is prepared.
Run Nydusd Daemon to serve Nydus image: [Nydusd](./docs/nydusd.md).
## Run Nydus with in-kernel EROFS filesystem
### Run Nydus with in-kernel EROFS filesystem
In-kernel EROFS has been fully compatible with RAFS v6 image format since Linux 5.16. In other words, uncompressed RAFS v6 images can be mounted over block devices since then.
@@ -111,42 +123,39 @@ Since [Linux 5.19](https://lwn.net/Articles/896140), EROFS has added a new file-
Guide to running Nydus with fscache: [Nydus-fscache](./docs/nydus-fscache.md)
## Run Nydus with Dragonfly P2P system
### Run Nydus with Dragonfly P2P system
Nydus is deeply integrated with the [Dragonfly](https://d7y.io/) P2P system, which can greatly reduce the network latency and the single point of network pressure on the registry server. Testing in the production environment shows that using Dragonfly can reduce network latency by more than 80%. To understand the performance test data and how to configure Nydus to use Dragonfly, please refer to the [doc](https://d7y.io/docs/setup/integration/nydus).
Nydus is deeply integrated with the [Dragonfly](https://d7y.io/) P2P system, which can greatly reduce the network latency and the single point pressure on the registry server. Benchmarking results in the production environment demonstrate that using Dragonfly can reduce network latency by more than 80%. To understand the performance results and integration steps, please refer to the [nydus integration](https://d7y.io/docs/setup/integration/nydus).
## Accelerate OCI image directly with Nydus
If you want to deploy Dragonfly and Nydus at the same time through Helm, please refer to the **[Quick Start](https://github.com/dragonflyoss/helm-charts/blob/main/INSTALL.md)**.
### Run OCI image directly with Nydus
Nydus is able to generate a tiny artifact called a `nydus zran` from an existing OCI image in a short time. This artifact can be used to accelerate the container boot time without the need for a full image conversion. For more information, please see the [documentation](./docs/nydus-zran.md).
## Build Images via Harbor
### Run with Docker(Moby)
Nydus cooperates with the Harbor community to develop [acceleration-service](https://github.com/goharbor/acceleration-service), which provides a general service for Harbor to support image acceleration based on kinds of accelerators like Nydus, eStargz, etc.
Nydus provides a variety of methods to support running on Docker (Moby); please refer to [Nydus Setup for Docker(Moby) Environment](./docs/docker-env-setup.md)
## Run with Docker
### Run with macOS
An **experimental** plugin helps to start Docker containers from nydus images. For more detailed instructions, please refer to [Docker Nydus Graph Driver](https://github.com/nydusaccelerator/docker-nydus-graphdriver)
Nydus can also run with macfuse (a.k.a. osxfuse). For more details please read [nydus with macOS](./docs/nydus_with_macos.md).
## Run with macOS
Nydus can also run with macfuse (a.k.a. osxfuse). For more details please read [nydus with macOS](./docs/nydus_with_macos.md).
## Run eStargz image (with lazy pulling)
### Run eStargz image (with lazy pulling)
The containerd remote snapshotter plugin [nydus-snapshotter](https://github.com/containerd/nydus-snapshotter) can be used to run nydus images, or to run [eStargz](https://github.com/containerd/stargz-snapshotter) images directly by appending the `--enable-stargz` command line option.
In the future, `zstd::chunked` can work in this way as well.
### Run Nydus Service
To use the key features of nydus natively in your project without deliberately preparing and invoking `nydusd`, [nydus-service](./service/README.md) helps to reuse the core services of nydus.
## Documentation
Browse the documentation to learn more. Here are some topics you may be interested in:
Please visit [**Wiki**](https://github.com/dragonflyoss/nydus/wiki), or [**docs**](./docs)
- [A Nydus Tutorial for Beginners](./docs/tutorial.md)
- [Nydus Design Doc](./docs/nydus-design.md)
- Our talk on Open Infra Summit 2020: [Toward Next Generation Container Image](https://drive.google.com/file/d/1LRfLUkNxShxxWU7SKjc_50U0N9ZnGIdV/view)
- [EROFS, What Are We Doing Now For Containers?](https://static.sched.com/hosted_files/kccncosschn21/fd/EROFS_What_Are_We_Doing_Now_For_Containers.pdf)
- [The Evolution of the Nydus Image Acceleration](https://d7y.io/blog/2022/06/06/evolution-of-nydus/) \([Video](https://youtu.be/yr6CB1JN1xg)\)
- [Introduction to Nydus Image Service on In-kernel EROFS](https://static.sched.com/hosted_files/osseu2022/59/Introduction%20to%20Nydus%20Image%20Service%20on%20In-kernel%20EROFS.pdf) \([Video](https://youtu.be/2Uog-y2Gcus)\)
There is also a very nice [Devin](https://devin.ai/)-generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus).
## Community
@@ -154,7 +163,7 @@ Nydus aims to form a **vendor-neutral opensource** image distribution solution t
Questions, bug reports, technical discussion, feature requests and contributions are always welcome!
We're very pleased to hear about your use cases any time.
Feel free to reach/join us via Slack and/or Dingtalk.
Feel free to reach us via Slack or Dingtalk.
- **Slack:** [Nydus Workspace](https://join.slack.com/t/nydusimageservice/shared_invite/zt-pz4qvl4y-WIh4itPNILGhPS8JqdFm_w)
@@ -163,5 +172,3 @@ Feel free to reach/join us via Slack and/or Dingtalk.
- **Dingtalk:** [34971767](https://qr.dingtalk.com/action/joingroup?code=v1,k1,ioWGzuDZEIO10Bf+/ohz4RcQqAkW0MtOwoG1nbbMxQg=&_dt_no_comment=1&origin=11)
<img src="./misc/dingtalk.jpg" width="250" height="300"/>
- **Technical Meeting:** Every Wednesday at 06:00 UTC (Beijing, Shanghai 14:00), please see our [HackMD](https://hackmd.io/@Nydus/Bk8u2X0p9) page for more information.

api/Cargo.toml

@@ -1,29 +1,31 @@
[package]
name = "nydus-api"
version = "0.2.1"
version = "0.4.0"
description = "APIs for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[dependencies]
libc = "0.2"
log = "0.4.8"
serde_json = "1.0.53"
toml = "0.5"
thiserror = "1.0.30"
backtrace = { version = "0.3", optional = true }
dbs-uhttp = { version = "0.3.0", optional = true }
http = { version = "0.2.1", optional = true }
lazy_static = { version = "1.4.0", optional = true }
libc = "0.2"
log = "0.4.8"
mio = { version = "0.8", features = ["os-poll", "os-ext"], optional = true }
serde = { version = "1.0.110", features = ["rc", "serde_derive"] }
serde_json = "1.0.53"
toml = "0.5"
url = { version = "2.1.1", optional = true }
nydus-error = { version = "0.2", path = "../error" }
[dev-dependencies]
vmm-sys-util = { version = "0.10" }
vmm-sys-util = { version = "0.12.1" }
[features]
error-backtrace = ["backtrace"]
handler = ["dbs-uhttp", "http", "lazy_static", "mio", "url"]

File diff suppressed because it is too large

api/src/error.rs Normal file

@@ -0,0 +1,252 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fmt::Debug;
/// Display error messages with line number, file path and optional backtrace.
pub fn make_error(
err: std::io::Error,
_raw: impl Debug,
_file: &str,
_line: u32,
) -> std::io::Error {
#[cfg(feature = "error-backtrace")]
{
if let Ok(val) = std::env::var("RUST_BACKTRACE") {
if val.trim() != "0" {
error!("Stack:\n{:?}", backtrace::Backtrace::new());
error!("Error:\n\t{:?}\n\tat {}:{}", _raw, _file, _line);
return err;
}
}
error!(
"Error:\n\t{:?}\n\tat {}:{}\n\tnote: enable `RUST_BACKTRACE=1` env to display a backtrace",
_raw, _file, _line
);
}
err
}
/// Define error macro like `x!()` or `x!(err)`.
/// Note: The `x!()` macro will convert any origin error (Os, Simple, Custom) to Custom error.
macro_rules! define_error_macro {
($fn:ident, $err:expr) => {
#[macro_export]
macro_rules! $fn {
() => {
std::io::Error::new($err.kind(), format!("{}: {}:{}", $err, file!(), line!()))
};
($raw:expr) => {
$crate::error::make_error($err, &$raw, file!(), line!())
};
}
};
}
/// Define error macro for libc error codes
macro_rules! define_libc_error_macro {
($fn:ident, $code:ident) => {
define_error_macro!($fn, std::io::Error::from_raw_os_error(libc::$code));
};
}
// TODO: Add format string support
// Add more libc error macro here if necessary
define_libc_error_macro!(einval, EINVAL);
define_libc_error_macro!(enoent, ENOENT);
define_libc_error_macro!(ebadf, EBADF);
define_libc_error_macro!(eacces, EACCES);
define_libc_error_macro!(enotdir, ENOTDIR);
define_libc_error_macro!(eisdir, EISDIR);
define_libc_error_macro!(ealready, EALREADY);
define_libc_error_macro!(enosys, ENOSYS);
define_libc_error_macro!(epipe, EPIPE);
define_libc_error_macro!(eio, EIO);
/// Return EINVAL error with formatted error message.
#[macro_export]
macro_rules! bail_einval {
($($arg:tt)*) => {{
return Err(einval!(format!($($arg)*)))
}}
}
/// Return EIO error with formatted error message.
#[macro_export]
macro_rules! bail_eio {
($($arg:tt)*) => {{
return Err(eio!(format!($($arg)*)))
}}
}
// Add more custom error macro here if necessary
define_error_macro!(last_error, std::io::Error::last_os_error());
define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
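// Illustrative usage sketch (added for clarity; `open_config` is a hypothetical
// helper, not part of this crate). `einval!()` embeds the current file and line
// into the error message, while `eio!(e)` routes through `make_error`, which
// logs the context (and a backtrace when RUST_BACKTRACE is set) and returns the
// errno-style error.
fn open_config(path: &str) -> std::io::Result<std::fs::File> {
if path.is_empty() {
return Err(einval!());
}
std::fs::File::open(path).map_err(|e| eio!(e))
}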
#[cfg(test)]
mod tests {
use std::io::{Error, ErrorKind};
fn check_size(size: usize) -> std::io::Result<()> {
if size > 0x1000 {
return Err(einval!());
}
Ok(())
}
#[test]
fn test_einval() {
assert_eq!(
check_size(0x2000).unwrap_err().kind(),
std::io::Error::from_raw_os_error(libc::EINVAL).kind()
);
}
#[test]
fn test_make_error() {
let original_error = Error::new(ErrorKind::Other, "test error");
let debug_info = "debug information";
let file = "test.rs";
let line = 42;
let result_error = super::make_error(original_error, debug_info, file, line);
assert_eq!(result_error.kind(), ErrorKind::Other);
}
#[test]
fn test_libc_error_macros() {
// Test einval macro
let err = einval!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro
let err = enoent!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test ebadf macro
let err = ebadf!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
// Test eacces macro
let err = eacces!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
// Test enotdir macro
let err = enotdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
// Test eisdir macro
let err = eisdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
// Test ealready macro
let err = ealready!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
// Test enosys macro
let err = enosys!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
// Test epipe macro
let err = epipe!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
// Test eio macro
let err = eio!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_libc_error_macros_with_context() {
let test_msg = "test context";
// Test einval macro with context
let err = einval!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro with context
let err = enoent!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test eio macro with context
let err = eio!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_custom_error_macros() {
// Test last_error macro
let err = last_error!();
// We can't predict the exact error, but we can check it's a valid error
assert!(!err.to_string().is_empty());
// Test eother macro
let err = eother!();
assert_eq!(err.kind(), ErrorKind::Other);
// Test eother macro with context
let err = eother!("custom context");
assert_eq!(err.kind(), ErrorKind::Other);
}
fn test_bail_einval_function() -> std::io::Result<()> {
bail_einval!("test error message");
}
fn test_bail_eio_function() -> std::io::Result<()> {
bail_eio!("test error message");
}
#[test]
fn test_bail_macros() {
// Test bail_einval macro
let result = test_bail_einval_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio macro
let result = test_bail_eio_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
}
#[test]
fn test_bail_macros_with_formatting() {
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
if code == 1 {
bail_einval!("error code: {}", code);
} else if code == 2 {
bail_eio!("I/O error with code: {}", code);
}
Ok(())
}
// Test bail_einval with formatting
let result = test_bail_with_format(1);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio with formatting
let result = test_bail_with_format(2);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test success case
let result = test_bail_with_format(3);
assert!(result.is_ok());
}
}

api/src/http.rs

@@ -4,15 +4,23 @@
//
// SPDX-License-Identifier: Apache-2.0
use std::convert::TryInto;
use std::io;
use std::sync::mpsc::{RecvError, SendError};
use nydus_error::error::MetricsError;
use serde::Deserialize;
use serde_json::Error as SerdeError;
use thiserror::Error;
use crate::{BlobCacheEntryConfig, BlobCacheEntryConfigV2};
use crate::BlobCacheEntry;
/// Errors related to Metrics.
#[derive(Error, Debug)]
pub enum MetricsError {
#[error("no counter found for the metric")]
NoCounter,
#[error("failed to serialize metric: {0:?}")]
Serialize(#[source] SerdeError),
}
/// Mount a filesystem.
#[derive(Clone, Deserialize, Debug)]
@@ -43,56 +51,6 @@ pub struct DaemonConf {
pub log_level: String,
}
/// Blob cache object type for nydus/rafs bootstrap blob.
pub const BLOB_CACHE_TYPE_META_BLOB: &str = "bootstrap";
/// Blob cache object type for nydus/rafs data blob.
pub const BLOB_CACHE_TYPE_DATA_BLOB: &str = "datablob";
/// Configuration information for a cached blob.
#[derive(Debug, Deserialize, Serialize)]
pub struct BlobCacheEntry {
/// Type of blob object, bootstrap or data blob.
#[serde(rename = "type")]
pub blob_type: String,
/// Blob id.
#[serde(rename = "id")]
pub blob_id: String,
/// Configuration information to generate blob cache object.
#[serde(default, rename = "config")]
pub(crate) blob_config_legacy: Option<BlobCacheEntryConfig>,
/// Configuration information to generate blob cache object.
#[serde(default, rename = "config_v2")]
pub blob_config: Option<BlobCacheEntryConfigV2>,
/// Domain id for the blob, which is used to group cached blobs into management domains.
#[serde(default)]
pub domain_id: String,
}
impl BlobCacheEntry {
pub fn prepare_configuration_info(&mut self) -> bool {
if self.blob_config.is_none() {
if let Some(legacy) = self.blob_config_legacy.as_ref() {
match legacy.try_into() {
Err(_) => return false,
Ok(v) => self.blob_config = Some(v),
}
}
}
match self.blob_config.as_ref() {
None => false,
Some(cfg) => cfg.cache.validate() && cfg.backend.validate(),
}
}
}
/// Configuration information for a list of cached blob objects.
#[derive(Debug, Default, Deserialize, Serialize)]
pub struct BlobCacheList {
/// List of blob configuration information.
pub blobs: Vec<BlobCacheEntry>,
}
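// Illustrative sketch (added for clarity; `parse_entry` is a hypothetical helper,
// not part of this crate). A JSON document matching the serde renames above
// deserializes into `BlobCacheEntry`; only "type" and "id" are required since
// the remaining fields carry `#[serde(default)]`.
fn parse_entry() -> serde_json::Result<BlobCacheEntry> {
serde_json::from_str(
r#"{ "type": "bootstrap", "id": "blob-1", "domain_id": "domain-1" }"#,
)
}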
/// Identifier for cached blob objects.
///
/// Domains are used to control the blob sharing scope. All blobs associated with the same domain
@@ -174,7 +132,7 @@ pub enum DaemonErrorKind {
/// Unexpected event type.
UnexpectedEvent(String),
/// Can't upgrade the daemon.
UpgradeManager,
UpgradeManager(String),
/// Unsupported requests.
Unsupported,
}
@@ -188,25 +146,25 @@ pub enum MetricsErrorKind {
Stats(MetricsError),
}
#[derive(Debug)]
#[derive(Error, Debug)]
#[allow(clippy::large_enum_variant)]
pub enum ApiError {
/// Daemon internal error
#[error("daemon internal error: {0:?}")]
DaemonAbnormal(DaemonErrorKind),
/// Failed to get events information
#[error("daemon events error: {0}")]
Events(String),
/// Failed to get metrics information
#[error("metrics error: {0:?}")]
Metrics(MetricsErrorKind),
/// Failed to mount filesystem
#[error("failed to mount filesystem: {0:?}")]
MountFilesystem(DaemonErrorKind),
/// Failed to send request to the API service
RequestSend(SendError<Option<ApiRequest>>),
/// Unrecognized payload content
#[error("failed to send request to the API service: {0:?}")]
RequestSend(#[from] SendError<Option<ApiRequest>>),
#[error("failed to parse response payload type")]
ResponsePayloadType,
/// Failed to receive response from the API service
ResponseRecv(RecvError),
/// Failed to send wakeup notification
Wakeup(io::Error),
#[error("failed to receive response from the API service: {0:?}")]
ResponseRecv(#[from] RecvError),
#[error("failed to wake up the daemon: {0:?}")]
Wakeup(#[source] io::Error),
}
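// Illustrative sketch (added for clarity; `send_request` is a hypothetical
// helper, not part of this crate). The `#[from]` attributes make thiserror
// generate `From` impls, so `?` converts the channel errors into `ApiError`
// automatically.
fn send_request(
tx: &std::sync::mpsc::Sender<Option<ApiRequest>>,
req: ApiRequest,
) -> std::result::Result<(), ApiError> {
tx.send(Some(req))?; // SendError<Option<ApiRequest>> -> ApiError::RequestSend
Ok(())
}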
/// Specialized `std::result::Result` for API replies.


@@ -140,7 +140,7 @@ impl EndpointHandler for MetricsFsFilesHandler {
(Method::Get, None) => {
let id = extract_query_part(req, "id");
let latest_read_files = extract_query_part(req, "latest")
.map_or(false, |b| b.parse::<bool>().unwrap_or(false));
.is_some_and(|b| b.parse::<bool>().unwrap_or(false));
let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files));
Ok(convert_to_response(r, HttpError::FsFilesMetrics))
}


@@ -12,12 +12,12 @@ use dbs_uhttp::{Body, HttpServer, MediaType, Request, Response, ServerError, Sta
use http::uri::Uri;
use mio::unix::SourceFd;
use mio::{Events, Interest, Poll, Token, Waker};
use nydus_error::error::MetricsError;
use serde::Deserialize;
use url::Url;
use crate::http::{
ApiError, ApiRequest, ApiResponse, DaemonErrorKind, ErrorMessage, HttpError, MetricsErrorKind,
ApiError, ApiRequest, ApiResponse, DaemonErrorKind, ErrorMessage, HttpError, MetricsError,
MetricsErrorKind,
};
use crate::http_endpoint_common::{
EventsHandler, ExitHandler, MetricsBackendHandler, MetricsBlobcacheHandler, MountHandler,
@@ -43,9 +43,8 @@ pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
// right now, the way below makes it easy to obtain query parts from the uri.
let http_prefix = format!("http:{}", req.uri().get_abs_path());
let url = Url::parse(&http_prefix)
.map_err(|e| {
.inspect_err(|e| {
error!("api: can't parse request {:?}", e);
e
})
.ok()?;
@@ -326,35 +325,30 @@ mod tests {
#[test]
fn test_http_api_routes_v1() {
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/events").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/backend").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/start").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/exit").is_some());
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/events"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/start"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/exit"));
assert!(HTTP_ROUTES
.routes
.get("/api/v1/daemon/fuse/sendfd")
.is_some());
.contains_key("/api/v1/daemon/fuse/sendfd"));
assert!(HTTP_ROUTES
.routes
.get("/api/v1/daemon/fuse/takeover")
.is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/mount").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/files").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/pattern").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/backend").is_some());
assert!(HTTP_ROUTES
.routes
.get("/api/v1/metrics/blobcache")
.is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/inflight").is_some());
.contains_key("/api/v1/daemon/fuse/takeover"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/mount"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/files"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/pattern"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/blobcache"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/inflight"));
}
#[test]
fn test_http_api_routes_v2() {
assert!(HTTP_ROUTES.routes.get("/api/v2/daemon").is_some());
assert!(HTTP_ROUTES.routes.get("/api/v2/blobs").is_some());
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/blobs"));
}
#[test]

api/src/lib.rs

@@ -14,11 +14,11 @@ extern crate serde;
#[cfg(feature = "handler")]
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate nydus_error;
pub mod config;
pub use config::*;
#[macro_use]
pub mod error;
pub mod http;
pub use self::http::*;


@@ -1,14 +0,0 @@
# Changelog
## [Unreleased]
### Added
### Fixed
### Deprecated
## [v0.1.0]
### Added
- Initial release


@@ -1 +0,0 @@
* @bergwolf @imeoer @jiangliu

app/Cargo.toml

@@ -1,24 +0,0 @@
[package]
name = "nydus-app"
version = "0.3.2"
authors = ["The Nydus Developers"]
description = "Application framework for Nydus Image Service"
readme = "README.md"
repository = "https://github.com/dragonflyoss/image-service"
license = "Apache-2.0 OR BSD-3-Clause"
edition = "2018"
build = "build.rs"
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dependencies]
regex = "1.5.5"
flexi_logger = { version = "0.25", features = ["compress"] }
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive"] }
log-panics = { version = "2.1.0", features = ["with-backtrace"] }
nydus-error = { version = "0.2", path = "../error" }


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -1,57 +0,0 @@
# nydus-app
The `nydus-app` crate is a collection of utilities to help create applications for the [`Nydus Image Service`](https://github.com/dragonflyoss/image-service) project. It provides:
- `struct BuildTimeInfo`: application build and version information.
- `fn dump_program_info()`: dump program build and version information.
- `fn setup_logging()`: set up the logging infrastructure for an application.
## Support
**Platforms**:
- x86_64
- aarch64
**Operating Systems**:
- Linux
## Usage
Add `nydus-app` as a dependency in `Cargo.toml`:
```toml
[dependencies]
nydus-app = "*"
```
Then add `extern crate nydus_app;` to your crate root if needed.
## Examples
- Set up the application infrastructure:
```rust
#[macro_use(crate_authors, crate_version)]
extern crate clap;

use clap::{App, Arg};
use std::io::Result;
use nydus_app::{setup_logging, BuildTimeInfo};

fn main() -> Result<()> {
    let (bti_string, build_info) = BuildTimeInfo::dump();
    // Register the `log-level` option so `value_of("log-level")` below succeeds.
    let cmd = App::new("")
        .version(bti_string.as_str())
        .author(crate_authors!())
        .arg(
            Arg::with_name("log-level")
                .long("log-level")
                .default_value("info"),
        )
        .get_matches();
    let level = cmd.value_of("log-level").unwrap().parse().unwrap();

    setup_logging(None, level)?;
    print!("{}", build_info);

    Ok(())
}
```
## License
This code is licensed under [Apache-2.0](LICENSE).

View File

@ -1,34 +0,0 @@
[package]
name = "nydus-blobfs"
version = "0.2.0"
description = "Blob object file system for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
[dependencies]
fuse-backend-rs = "0.10"
libc = "0.2"
log = "0.4.8"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
vm-memory = { version = "0.9" }
nydus-error = { version = "0.2", path = "../error" }
nydus-api = { version = "0.2", path = "../api" }
nydus-rafs = { version = "0.2", path = "../rafs" }
nydus-storage = { version = "0.6", path = "../storage", features = [
"backend-localfs",
] }
[dev-dependencies]
nydus-app = { version = "0.3", path = "../app" }
[features]
virtiofs = ["fuse-backend-rs/virtiofs", "nydus-rafs/virtio-fs"]
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "x86_64-apple-darwin"]

View File

@ -1,510 +0,0 @@
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//! Fuse blob passthrough file system, mirroring an existing FS hierarchy.
//!
//! This file system mirrors the existing file system hierarchy of the system, starting at the
//! root file system. This is implemented by just "passing through" all requests to the
//! corresponding underlying file system.
//!
//! The code is derived from the
//! [CrosVM](https://chromium.googlesource.com/chromiumos/platform/crosvm/) project,
//! with heavy modification/enhancements from Alibaba Cloud OS team.
#[macro_use]
extern crate log;
use fuse_backend_rs::{
api::{filesystem::*, BackendFileSystem, VFS_MAX_INO},
passthrough::Config as PassthroughConfig,
passthrough::PassthroughFs,
};
use nydus_api::ConfigV2;
use nydus_error::einval;
use nydus_rafs::fs::Rafs;
use serde::Deserialize;
use std::any::Any;
#[cfg(feature = "virtiofs")]
use std::ffi::CStr;
use std::ffi::CString;
use std::fs::create_dir_all;
#[cfg(feature = "virtiofs")]
use std::fs::File;
use std::io;
#[cfg(feature = "virtiofs")]
use std::mem::MaybeUninit;
#[cfg(feature = "virtiofs")]
use std::os::unix::ffi::OsStrExt;
#[cfg(feature = "virtiofs")]
use std::os::unix::io::{AsRawFd, FromRawFd};
use std::path::Path;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use std::thread;
#[cfg(feature = "virtiofs")]
use nydus_storage::device::BlobPrefetchRequest;
use vm_memory::ByteValued;
mod sync_io;
#[cfg(feature = "virtiofs")]
const EMPTY_CSTR: &[u8] = b"\0";
type Inode = u64;
type Handle = u64;
#[repr(C, packed)]
#[derive(Clone, Copy, Debug, Default)]
struct LinuxDirent64 {
d_ino: libc::ino64_t,
d_off: libc::off64_t,
d_reclen: libc::c_ushort,
d_ty: libc::c_uchar,
}
unsafe impl ByteValued for LinuxDirent64 {}
/// Options that configure on-demand blob loading for blobfs.
#[derive(Clone, Default, Deserialize)]
pub struct BlobOndemandConfig {
/// The RAFS configuration used to set up the RAFS device for
/// on-demand reads.
pub rafs_conf: ConfigV2,
/// The path of the bootstrap file of a container image (for RAFS in
/// kernel).
///
/// The default is an empty string.
#[serde(default)]
pub bootstrap_path: String,
/// The path of blob cache directory.
#[serde(default)]
pub blob_cache_dir: String,
}
impl FromStr for BlobOndemandConfig {
type Err = io::Error;
fn from_str(s: &str) -> io::Result<BlobOndemandConfig> {
serde_json::from_str(s).map_err(|e| einval!(e))
}
}
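// Illustrative example (not part of the original source): a minimal JSON
// value accepted by `BlobOndemandConfig::from_str`, assuming the `rafs_conf`
// subtree follows the `ConfigV2` schema and the paths are placeholders:
//
//     {
//         "rafs_conf": { "version": 2 },
//         "bootstrap_path": "/path/to/bootstrap",
//         "blob_cache_dir": "/path/to/blobcache"
//     }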
/// Options that configure the behavior of the blobfs fuse file system.
#[derive(Default, Debug, Clone, PartialEq)]
pub struct Config {
/// The blobfs configuration embeds a passthrough configuration.
pub ps_config: PassthroughConfig,
/// On-demand blob management configuration, serialized as a JSON string.
pub blob_ondemand_cfg: String,
}
#[allow(dead_code)]
struct RafsHandle {
rafs: Arc<Mutex<Option<Rafs>>>,
handle: Arc<Mutex<Option<thread::JoinHandle<Option<Rafs>>>>>,
}
#[allow(dead_code)]
struct BootstrapArgs {
rafs_handle: RafsHandle,
blob_cache_dir: String,
}
// Safe to Send/Sync because the underlying data structures are readonly
unsafe impl Sync for BootstrapArgs {}
unsafe impl Send for BootstrapArgs {}
#[cfg(feature = "virtiofs")]
impl BootstrapArgs {
fn get_rafs_handle(&self) -> io::Result<()> {
let mut c = self.rafs_handle.rafs.lock().unwrap();
match (*self.rafs_handle.handle.lock().unwrap()).take() {
Some(handle) => {
let rafs = handle.join().unwrap().ok_or_else(|| {
error!("blobfs: get rafs failed.");
einval!("create rafs failed in thread.")
})?;
debug!("blobfs: async create Rafs finish!");
*c = Some(rafs);
Ok(())
}
None => Err(einval!("create rafs failed in thread.")),
}
}
fn fetch_range_sync(&self, prefetches: &[BlobPrefetchRequest]) -> io::Result<()> {
let c = self.rafs_handle.rafs.lock().unwrap();
match &*c {
Some(rafs) => rafs.fetch_range_synchronous(prefetches),
None => Err(einval!("create rafs failed in thread.")),
}
}
}
/// A file system that simply "passes through" all requests it receives to the underlying file
/// system.
///
/// To keep the implementation simple it serves the contents of its root directory. Users
/// that wish to serve only a specific directory should set up the environment so that the
/// directory ends up as the root of the file system process. One way to accomplish this is via a
/// combination of mount namespaces and the pivot_root system call.
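// Illustrative setup sketch (not part of the original source): one way to
// make a subdirectory the root of the file system process, assuming a root
// shell, util-linux, a hypothetical share at /srv/share, and a hypothetical
// daemon at /path/to/fs-daemon:
//
//     unshare --mount --propagation private sh -c '
//         mount --bind /srv/share /srv/share   # make it a mount point
//         cd /srv/share && mkdir -p old_root
//         pivot_root . old_root                # swap in the new root
//         umount -l /old_root
//         exec /path/to/fs-daemon'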
pub struct BlobFs {
pfs: PassthroughFs,
#[allow(dead_code)]
bootstrap_args: BootstrapArgs,
}
impl BlobFs {
fn ensure_path_exist(path: &Path) -> io::Result<()> {
if path.as_os_str().is_empty() {
return Err(einval!("path is empty"));
}
if !path.exists() {
create_dir_all(path).map_err(|e| {
error!(
"create dir error. directory is {:?}. {}:{}",
path,
file!(),
line!()
);
e
})?;
}
Ok(())
}
/// Create a Blob file system instance.
pub fn new(cfg: Config) -> io::Result<BlobFs> {
trace!("BlobFs config is: {:?}", cfg);
let bootstrap_args = Self::load_bootstrap(&cfg)?;
let pfs = PassthroughFs::new(cfg.ps_config)?;
Ok(BlobFs {
pfs,
bootstrap_args,
})
}
fn load_bootstrap(cfg: &Config) -> io::Result<BootstrapArgs> {
let blob_ondemand_conf = BlobOndemandConfig::from_str(&cfg.blob_ondemand_cfg)?;
if !blob_ondemand_conf.rafs_conf.validate() {
return Err(einval!("invlidate configuration for blobfs"));
}
let rafs_cfg = blob_ondemand_conf.rafs_conf.get_rafs_config()?;
if rafs_cfg.mode != "direct" {
return Err(einval!("blobfs only supports RAFS 'direct' mode"));
}
// check if blob cache dir exists.
let path = Path::new(blob_ondemand_conf.blob_cache_dir.as_str());
Self::ensure_path_exist(path).map_err(|e| {
error!("blob_cache_dir not exist");
e
})?;
let path = Path::new(blob_ondemand_conf.bootstrap_path.as_str());
if !path.exists() || blob_ondemand_conf.bootstrap_path.is_empty() {
return Err(einval!("no valid bootstrap"));
}
let bootstrap_path = blob_ondemand_conf.bootstrap_path.clone();
let config = Arc::new(blob_ondemand_conf.rafs_conf.clone());
trace!("blobfs: async create Rafs start!");
let rafs_join_handle = std::thread::spawn(move || {
let (mut rafs, reader) = match Rafs::new(&config, "blobfs", Path::new(&bootstrap_path))
{
Ok(rafs) => rafs,
Err(e) => {
error!("blobfs: new rafs failed {:?}.", e);
return None;
}
};
match rafs.import(reader, None) {
Ok(_) => {}
Err(e) => {
error!("blobfs: new rafs failed {:?}.", e);
return None;
}
}
Some(rafs)
});
let rafs_handle = RafsHandle {
rafs: Arc::new(Mutex::new(None)),
handle: Arc::new(Mutex::new(Some(rafs_join_handle))),
};
Ok(BootstrapArgs {
rafs_handle,
blob_cache_dir: blob_ondemand_conf.blob_cache_dir,
})
}
#[cfg(feature = "virtiofs")]
fn stat(f: &File) -> io::Result<libc::stat64> {
// Safe because this is a constant value and a valid C string.
let pathname = unsafe { CStr::from_bytes_with_nul_unchecked(EMPTY_CSTR) };
let mut st = MaybeUninit::<libc::stat64>::zeroed();
// Safe because the kernel will only write data in `st` and we check the return value.
let res = unsafe {
libc::fstatat64(
f.as_raw_fd(),
pathname.as_ptr(),
st.as_mut_ptr(),
libc::AT_EMPTY_PATH | libc::AT_SYMLINK_NOFOLLOW,
)
};
if res >= 0 {
// Safe because the kernel guarantees that the struct is now fully initialized.
Ok(unsafe { st.assume_init() })
} else {
Err(io::Error::last_os_error())
}
}
/// Initialize the PassthroughFs
pub fn import(&self) -> io::Result<()> {
self.pfs.import()
}
#[cfg(feature = "virtiofs")]
fn open_file(dfd: i32, pathname: &Path, flags: i32, mode: u32) -> io::Result<File> {
let pathname = CString::new(pathname.as_os_str().as_bytes())
.map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
let fd = if flags & libc::O_CREAT == libc::O_CREAT {
unsafe { libc::openat(dfd, pathname.as_ptr(), flags, mode) }
} else {
unsafe { libc::openat(dfd, pathname.as_ptr(), flags) }
};
if fd < 0 {
return Err(io::Error::last_os_error());
}
// Safe because we just opened this fd.
Ok(unsafe { File::from_raw_fd(fd) })
}
}
impl BackendFileSystem for BlobFs {
fn mount(&self) -> io::Result<(Entry, u64)> {
let ctx = &Context::default();
let entry = self.lookup(ctx, ROOT_ID, &CString::new(".").unwrap())?;
Ok((entry, VFS_MAX_INO))
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test2)]
mod tests {
use super::*;
use fuse_backend_rs::abi::virtio_fs;
use fuse_backend_rs::transport::FsCacheReqHandler;
use nydus_app::setup_logging;
use std::os::unix::prelude::RawFd;
struct DummyCacheReq {}
impl FsCacheReqHandler for DummyCacheReq {
fn map(
&mut self,
_foffset: u64,
_moffset: u64,
_len: u64,
_flags: u64,
_fd: RawFd,
) -> io::Result<()> {
Ok(())
}
fn unmap(&mut self, _requests: Vec<virtio_fs::RemovemappingOne>) -> io::Result<()> {
Ok(())
}
}
// #[test]
// #[cfg(feature = "virtiofs")]
// fn test_blobfs_new() {
// setup_logging(None, log::LevelFilter::Trace, 0).unwrap();
// let config = r#"
// {
// "device": {
// "backend": {
// "type": "localfs",
// "config": {
// "dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/test4k"
// }
// },
// "cache": {
// "type": "blobcache",
// "compressed": false,
// "config": {
// "work_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
// }
// }
// },
// "mode": "direct",
// "digest_validate": true,
// "enable_xattr": false,
// "fs_prefetch": {
// "enable": false,
// "threads_count": 10,
// "merging_size": 131072,
// "bandwidth_rate": 10485760
// }
// }"#;
// // let rafs_conf = RafsConfig::from_str(config).unwrap();
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// // blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache1".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// // bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-foo".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_ok());
// }
#[test]
fn test_blobfs_setupmapping() {
setup_logging(None, log::LevelFilter::Trace, 0).unwrap();
let config = r#"
{
"rafs_conf": {
"device": {
"backend": {
"type": "localfs",
"config": {
"blob_file": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/nydus-rs/myblob1/v6/blob-btrfs"
}
},
"cache": {
"type": "blobcache",
"compressed": false,
"config": {
"work_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
}
}
},
"mode": "direct",
"digest_validate": false,
"enable_xattr": false,
"fs_prefetch": {
"enable": false,
"threads_count": 10,
"merging_size": 131072,
"bandwidth_rate": 10485760
}
},
"bootstrap_path": "nydus-rs/myblob1/v6/bootstrap-btrfs",
"blob_cache_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
}"#;
// let rafs_conf = RafsConfig::from_str(config).unwrap();
let ps_config = PassthroughConfig {
root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
.to_string(),
do_import: false,
no_open: true,
..Default::default()
};
let fs_cfg = Config {
ps_config,
blob_ondemand_cfg: config.to_string(),
};
let fs = BlobFs::new(fs_cfg).unwrap();
fs.import().unwrap();
fs.mount().unwrap();
let ctx = &Context::default();
// read bootstrap first, should return err as it's not in blobcache dir.
// let bootstrap = CString::new("foo").unwrap();
// let entry = fs.lookup(ctx, ROOT_ID, &bootstrap).unwrap();
// let mut req = DummyCacheReq {};
// fs.setupmapping(ctx, entry.inode, 0, 0, 4096, 0, 0, &mut req)
// .unwrap();
// FIXME: use a real blob id under test4k.
let blob_cache_dir = CString::new("blobcache").unwrap();
let parent_entry = fs.lookup(ctx, ROOT_ID, &blob_cache_dir).unwrap();
let blob_id = CString::new("80da976ee69d68af6bb9170395f71b4ef1e235e815e2").unwrap();
let entry = fs.lookup(ctx, parent_entry.inode, &blob_id).unwrap();
let foffset = 0;
let len = 1 << 21;
let mut req = DummyCacheReq {};
fs.setupmapping(ctx, entry.inode, 0, foffset, len, 0, 0, &mut req)
.unwrap();
// FIXME: release fs
fs.destroy();
}
}

35
builder/Cargo.toml Normal file
View File

@ -0,0 +1,35 @@
[package]
name = "nydus-builder"
version = "0.2.0"
description = "Nydus Image Builder"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[dependencies]
anyhow = "1.0.35"
base64 = "0.21"
hex = "0.4.3"
indexmap = "2"
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = "0.10.2"
tar = "0.4.40"
vmm-sys-util = "0.12.1"
xattr = "1.0.1"
parse-size = "1.1.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.5.0", path = "../utils" }
gix-attributes = "0.25.0"
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "aarch64-unknown-linux-gnu", "aarch64-apple-darwin"]

189
builder/src/attributes.rs Normal file
View File

@ -0,0 +1,189 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::{fs, path};
use anyhow::Result;
use gix_attributes::parse;
use gix_attributes::parse::Kind;
const KEY_TYPE: &str = "type";
const KEY_CRCS: &str = "crcs";
const VAL_EXTERNAL: &str = "external";
pub struct Parser {}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Item {
pub pattern: PathBuf,
pub attributes: HashMap<String, String>,
}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Attributes {
pub items: HashMap<PathBuf, HashMap<String, String>>,
pub crcs: HashMap<PathBuf, Vec<u32>>,
}
impl Attributes {
/// Parse nydus attributes from a file.
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
let content = fs::read(path)?;
let _items = parse(&content);
let mut items = HashMap::new();
let mut crcs = HashMap::new();
for _item in _items {
let _item = _item?;
if let Kind::Pattern(pattern) = _item.0 {
let mut path = PathBuf::from(pattern.text.to_string());
if !path.is_absolute() {
path = path::Path::new("/").join(path);
}
let mut current_path = path.clone();
let mut attributes = HashMap::new();
let mut _type = String::new();
let mut _crcs = vec![];
for line in _item.1 {
let line = line?;
let name = line.name.as_str();
let state = line.state.as_bstr().unwrap_or_default();
if name == KEY_TYPE {
_type = state.to_string();
}
if name == KEY_CRCS {
_crcs = state
.to_string()
.split(',')
.map(|s| {
let trimmed = s.trim();
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
stripped
} else {
trimmed
};
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
})
.collect::<Result<Vec<u32>, _>>()?;
}
attributes.insert(name.to_string(), state.to_string());
}
crcs.insert(path.clone(), _crcs);
items.insert(path, attributes);
// process parent directory
while let Some(parent) = current_path.parent() {
if parent == Path::new("/") {
break;
}
let mut attributes = HashMap::new();
if !items.contains_key(parent) {
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
items.insert(parent.to_path_buf(), attributes);
}
current_path = parent.to_path_buf();
}
}
}
Ok(Attributes { items, crcs })
}
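// Illustrative example (mirroring the unit test below): parsing a file that
// contains the line
//
//     /models/foo/bar type=external
//
// yields `items["/models/foo/bar"] = {type: "external"}`, plus implicit
// `{type: "external"}` entries for the parent directories "/models/foo"
// and "/models".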
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
}
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
if let Some(attributes) = self.items.get(path.as_ref()) {
return self.check_external(attributes);
}
false
}
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
self.items
.iter()
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
}
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
if let Some(attributes) = self.items.get(path.as_ref()) {
return attributes.get(key.as_ref()).map(|s| s.to_string());
}
None
}
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
self.items.get(path.as_ref())
}
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
self.crcs.get(path.as_ref())
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashMap, fs, path::PathBuf};
use super::{Attributes, Item};
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_attribute_parse() {
let file = TempFile::new().unwrap();
fs::write(
file.as_path(),
"/foo type=external crcs=0x1234,0x5678
/bar type=external crcs=0x1234,0x5678
/models/foo/bar type=external",
)
.unwrap();
let attributes = Attributes::from(file.as_path()).unwrap();
let _attributes_base: HashMap<String, String> =
[("type".to_string(), "external".to_string())]
.iter()
.cloned()
.collect();
let _attributes: HashMap<String, String> = [
("type".to_string(), "external".to_string()),
("crcs".to_string(), "0x1234,0x5678".to_string()),
]
.iter()
.cloned()
.collect();
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
Item {
pattern: PathBuf::from("/foo"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/bar"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/models"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo/bar"),
attributes: _attributes_base.clone(),
},
]
.into_iter()
.map(|item| (item.pattern, item.attributes))
.collect();
assert_eq!(attributes.items, items_map);
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
}
}

View File

@ -0,0 +1,283 @@
// Copyright (C) 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate Chunkdict RAFS bootstrap.
//! -------------------------------------------------------------------------------------------------
//! Bug 1: Inconsistent Chunk Size Leading to a Blob Smaller Than 4K (v6_block_size)
//! Description: Chunk sizes are not consistent, so a blob composed of a group of such chunks
//! may end up smaller than 4K (v6_block_size) in size and therefore fail the size check.
//! -------------------------------------------------------------------------------------------------
//! Bug 2: Incorrect Chunk Number Calculation Due to Premature Check Logic
//! Description: The chunk number is currently calculated as `size / chunk_size`. However, this
//! calculation runs before the validation step that accounts for chunk statistics, so the
//! resulting chunk counts can be inaccurate.
use super::core::node::{ChunkSource, NodeInfo};
use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree};
use crate::core::node::Node;
use crate::NodeChunk;
use crate::OsString;
use anyhow::{Ok, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress::Algorithm;
use nydus_utils::digest::RafsDigest;
use std::mem::size_of;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ChunkdictChunkInfo {
pub image_reference: String,
pub version: String,
pub chunk_blob_id: String,
pub chunk_digest: String,
pub chunk_crc32: u32,
pub chunk_compressed_size: u32,
pub chunk_uncompressed_size: u32,
pub chunk_compressed_offset: u64,
pub chunk_uncompressed_offset: u64,
}
pub struct ChunkdictBlobInfo {
pub blob_id: String,
pub blob_compressed_size: u64,
pub blob_uncompressed_size: u64,
pub blob_compressor: String,
pub blob_meta_ci_compressed_size: u64,
pub blob_meta_ci_uncompressed_size: u64,
pub blob_meta_ci_offset: u64,
}
/// Struct to generate chunkdict RAFS bootstrap.
pub struct Generator {}
impl Generator {
// Generate chunkdict RAFS bootstrap.
pub fn generate(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
chunkdict_chunks_origin: Vec<ChunkdictChunkInfo>,
chunkdict_blobs: Vec<ChunkdictBlobInfo>,
) -> Result<BuildOutput> {
// Validate and remove chunks whose owning blob's total uncompressed size is smaller than a block.
let mut chunkdict_chunks = chunkdict_chunks_origin.to_vec();
Self::validate_and_remove_chunks(ctx, &mut chunkdict_chunks);
// Build root tree.
let mut tree = Self::build_root_tree(ctx)?;
// Build child tree.
let child = Self::build_child_tree(ctx, blob_mgr, &chunkdict_chunks, &chunkdict_blobs)?;
let result = vec![child];
tree.children = result;
Self::validate_tree(&tree)?;
// Build bootstrap.
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
/// Validate tree.
fn validate_tree(tree: &Tree) -> Result<()> {
let pre = &mut |t: &Tree| -> Result<()> {
let node = t.borrow_mut_node();
debug!("chunkdict tree: ");
debug!("inode: {}", node);
for chunk in &node.chunks {
debug!("\t chunk: {}", chunk);
}
Ok(())
};
tree.walk_dfs_pre(pre)?;
debug!("chunkdict tree is valid.");
Ok(())
}
/// Validates and removes chunks with a total uncompressed size smaller than the block size limit.
fn validate_and_remove_chunks(ctx: &mut BuildContext, chunkdict: &mut Vec<ChunkdictChunkInfo>) {
let mut chunk_sizes = std::collections::HashMap::new();
// Accumulate the uncompressed size for each chunk_blob_id.
for chunk in chunkdict.iter() {
*chunk_sizes.entry(chunk.chunk_blob_id.clone()).or_insert(0) +=
chunk.chunk_uncompressed_size as u64;
}
// Find all chunk_blob_ids whose total uncompressed size is smaller than v6_block_size.
let small_chunks: Vec<String> = chunk_sizes
.into_iter()
.filter(|&(_, size)| size < ctx.v6_block_size())
.inspect(|(id, _)| {
eprintln!(
"Warning: Blob with id '{}' is smaller than {} bytes.",
id,
ctx.v6_block_size()
)
})
.map(|(id, _)| id)
.collect();
// Retain only chunks whose chunk_blob_id has a total uncompressed size of at least v6_block_size.
chunkdict.retain(|chunk| !small_chunks.contains(&chunk.chunk_blob_id));
}
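// Worked example (illustrative): with v6_block_size() == 4096, a
// chunk_blob_id whose chunks total only 3000 uncompressed bytes triggers the
// warning above and all of its chunks are dropped from the dictionary.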
/// Build the root tree.
pub fn build_root_tree(ctx: &mut BuildContext) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(1);
inode.set_uid(1000);
inode.set_gid(1000);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFDIR as u32);
inode.set_nlink(3);
inode.set_name_size("/".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 0,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/"),
target: PathBuf::from("/"),
target_vec: vec![OsString::from("/")],
symlink: None,
xattrs: RafsXAttrs::default(),
v6_force_extended_inode: true,
};
let root_node = Node::new(inode, node_info, 0);
let tree = Tree::new(root_node);
Ok(tree)
}
/// Build the child tree.
fn build_child_tree(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(2);
inode.set_uid(0);
inode.set_gid(0);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFREG as u32);
inode.set_nlink(1);
inode.set_name_size("chunkdict".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 1,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/chunkdict"),
target: PathBuf::from("/chunkdict"),
target_vec: vec![OsString::from("/"), OsString::from("/chunkdict")],
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: true,
};
let mut node = Node::new(inode, node_info, 0);
// Insert chunks.
Self::insert_chunks(ctx, blob_mgr, &mut node, chunkdict_chunks, chunkdict_blobs)?;
let node_size: u64 = node
.chunks
.iter()
.map(|chunk| chunk.inner.uncompressed_size() as u64)
.sum();
node.inode.set_size(node_size);
// Update child count.
node.inode.set_child_count(node.chunks.len() as u32);
let child = Tree::new(node);
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
Ok(child)
}
/// Insert chunks.
fn insert_chunks(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
node: &mut Node,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<()> {
for (index, chunk_info) in chunkdict_chunks.iter().enumerate() {
let chunk_size: u32 = chunk_info.chunk_compressed_size;
let file_offset = index as u64 * chunk_size as u64;
let mut chunk = ChunkWrapper::new(ctx.fs_version);
// Update blob context.
let (blob_index, blob_ctx) =
blob_mgr.get_or_cerate_blob_for_chunkdict(ctx, &chunk_info.chunk_blob_id)?;
let chunk_uncompressed_size = chunk_info.chunk_uncompressed_size;
let pre_d_offset = blob_ctx.current_uncompressed_offset;
blob_ctx.uncompressed_blob_size = pre_d_offset + chunk_uncompressed_size as u64;
blob_ctx.current_uncompressed_offset += chunk_uncompressed_size as u64;
blob_ctx.blob_meta_header.set_ci_uncompressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
blob_ctx.blob_meta_header.set_ci_compressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
let chunkdict_blob_info = chunkdict_blobs
.iter()
.find(|blob| blob.blob_id == chunk_info.chunk_blob_id)
.unwrap();
blob_ctx.blob_compressor =
Algorithm::from_str(chunkdict_blob_info.blob_compressor.as_str())?;
blob_ctx
.blob_meta_header
.set_ci_uncompressed_size(chunkdict_blob_info.blob_meta_ci_uncompressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_size(chunkdict_blob_info.blob_meta_ci_compressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_offset(chunkdict_blob_info.blob_meta_ci_offset);
blob_ctx.blob_meta_header.set_ci_compressor(Algorithm::Zstd);
// Update chunk context.
let chunk_index = blob_ctx.alloc_chunk_index()?;
chunk.set_blob_index(blob_index);
chunk.set_index(chunk_index);
chunk.set_file_offset(file_offset);
chunk.set_compressed_size(chunk_info.chunk_compressed_size);
chunk.set_compressed_offset(chunk_info.chunk_compressed_offset);
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
chunk.set_crc32(chunk_info.chunk_crc32);
node.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk.clone()),
});
}
Ok(())
}
}

1362
builder/src/compact.rs Normal file

File diff suppressed because it is too large

View File

@ -3,39 +3,39 @@
// SPDX-License-Identifier: Apache-2.0
use std::borrow::Cow;
use std::io::Write;
use std::slice;
use anyhow::{Context, Result};
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{toc, BlobMetaChunkArray};
use nydus_utils::compress;
use nydus_utils::digest::{self, DigestHasher, RafsDigest};
use nydus_utils::{compress, crypt};
use sha2::digest::Digest;
use super::context::{ArtifactWriter, BlobContext, BlobManager, BuildContext, ConversionType};
use super::feature::Feature;
use super::layout::BlobLayout;
use super::node::Node;
use crate::core::context::Artifact;
use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature};
pub struct Blob {}
const VALID_BLOB_ID_LENGTH: usize = 64;
/// Generator for RAFS data blob.
pub(crate) struct Blob {}
impl Blob {
/// Dump blob file and generate chunks
pub fn dump(
pub(crate) fn dump(
ctx: &BuildContext,
nodes: &mut [Node],
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
match ctx.conversion_type {
ConversionType::DirectoryToRafs => {
let (inodes, prefetch_entries) =
BlobLayout::layout_blob_simple(&ctx.prefetch, nodes)?;
let mut chunk_data_buf = vec![0u8; RAFS_MAX_CHUNK_SIZE as usize];
for (idx, inode) in inodes.iter().enumerate() {
let node = &mut nodes[*inode];
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&ctx.prefetch)?;
for (idx, node) in inodes.iter().enumerate() {
let mut node = node.borrow_mut();
let size = node
.dump_node_data(ctx, blob_mgr, blob_writer, &mut chunk_data_buf)
.context("failed to dump blob chunks")?;
@ -52,7 +52,8 @@ impl Blob {
| ConversionType::EStargzToRafs => {
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToRef
ConversionType::TarToTarfs
| ConversionType::TarToRef
| ConversionType::TargzToRef
| ConversionType::EStargzToRef => {
// Use `sha256(tarball)` as `blob_id` for ref-type conversions.
@ -66,6 +67,9 @@ impl Blob {
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if ctx.conversion_type == ConversionType::TarToTarfs {
blob_ctx.uncompressed_blob_size = blob_ctx.compressed_blob_size;
}
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
@ -74,10 +78,12 @@ impl Blob {
}
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::EStargzIndexToRef => {
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToStargz
| ConversionType::DirectoryToTargz
| ConversionType::DirectoryToStargz
| ConversionType::EStargzIndexToRef
| ConversionType::TargzToStargz => {
unimplemented!()
}
@ -90,13 +96,35 @@ impl Blob {
Ok(())
}
fn finalize_blob_data(
pub fn finalize_blob_data(
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc) {
// Dump buffered batch chunk data if exists.
if let Some(ref batch) = ctx.blob_batch_generator {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let mut batch = batch.lock().unwrap();
if !batch.chunk_data_buf_is_empty() {
let (_, compressed_size, _) = Node::write_chunk_data(
&ctx,
blob_ctx,
blob_writer,
batch.chunk_data_buf(),
)?;
batch.add_context(compressed_size);
batch.clear_chunk_data_buf();
}
}
}
if !ctx.blob_features.contains(BlobFeatures::SEPARATE)
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
{
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.external {
return Ok(());
}
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BLOB_RAW,
@ -118,13 +146,35 @@ impl Blob {
}
}
// check blobs to make sure all blobs are valid.
if blob_mgr.external {
for (index, blob_ctx) in blob_mgr.get_blobs().iter().enumerate() {
if blob_ctx.blob_id.len() != VALID_BLOB_ID_LENGTH {
bail!(
"invalid blob id:{}, length:{}, index:{}",
blob_ctx.blob_id,
blob_ctx.blob_id.len(),
index
);
}
}
}
Ok(())
}
fn get_compression_algorithm_for_meta(ctx: &BuildContext) -> compress::Algorithm {
if ctx.conversion_type.is_to_ref() {
compress::Algorithm::Zstd
} else {
ctx.compressor
}
}
pub(crate) fn dump_meta_data(
ctx: &BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut ArtifactWriter,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump blob meta for v6 when it has chunks or bootstrap is to be inlined.
if !blob_ctx.blob_meta_info_enabled || blob_ctx.uncompressed_blob_size == 0 {
@ -132,35 +182,44 @@ impl Blob {
}
// Prepare blob meta information data.
let encrypt = ctx.cipher != crypt::Algorithm::None;
let cipher_obj = &blob_ctx.cipher_object;
let cipher_ctx = &blob_ctx.cipher_ctx;
let blob_meta_info = &blob_ctx.blob_meta_info;
let mut ci_data = blob_meta_info.as_byte_slice();
let mut zran_buf = Vec::new();
let mut inflate_buf = Vec::new();
let mut header = blob_ctx.blob_meta_header;
if let Some(ref zran) = ctx.blob_zran_generator {
let (zran_data, zran_count) = zran.lock().unwrap().to_vec()?;
header.set_ci_zran_count(zran_count);
let (inflate_data, inflate_count) = zran.lock().unwrap().to_vec()?;
header.set_ci_zran_count(inflate_count);
header.set_ci_zran_offset(ci_data.len() as u64);
header.set_ci_zran_size(zran_data.len() as u64);
header.set_ci_zran_size(inflate_data.len() as u64);
header.set_ci_zran(true);
header.set_separate_blob(true);
zran_buf = [ci_data, &zran_data].concat();
ci_data = &zran_buf;
inflate_buf = [ci_data, &inflate_data].concat();
ci_data = &inflate_buf;
} else if let Some(ref batch) = ctx.blob_batch_generator {
let (inflate_data, inflate_count) = batch.lock().unwrap().to_vec()?;
header.set_ci_zran_count(inflate_count);
header.set_ci_zran_offset(ci_data.len() as u64);
header.set_ci_zran_size(inflate_data.len() as u64);
header.set_ci_batch(true);
inflate_buf = [ci_data, &inflate_data].concat();
ci_data = &inflate_buf;
} else if ctx.blob_tar_reader.is_some() {
header.set_separate_blob(true);
header.set_ci_zran(false);
} else {
header.set_separate_blob(false);
header.set_ci_zran(false);
};
let mut compressor = compress::Algorithm::Zstd;
let mut compressor = Self::get_compression_algorithm_for_meta(ctx);
let (compressed_data, compressed) = compress::compress(ci_data, compressor)
.with_context(|| "failed to compress blob chunk info array".to_string())?;
if !compressed {
compressor = compress::Algorithm::None;
}
let encrypted_ci_data =
crypt::encrypt_with_context(&compressed_data, cipher_obj, cipher_ctx, encrypt)?;
let compressed_offset = blob_writer.pos()?;
let compressed_size = compressed_data.len() as u64;
let compressed_size = encrypted_ci_data.len() as u64;
let uncompressed_size = ci_data.len() as u64;
header.set_ci_compressor(compressor);
@ -168,7 +227,7 @@ impl Blob {
header.set_ci_compressed_offset(compressed_offset);
header.set_ci_compressed_size(compressed_size as u64);
header.set_ci_uncompressed_size(uncompressed_size as u64);
header.set_4k_aligned(true);
header.set_aligned(true);
match blob_meta_info {
BlobMetaChunkArray::V1(_) => header.set_chunk_info_v2(false),
BlobMetaChunkArray::V2(_) => header.set_chunk_info_v2(true),
@ -177,18 +236,23 @@ impl Blob {
header.set_inlined_chunk_digest(true);
}
let header_size = header.as_bytes().len();
blob_ctx.blob_meta_header = header;
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.write_blob_meta(ci_data, &header)?;
}
let encrypted_header =
crypt::encrypt_with_context(header.as_bytes(), cipher_obj, cipher_ctx, encrypt)?;
let header_size = encrypted_header.len();
// Write blob meta data and header
match compressed_data {
match encrypted_ci_data {
Cow::Owned(v) => blob_ctx.write_data(blob_writer, &v)?,
Cow::Borrowed(v) => {
let buf = v.to_vec();
blob_ctx.write_data(blob_writer, &buf)?;
}
}
blob_ctx.write_data(blob_writer, header.as_bytes())?;
blob_ctx.write_data(blob_writer, &encrypted_header)?;
// Write tar header for `blob.meta`.
if ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc) {
@ -199,11 +263,13 @@ impl Blob {
)?;
}
// Generate ToC entry for `blob.meta`.
// Generate ToC entry for `blob.meta` and write chunk digest array.
if ctx.features.is_enabled(Feature::BlobToc) {
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
let ci_data = if ctx.blob_features.contains(BlobFeatures::ZRAN) {
zran_buf.as_slice()
let ci_data = if ctx.blob_features.contains(BlobFeatures::BATCH)
|| ctx.blob_features.contains(BlobFeatures::ZRAN)
{
inflate_buf.as_slice()
} else {
blob_ctx.blob_meta_info.as_byte_slice()
};
@ -254,3 +320,45 @@ impl Blob {
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_default_compression_algorithm_for_meta_ci() {
let mut ctx = BuildContext::default();
//TarToRef
ctx = BuildContext {
conversion_type: ConversionType::TarToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//EStargzIndexToRef
ctx = BuildContext {
conversion_type: ConversionType::EStargzIndexToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//TargzToRef
ctx = BuildContext {
conversion_type: ConversionType::TargzToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//TarToRef
ctx = BuildContext {
conversion_type: ConversionType::TarToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
}
}

View File

@ -0,0 +1,214 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::{Context, Error, Result};
use nydus_utils::digest::{self, RafsDigest};
use std::ops::Deref;
use nydus_rafs::metadata::layout::{RafsBlobTable, RAFS_V5_ROOT_INODE};
use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig, RafsSuperFlags};
use crate::{ArtifactStorage, BlobManager, BootstrapContext, BootstrapManager, BuildContext, Tree};
/// RAFS bootstrap/meta builder.
pub struct Bootstrap {
pub(crate) tree: Tree,
}
impl Bootstrap {
/// Create a new instance of [Bootstrap].
pub fn new(tree: Tree) -> Result<Self> {
Ok(Self { tree })
}
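// Typical build flow (sketch, mirroring `Generator::generate` in
// chunkdict_generator.rs above; error handling elided):
//
//     let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
//     let mut bootstrap = Bootstrap::new(tree)?;
//     bootstrap.build(ctx, &mut bootstrap_ctx)?;
//     let blob_table = blob_mgr.to_blob_table(ctx)?;
//     let storage = &mut bootstrap_mgr.bootstrap_storage;
//     bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;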
/// Build the final view of the RAFS filesystem meta from the hierarchy `tree`.
pub fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
) -> Result<()> {
// Special handling of the root inode
let mut root_node = self.tree.borrow_mut_node();
assert!(root_node.is_dir());
let index = bootstrap_ctx.generate_next_ino();
// 0 is reserved and 1 also matches RAFS_V5_ROOT_INODE.
assert_eq!(index, RAFS_V5_ROOT_INODE);
root_node.index = index;
root_node.inode.set_ino(index);
ctx.prefetch.insert(&self.tree.node, root_node.deref());
bootstrap_ctx.inode_map.insert(
(
root_node.layer_idx,
root_node.info.src_ino,
root_node.info.src_dev,
),
vec![self.tree.node.clone()],
);
drop(root_node);
Self::build_rafs(ctx, bootstrap_ctx, &mut self.tree)?;
if ctx.fs_version.is_v6() {
let root_offset = self.tree.node.borrow().v6_offset;
Self::v6_update_dirents(&self.tree, root_offset);
}
Ok(())
}
/// Dump the RAFS filesystem meta information to meta blob.
pub fn dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_storage: &mut Option<ArtifactStorage>,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsBlobTable,
) -> Result<()> {
match blob_table {
RafsBlobTable::V5(table) => self.v5_dump(ctx, bootstrap_ctx, table)?,
RafsBlobTable::V6(table) => self.v6_dump(ctx, bootstrap_ctx, table)?,
}
if let Some(ArtifactStorage::FileDir(p)) = bootstrap_storage {
let bootstrap_data = bootstrap_ctx.writer.as_bytes()?;
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
let name = digest.to_string();
bootstrap_ctx.writer.finalize(Some(name.clone()))?;
let mut path = p.0.join(name);
path.set_extension(&p.1);
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
Ok(())
} else {
bootstrap_ctx.writer.finalize(Some(String::default()))
}
}
/// Traverse node tree, set inode index, ino, child_index and child_count etc according to the
/// RAFS metadata format, then store to nodes collection.
fn build_rafs(
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
tree: &mut Tree,
) -> Result<()> {
let parent_node = tree.node.clone();
let mut parent_node = parent_node.borrow_mut();
let parent_ino = parent_node.inode.ino();
let block_size = ctx.v6_block_size();
// In case of multi-layer building, it's possible that the parent node is not a directory.
if parent_node.is_dir() {
parent_node
.inode
.set_child_count(tree.children.len() as u32);
if ctx.fs_version.is_v5() {
parent_node
.inode
.set_child_index(bootstrap_ctx.get_next_ino() as u32);
} else if ctx.fs_version.is_v6() {
// Layout directory entries for v6.
let d_size = parent_node.v6_dirent_size(ctx, tree)?;
parent_node.v6_set_dir_offset(bootstrap_ctx, d_size, block_size)?;
}
}
let mut dirs: Vec<&mut Tree> = Vec::new();
for child in tree.children.iter_mut() {
let child_node = child.node.clone();
let mut child_node = child_node.borrow_mut();
let index = bootstrap_ctx.generate_next_ino();
child_node.index = index;
if ctx.fs_version.is_v5() {
child_node.inode.set_parent(parent_ino);
}
// Handle hardlink.
// All hardlink nodes' ino and nlink should be the same.
// We need to find the hardlink node index list in the layer where the node is located,
// because the real_ino may differ between layers.
let mut v6_hardlink_offset: Option<u64> = None;
let key = (
child_node.layer_idx,
child_node.info.src_ino,
child_node.info.src_dev,
);
if let Some(indexes) = bootstrap_ctx.inode_map.get_mut(&key) {
let nlink = indexes.len() as u32 + 1;
// Update nlink for previous hardlink inodes
for n in indexes.iter() {
n.borrow_mut().inode.set_nlink(nlink);
}
let (first_ino, first_offset) = {
let first_node = indexes[0].borrow_mut();
(first_node.inode.ino(), first_node.v6_offset)
};
// set offset for rafs v6 hardlinks
v6_hardlink_offset = Some(first_offset);
child_node.inode.set_nlink(nlink);
child_node.inode.set_ino(first_ino);
indexes.push(child.node.clone());
} else {
child_node.inode.set_ino(index);
child_node.inode.set_nlink(1);
// Store inode real ino
bootstrap_ctx
.inode_map
.insert(key, vec![child.node.clone()]);
}
// update bootstrap_ctx.offset for rafs v6 non-dir nodes.
if !child_node.is_dir() && ctx.fs_version.is_v6() {
child_node.v6_set_offset(bootstrap_ctx, v6_hardlink_offset, block_size)?;
}
ctx.prefetch.insert(&child.node, child_node.deref());
if child_node.is_dir() {
dirs.push(child);
}
}
// According to filesystem semantics, a parent directory should have nlink equal to
// the number of its child directories plus 2.
if parent_node.is_dir() {
parent_node.inode.set_nlink((2 + dirs.len()) as u32);
}
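// Illustrative note (not in the original source): the constant 2 covers the
// directory's own "." entry plus the link from its parent; each subdirectory
// adds one more link via its ".." entry, so a directory with 3 subdirectories
// has nlink 5.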
for dir in dirs {
Self::build_rafs(ctx, bootstrap_ctx, dir)?;
}
Ok(())
}
/// Load a parent RAFS bootstrap and return the `Tree` object representing the filesystem.
pub fn load_parent_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<Tree> {
let rs = if let Some(path) = bootstrap_mgr.f_parent_path.as_ref() {
RafsSuper::load_from_file(path, ctx.configuration.clone(), false).map(|(rs, _)| rs)?
} else {
return Err(Error::msg("bootstrap context's parent bootstrap is null"));
};
let config = RafsSuperConfig {
compressor: ctx.compressor,
digester: ctx.digester,
chunk_size: ctx.chunk_size,
batch_size: ctx.batch_size,
explicit_uidgid: ctx.explicit_uidgid,
version: ctx.fs_version,
is_tarfs_mode: rs.meta.flags.contains(RafsSuperFlags::TARTFS_MODE),
};
config.check_compatibility(&rs.meta)?;
// Reuse the lower layer blob table;
// we need to append the upper layer's blob entry to the table.
blob_mgr.extend_from_blob_table(ctx, rs.superblock.get_blob_infos())?;
// Build the node tree of the lower layer from a bootstrap file, and add chunks
// of lower nodes to layered_chunk_dict for chunk deduplication in the next build.
Tree::from_bootstrap(&rs, &mut blob_mgr.layered_chunk_dict)
.context("failed to build tree from bootstrap")
}
}

View File

@ -8,7 +8,7 @@ use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, Mutex};
use anyhow::{Context, Result};
use anyhow::{bail, Context, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::layout::v5::RafsV5ChunkInfo;
@ -16,25 +16,43 @@ use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig};
use nydus_storage::device::BlobInfo;
use nydus_utils::digest::{self, RafsDigest};
use crate::core::tree::Tree;
use crate::Tree;
#[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)]
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32);
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32, pub Option<u32>);
/// Trait to manage chunk cache for chunk deduplication.
pub trait ChunkDict: Sync + Send + 'static {
fn add_chunk(&mut self, chunk: ChunkWrapper, digester: digest::Algorithm);
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&ChunkWrapper>;
/// Add a chunk into the cache.
fn add_chunk(&mut self, chunk: Arc<ChunkWrapper>, digester: digest::Algorithm);
/// Get a cached chunk from the cache.
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&Arc<ChunkWrapper>>;
/// Get all `BlobInfo` objects referenced by cached chunks.
fn get_blobs(&self) -> Vec<Arc<BlobInfo>>;
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&BlobInfo>;
/// Get the `BlobInfo` object with inner index `idx`.
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&Arc<BlobInfo>>;
/// Associate an external index with the inner index.
fn set_real_blob_idx(&self, inner_idx: u32, out_idx: u32);
/// Get the external index associated with an inner index.
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32>;
/// Get the digest algorithm used to generate chunk digest.
fn digester(&self) -> digest::Algorithm;
}
impl ChunkDict for () {
fn add_chunk(&mut self, _chunk: ChunkWrapper, _digester: digest::Algorithm) {}
fn add_chunk(&mut self, _chunk: Arc<ChunkWrapper>, _digester: digest::Algorithm) {}
fn get_chunk(&self, _digest: &RafsDigest, _uncompressed_size: u32) -> Option<&ChunkWrapper> {
fn get_chunk(
&self,
_digest: &RafsDigest,
_uncompressed_size: u32,
) -> Option<&Arc<ChunkWrapper>> {
None
}
@ -42,7 +60,7 @@ impl ChunkDict for () {
Vec::new()
}
fn get_blob_by_inner_idx(&self, _idx: u32) -> Option<&BlobInfo> {
fn get_blob_by_inner_idx(&self, _idx: u32) -> Option<&Arc<BlobInfo>> {
None
}
@ -59,15 +77,16 @@ impl ChunkDict for () {
}
}
/// An implementation of [ChunkDict] based on [HashMap].
pub struct HashChunkDict {
pub m: HashMap<RafsDigest, (ChunkWrapper, AtomicU32)>,
m: HashMap<RafsDigest, (Arc<ChunkWrapper>, AtomicU32)>,
blobs: Vec<Arc<BlobInfo>>,
blob_idx_m: Mutex<BTreeMap<u32, u32>>,
digester: digest::Algorithm,
}
impl ChunkDict for HashChunkDict {
fn add_chunk(&mut self, chunk: ChunkWrapper, digester: digest::Algorithm) {
fn add_chunk(&mut self, chunk: Arc<ChunkWrapper>, digester: digest::Algorithm) {
if self.digester == digester {
if let Some(e) = self.m.get(chunk.id()) {
e.1.fetch_add(1, Ordering::AcqRel);
@ -78,7 +97,7 @@ impl ChunkDict for HashChunkDict {
}
}
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&ChunkWrapper> {
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&Arc<ChunkWrapper>> {
if let Some((chunk, _)) = self.m.get(digest) {
if chunk.uncompressed_size() == 0 || chunk.uncompressed_size() == uncompressed_size {
return Some(chunk);
@ -91,8 +110,8 @@ impl ChunkDict for HashChunkDict {
self.blobs.clone()
}
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&BlobInfo> {
self.blobs.get(idx as usize).map(|b| b.as_ref())
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&Arc<BlobInfo>> {
self.blobs.get(idx as usize)
}
fn set_real_blob_idx(&self, inner_idx: u32, out_idx: u32) {
@ -109,6 +128,7 @@ impl ChunkDict for HashChunkDict {
}
impl HashChunkDict {
/// Create a new instance of [HashChunkDict].
pub fn new(digester: digest::Algorithm) -> Self {
HashChunkDict {
m: Default::default(),
@ -118,12 +138,29 @@ impl HashChunkDict {
}
}
fn from_bootstrap_file(
/// Get an immutable reference to the internal `HashMap`.
pub fn hashmap(&self) -> &HashMap<RafsDigest, (Arc<ChunkWrapper>, AtomicU32)> {
&self.m
}
/// Parse commandline argument for chunk dictionary and load chunks into the dictionary.
pub fn from_commandline_arg(
arg: &str,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Arc<dyn ChunkDict>> {
let file_path = parse_chunk_dict_arg(arg)?;
HashChunkDict::from_bootstrap_file(&file_path, config, rafs_config)
.map(|d| Arc::new(d) as Arc<dyn ChunkDict>)
}
/// Load chunks from the RAFS filesystem into the chunk dictionary.
pub fn from_bootstrap_file(
path: &Path,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Self> {
let (rs, _) = RafsSuper::load_from_file(path, config, true, true)
let (rs, _) = RafsSuper::load_from_file(path, config, true)
.with_context(|| format!("failed to open bootstrap file {:?}", path))?;
let mut d = HashChunkDict {
m: HashMap::new(),
@ -163,7 +200,8 @@ impl HashChunkDict {
for idx in 0..(size / unit_size) {
let chunk = rs.superblock.get_chunk_info(idx)?;
self.add_chunk(ChunkWrapper::from_chunk_info(chunk.as_ref()), self.digester);
let chunk_info = Arc::new(ChunkWrapper::from_chunk_info(chunk));
self.add_chunk(chunk_info, self.digester);
}
Ok(())
@ -196,17 +234,6 @@ pub fn parse_chunk_dict_arg(arg: &str) -> Result<PathBuf> {
}
}
/// Load a chunk dictionary from external source.
pub(crate) fn import_chunk_dict(
arg: &str,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Arc<dyn ChunkDict>> {
let file_path = parse_chunk_dict_arg(arg)?;
HashChunkDict::from_bootstrap_file(&file_path, config, rafs_config)
.map(|d| Arc::new(d) as Arc<dyn ChunkDict>)
}
#[cfg(test)]
mod tests {
use super::*;
@ -218,7 +245,7 @@ mod tests {
fn test_null_dict() {
let mut dict = Box::new(()) as Box<dyn ChunkDict>;
let chunk = ChunkWrapper::new(RafsVersion::V5);
let chunk = Arc::new(ChunkWrapper::new(RafsVersion::V5));
dict.add_chunk(chunk.clone(), digest::Algorithm::Sha256);
assert!(dict.get_chunk(chunk.id(), 0).is_none());
assert_eq!(dict.get_blobs().len(), 0);
@ -229,16 +256,20 @@ mod tests {
fn test_chunk_dict() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path = PathBuf::from(root_dir);
source_path.push("tests/texture/bootstrap/rafs-v5.boot");
source_path.push("../tests/texture/bootstrap/rafs-v5.boot");
let path = source_path.to_str().unwrap();
let rafs_config = RafsSuperConfig {
version: RafsVersion::V5,
compressor: compress::Algorithm::Lz4Block,
digester: digest::Algorithm::Blake3,
chunk_size: 0x100000,
batch_size: 0,
explicit_uidgid: true,
is_tarfs_mode: false,
};
let dict = import_chunk_dict(path, Arc::new(ConfigV2::default()), &rafs_config).unwrap();
let dict =
HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config)
.unwrap();
assert!(dict.get_chunk(&RafsDigest::default(), 0).is_none());
assert_eq!(dict.get_blobs().len(), 18);

View File

@ -0,0 +1,94 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashSet;
use std::convert::TryFrom;
use anyhow::{bail, Result};
const ERR_UNSUPPORTED_FEATURE: &str = "unsupported feature";
/// Feature flags to control behavior of RAFS filesystem builder.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub enum Feature {
/// Append a Table Of Content footer to RAFS v6 data blob, to help locate data sections.
BlobToc,
}
impl TryFrom<&str> for Feature {
type Error = anyhow::Error;
fn try_from(f: &str) -> Result<Self> {
match f {
"blob-toc" => Ok(Self::BlobToc),
_ => bail!(
"{} `{}`, please try upgrading to the latest nydus-image",
ERR_UNSUPPORTED_FEATURE,
f,
),
}
}
}
/// A set of enabled feature flags to control the behavior of the RAFS filesystem builder.
#[derive(Clone, Debug)]
pub struct Features(HashSet<Feature>);
impl Default for Features {
fn default() -> Self {
Self::new()
}
}
impl Features {
/// Create a new instance of [Features].
pub fn new() -> Self {
Self(HashSet::new())
}
/// Check whether a feature is enabled or not.
pub fn is_enabled(&self, feature: Feature) -> bool {
self.0.contains(&feature)
}
}
impl TryFrom<&str> for Features {
type Error = anyhow::Error;
fn try_from(features: &str) -> Result<Self> {
let mut list = Features::new();
for feat in features.trim().split(',') {
if !feat.is_empty() {
let feature = Feature::try_from(feat.trim())?;
list.0.insert(feature);
}
}
Ok(list)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_feature() {
assert_eq!(Feature::try_from("blob-toc").unwrap(), Feature::BlobToc);
Feature::try_from("unknown-feature-bit").unwrap_err();
}
#[test]
fn test_features() {
let features = Features::try_from("blob-toc").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc,").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc, ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from(" blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
}
}
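A minimal usage sketch for the parser above; the surrounding function and the raw flag string are illustrative:
use std::convert::TryFrom;

// Parse a comma-separated feature list and test a single flag; `raw` would
// typically come from a CLI option such as `--features` (illustrative).
fn toc_requested(raw: &str) -> anyhow::Result<bool> {
    // `Features::try_from` trims each item and skips empty entries, so
    // "blob-toc," and " blob-toc " both parse successfully.
    let features = Features::try_from(raw)?;
    Ok(features.is_enabled(Feature::BlobToc))
}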

62 builder/src/core/layout.rs Normal file

@ -0,0 +1,62 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::Result;
use std::ops::Deref;
use super::node::Node;
use crate::{Overlay, Prefetch, TreeNode};
#[derive(Clone)]
pub struct BlobLayout {}
impl BlobLayout {
pub fn layout_blob_simple(prefetch: &Prefetch) -> Result<(Vec<TreeNode>, usize)> {
let (pre, non_pre) = prefetch.get_file_nodes();
let mut inodes: Vec<TreeNode> = pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let mut non_prefetch_inodes: Vec<TreeNode> = non_pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let prefetch_entries = inodes.len();
inodes.append(&mut non_prefetch_inodes);
Ok((inodes, prefetch_entries))
}
#[inline]
fn should_dump_node(node: &Node) -> bool {
node.overlay == Overlay::UpperAddition || node.overlay == Overlay::UpperModification
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{core::node::NodeInfo, Tree};
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
#[test]
fn test_layout_blob_simple() {
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let mut node1 = Node::new(inode.clone(), NodeInfo::default(), 1);
node1.overlay = Overlay::UpperAddition;
let tree = Tree::new(node1);
let mut prefetch = Prefetch::default();
prefetch.insert(&tree.node, tree.node.borrow().deref());
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&prefetch).unwrap();
assert_eq!(inodes.len(), 1);
assert_eq!(prefetch_entries, 0);
}
}

builder/src/core/mod.rs

@ -3,12 +3,14 @@
// SPDX-License-Identifier: Apache-2.0
pub(crate) mod blob;
pub(crate) mod blob_compact;
pub(crate) mod bootstrap;
pub(crate) mod chunk_dict;
pub(crate) mod context;
pub(crate) mod feature;
pub(crate) mod layout;
pub(crate) mod node;
pub(crate) mod overlay;
pub(crate) mod prefetch;
pub(crate) mod tree;
pub(crate) mod v5;
pub(crate) mod v6;

1275 builder/src/core/node.rs Normal file

File diff suppressed because it is too large

361 builder/src/core/overlay.rs Normal file

@ -0,0 +1,361 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021-2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Execute file/directory whiteout rules when merging multiple RAFS filesystems
//! according to the OCI or Overlayfs specifications.
use std::ffi::{OsStr, OsString};
use std::fmt::{self, Display, Formatter};
use std::os::unix::ffi::OsStrExt;
use std::str::FromStr;
use anyhow::{anyhow, Error, Result};
use super::node::Node;
/// Prefix for OCI whiteout file.
pub const OCISPEC_WHITEOUT_PREFIX: &str = ".wh.";
/// Prefix for OCI whiteout opaque.
pub const OCISPEC_WHITEOUT_OPAQUE: &str = ".wh..wh..opq";
/// Extended attribute key for Overlayfs whiteout opaque.
pub const OVERLAYFS_WHITEOUT_OPAQUE: &str = "trusted.overlay.opaque";
/// RAFS filesystem overlay specifications.
///
/// When merging multiple RAFS filesystems into one, special rules are needed to white out
/// files/directories in lower/parent filesystems. The whiteout specifications defined by the
/// OCI image specification and Linux Overlayfs are widely adopted, so both are supported
/// by the RAFS filesystem.
///
/// # Overlayfs Whiteout
///
/// In order to support rm and rmdir without changing the lower filesystem, an overlay filesystem
/// needs to record in the upper filesystem that files have been removed. This is done using
/// whiteouts and opaque directories (non-directories are always opaque).
///
/// A whiteout is created as a character device with 0/0 device number. When a whiteout is found
/// in the upper level of a merged directory, any matching name in the lower level is ignored,
/// and the whiteout itself is also hidden.
///
/// A directory is made opaque by setting the xattr “trusted.overlay.opaque” to “y”. Where the upper
/// filesystem contains an opaque directory, any directory in the lower filesystem with the same
/// name is ignored.
///
/// # OCI Image Whiteout
/// - A whiteout file is an empty file with a special filename that signifies a path should be
/// deleted.
/// - A whiteout filename consists of the prefix .wh. plus the basename of the path to be deleted.
/// - As files prefixed with .wh. are special whiteout markers, it is not possible to create a
/// filesystem which has a file or directory with a name beginning with .wh..
/// - Once a whiteout is applied, the whiteout itself MUST also be hidden.
/// - Whiteout files MUST only apply to resources in lower/parent layers.
/// - Files that are present in the same layer as a whiteout file can only be hidden by whiteout
/// files in subsequent layers.
/// - In addition to expressing that a single entry should be removed from a lower layer, layers
/// may remove all of the children using an opaque whiteout entry.
/// - An opaque whiteout entry is a file with the name .wh..wh..opq indicating that all siblings
/// are hidden in the lower layer.
#[derive(Clone, Copy, PartialEq)]
pub enum WhiteoutSpec {
/// Overlay whiteout rules according to the OCI image specification.
///
/// https://github.com/opencontainers/image-spec/blob/master/layer.md#whiteouts
Oci,
/// Overlay whiteout rules according to the Linux Overlayfs specification.
///
/// "whiteouts and opaque directories" in https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
Overlayfs,
/// No whiteout, keep all content from lower/parent filesystems.
None,
}
impl fmt::Display for WhiteoutSpec {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
WhiteoutSpec::Oci => write!(f, "oci"),
WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
WhiteoutSpec::None => write!(f, "none"),
}
}
}
impl Default for WhiteoutSpec {
fn default() -> Self {
Self::Oci
}
}
impl FromStr for WhiteoutSpec {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.to_lowercase().as_str() {
"oci" => Ok(Self::Oci),
"overlayfs" => Ok(Self::Overlayfs),
"none" => Ok(Self::None),
_ => Err(anyhow!("invalid whiteout spec")),
}
}
}
/// RAFS filesystem overlay operation types.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WhiteoutType {
OciOpaque,
OciRemoval,
OverlayFsOpaque,
OverlayFsRemoval,
}
impl WhiteoutType {
pub fn is_removal(&self) -> bool {
*self == WhiteoutType::OciRemoval || *self == WhiteoutType::OverlayFsRemoval
}
}
/// RAFS filesystem node overlay state.
#[allow(dead_code)]
#[derive(Clone, Debug, PartialEq)]
pub enum Overlay {
Lower,
UpperAddition,
UpperModification,
}
impl Overlay {
pub fn is_lower_layer(&self) -> bool {
self == &Overlay::Lower
}
}
impl Display for Overlay {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
match self {
Overlay::Lower => write!(f, "LOWER"),
Overlay::UpperAddition => write!(f, "ADDED"),
Overlay::UpperModification => write!(f, "MODIFIED"),
}
}
}
impl Node {
/// Check whether the inode is a special overlayfs whiteout file.
pub fn is_overlayfs_whiteout(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs {
return false;
}
self.inode.is_chrdev()
&& nydus_utils::compact::major_dev(self.info.rdev) == 0
&& nydus_utils::compact::minor_dev(self.info.rdev) == 0
}
/// Check whether the inode (directory) is an overlayfs whiteout opaque directory.
pub fn is_overlayfs_opaque(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs || !self.is_dir() {
return false;
}
// A directory is made opaque by setting the xattr "trusted.overlay.opaque" to "y".
if let Some(v) = self
.info
.xattrs
.get(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE))
{
if let Ok(v) = std::str::from_utf8(v.as_slice()) {
return v == "y";
}
}
false
}
/// Get whiteout type to process the inode.
pub fn whiteout_type(&self, spec: WhiteoutSpec) -> Option<WhiteoutType> {
if self.overlay == Overlay::Lower {
return None;
}
match spec {
WhiteoutSpec::Oci => {
if let Some(name) = self.name().to_str() {
if name == OCISPEC_WHITEOUT_OPAQUE {
return Some(WhiteoutType::OciOpaque);
} else if name.starts_with(OCISPEC_WHITEOUT_PREFIX) {
return Some(WhiteoutType::OciRemoval);
}
}
}
WhiteoutSpec::Overlayfs => {
if self.is_overlayfs_whiteout(spec) {
return Some(WhiteoutType::OverlayFsRemoval);
} else if self.is_overlayfs_opaque(spec) {
return Some(WhiteoutType::OverlayFsOpaque);
}
}
WhiteoutSpec::None => {
return None;
}
}
None
}
/// Get original filename from a whiteout filename.
pub fn origin_name(&self, t: WhiteoutType) -> Option<&OsStr> {
if let Some(name) = self.name().to_str() {
if t == WhiteoutType::OciRemoval {
// the whiteout filename prefixes the basename of the path to be deleted with ".wh.".
return Some(OsStr::from_bytes(
name[OCISPEC_WHITEOUT_PREFIX.len()..].as_bytes(),
));
} else if t == WhiteoutType::OverlayFsRemoval {
// the whiteout file has the same name as the file to be deleted.
return Some(name.as_ref());
}
}
None
}
}
#[cfg(test)]
mod tests {
use nydus_rafs::metadata::{inode::InodeWrapper, layout::v5::RafsV5Inode};
use crate::core::node::NodeInfo;
use super::*;
#[test]
fn test_white_spec_from_str() {
let spec = WhiteoutSpec::default();
assert!(matches!(spec, WhiteoutSpec::Oci));
assert!(WhiteoutSpec::from_str("oci").is_ok());
assert!(WhiteoutSpec::from_str("overlayfs").is_ok());
assert!(WhiteoutSpec::from_str("none").is_ok());
assert!(WhiteoutSpec::from_str("foo").is_err());
}
#[test]
fn test_white_type_removal_check() {
let t1 = WhiteoutType::OciOpaque;
let t2 = WhiteoutType::OciRemoval;
let t3 = WhiteoutType::OverlayFsOpaque;
let t4 = WhiteoutType::OverlayFsRemoval;
assert!(!t1.is_removal());
assert!(t2.is_removal());
assert!(!t3.is_removal());
assert!(t4.is_removal());
}
#[test]
fn test_overlay_low_layer_check() {
let t1 = Overlay::Lower;
let t2 = Overlay::UpperAddition;
let t3 = Overlay::UpperModification;
assert!(t1.is_lower_layer());
assert!(!t2.is_lower_layer());
assert!(!t3.is_lower_layer());
}
#[test]
fn test_node() {
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, NodeInfo::default(), 0);
assert!(!node.is_overlayfs_whiteout(WhiteoutSpec::None));
assert!(node.is_overlayfs_whiteout(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsRemoval
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info: NodeInfo = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsOpaque
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let inode = InodeWrapper::V5(RafsV5Inode::default());
let info = NodeInfo::default();
let mut node = Node::new(inode, info, 0);
assert_eq!(node.whiteout_type(WhiteoutSpec::None), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Oci), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
node.overlay = Overlay::Lower;
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
let name = OCISPEC_WHITEOUT_PREFIX.to_string() + "foo";
info.target_vec.push(name.clone().into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciRemoval
);
assert_eq!(node.origin_name(WhiteoutType::OciRemoval).unwrap(), "foo");
assert_eq!(node.origin_name(WhiteoutType::OciOpaque), None);
assert_eq!(
node.origin_name(WhiteoutType::OverlayFsRemoval).unwrap(),
OsStr::new(&name)
);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
info.target_vec.push(OCISPEC_WHITEOUT_OPAQUE.into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciOpaque
);
}
}
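The OCI whiteout rules reduce to prefix handling on entry names. A sketch using the constants from this file; the helper itself is illustrative, not part of the builder API:
// Map an OCI whiteout entry name to the lower-layer name it deletes.
// Opaque markers hide all siblings rather than a single path, so they
// yield None (note the opaque check must come before the prefix check,
// since ".wh..wh..opq" also starts with ".wh.").
fn oci_deleted_name(entry: &str) -> Option<&str> {
    if entry == OCISPEC_WHITEOUT_OPAQUE {
        None
    } else {
        entry.strip_prefix(OCISPEC_WHITEOUT_PREFIX)
    }
}

// oci_deleted_name(".wh.foo")      == Some("foo")
// oci_deleted_name(".wh..wh..opq") == None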

391 builder/src/core/prefetch.rs Normal file

@ -0,0 +1,391 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::path::PathBuf;
use std::str::FromStr;
use anyhow::{anyhow, Context, Error, Result};
use indexmap::IndexMap;
use nydus_rafs::metadata::layout::v5::RafsV5PrefetchTable;
use nydus_rafs::metadata::layout::v6::{calculate_nid, RafsV6PrefetchTable};
use super::node::Node;
use crate::core::tree::TreeNode;
/// Filesystem data prefetch policy.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum PrefetchPolicy {
None,
/// Prefetch is issued from the Fs layer, which leverages inode/chunkinfo to prefetch data
/// from the blob no matter where it resides (OSS/localfs), and caches the data into the
/// blobcache if one exists. It is the more flexible policy. With this policy applied, the
/// image builder currently puts prefetch files' data into a contiguous region within the
/// blob, which behaves very similarly to the `Blob` policy.
Fs,
/// Prefetch will be issued directly from backend/blob layer
Blob,
}
impl Default for PrefetchPolicy {
fn default() -> Self {
Self::None
}
}
impl FromStr for PrefetchPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"none" => Ok(Self::None),
"fs" => Ok(Self::Fs),
"blob" => Ok(Self::Blob),
_ => Err(anyhow!("invalid prefetch policy")),
}
}
}
/// Gather prefetch patterns from STDIN line by line.
///
/// Input format:
/// printf "/relative/path/to/rootfs/1\n/relative/path/to/rootfs/2"
///
/// It does not guarantee that a specified path exists in the local filesystem, because the
/// path may only exist in a parent image/layer.
fn get_patterns() -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let stdin = std::io::stdin();
let mut patterns = Vec::new();
loop {
let mut file = String::new();
let size = stdin
.read_line(&mut file)
.context("failed to read prefetch pattern")?;
if size == 0 {
return generate_patterns(patterns);
}
patterns.push(file);
}
}
fn generate_patterns(input: Vec<String>) -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let mut patterns = IndexMap::new();
for file in &input {
let file_trimmed: PathBuf = file.trim().into();
// Sanity check for the list format.
if !file_trimmed.is_absolute() {
warn!(
"Illegal file path {} specified, should be absolute path",
file
);
continue;
}
let mut current_path = file_trimmed.clone();
let mut skip = patterns.contains_key(&current_path);
while !skip && current_path.pop() {
if patterns.contains_key(&current_path) {
skip = true;
break;
}
}
if skip {
warn!(
"prefetch pattern {} is covered by previous pattern and thus omitted",
file
);
} else {
debug!(
"prefetch pattern: {}, trimmed file name {:?}",
file, file_trimmed
);
patterns.insert(file_trimmed, None);
}
}
Ok(patterns)
}
/// Manage filesystem data prefetch configuration and state for builder.
#[derive(Default, Clone)]
pub struct Prefetch {
pub policy: PrefetchPolicy,
pub disabled: bool,
// Patterns used to generate the prefetch inode array, which will be put into the prefetch
// table in the RAFS bootstrap. Patterns may refer to directory or file inodes.
patterns: IndexMap<PathBuf, Option<TreeNode>>,
// File list to help optimize the layout of data blobs.
// Files from this list may be put at the head of the data blob for better prefetch
// performance. The index of the matched prefetch pattern is stored in the `usize`,
// which helps to sort the prefetch files in the final layout.
// It only stores regular files.
files_prefetch: Vec<(TreeNode, usize)>,
// It stores all non-prefetch files not recorded in `files_prefetch`, including regular
// files, directories, symlinks, etc., in the same order as the BFS traversal of the
// file tree.
files_non_prefetch: Vec<TreeNode>,
}
impl Prefetch {
/// Create a new instance of [Prefetch].
pub fn new(policy: PrefetchPolicy) -> Result<Self> {
let patterns = if policy != PrefetchPolicy::None {
get_patterns().context("failed to get prefetch patterns")?
} else {
IndexMap::new()
};
Ok(Self {
policy,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10000),
files_non_prefetch: Vec::with_capacity(10000),
})
}
/// Insert a node into the prefetch vector if it matches the prefetch rules, recording the
/// index of the matched prefetch pattern; otherwise insert it into the non-prefetch vector.
pub fn insert(&mut self, obj: &TreeNode, node: &Node) {
// Newly created root inode of this rafs has zero size
if self.policy == PrefetchPolicy::None
|| self.disabled
|| (node.inode.is_reg() && node.inode.size() == 0)
{
self.files_non_prefetch.push(obj.clone());
return;
}
let mut path = node.target().clone();
let mut exact_match = true;
loop {
if let Some((idx, _, v)) = self.patterns.get_full_mut(&path) {
if exact_match {
*v = Some(obj.clone());
}
if node.is_reg() {
self.files_prefetch.push((obj.clone(), idx));
} else {
self.files_non_prefetch.push(obj.clone());
}
return;
}
// If no exact match, try to match parent dir until root.
if !path.pop() {
self.files_non_prefetch.push(obj.clone());
return;
}
exact_match = false;
}
}
/// Get the node vectors of files in the prefetch list and the non-prefetch list.
/// The order of prefetch files matches the order of the prefetch patterns.
/// The order of non-prefetch files matches the BFS traversal order of the file tree.
pub fn get_file_nodes(&self) -> (Vec<TreeNode>, Vec<TreeNode>) {
let mut p_files = self.files_prefetch.clone();
p_files.sort_by_key(|k| k.1);
let p_files = p_files.into_iter().map(|(s, _)| s).collect();
(p_files, self.files_non_prefetch.clone())
}
/// Get the number of `valid` prefetch rules.
pub fn fs_prefetch_rule_count(&self) -> u32 {
if self.policy == PrefetchPolicy::Fs {
self.patterns.values().filter(|v| v.is_some()).count() as u32
} else {
0
}
}
/// Generate filesystem layer prefetch list for RAFS v5.
pub fn get_v5_prefetch_table(&mut self) -> Option<RafsV5PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV5PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
assert!(node.inode.ino() < u32::MAX as u64);
prefetch_table.add_entry(node.inode.ino() as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Generate filesystem layer prefetch list for RAFS v6.
pub fn get_v6_prefetch_table(&mut self, meta_addr: u64) -> Option<RafsV6PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV6PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
let ino = node.inode.ino();
debug_assert!(ino > 0);
let nid = calculate_nid(node.v6_offset, meta_addr);
// A 32-bit nid can represent a 128GB bootstrap (2^32 inode slots of 32 bytes each),
// which is large enough, so the cast to u32 below is safe.
assert!(nid < u32::MAX as u64);
trace!(
"v6 prefetch table: map node index {} to offset {} nid {} path {:?} name {:?}",
ino,
node.v6_offset,
nid,
node.path(),
node.name()
);
prefetch_table.add_entry(nid as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Disable filesystem data prefetch.
pub fn disable(&mut self) {
self.disabled = true;
}
/// Reset to initialization state.
pub fn clear(&mut self) {
self.disabled = false;
self.patterns.clear();
self.files_prefetch.clear();
self.files_non_prefetch.clear();
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::core::node::NodeInfo;
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
use std::cell::RefCell;
#[test]
fn test_generate_pattern() {
let input = vec![
"/a/b".to_string(),
"/a/b/c".to_string(),
"/a/b/d".to_string(),
"/a/b/d/e".to_string(),
"/f".to_string(),
"/h/i".to_string(),
];
let patterns = generate_patterns(input).unwrap();
assert_eq!(patterns.len(), 3);
assert!(patterns.contains_key(&PathBuf::from("/a/b")));
assert!(patterns.contains_key(&PathBuf::from("/f")));
assert!(patterns.contains_key(&PathBuf::from("/h/i")));
assert!(!patterns.contains_key(&PathBuf::from("/")));
assert!(!patterns.contains_key(&PathBuf::from("/a")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/c")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d/e")));
assert!(!patterns.contains_key(&PathBuf::from("/k")));
}
#[test]
fn test_prefetch_policy() {
let policy = PrefetchPolicy::from_str("fs").unwrap();
assert_eq!(policy, PrefetchPolicy::Fs);
let policy = PrefetchPolicy::from_str("blob").unwrap();
assert_eq!(policy, PrefetchPolicy::Blob);
let policy = PrefetchPolicy::from_str("none").unwrap();
assert_eq!(policy, PrefetchPolicy::None);
PrefetchPolicy::from_str("").unwrap_err();
PrefetchPolicy::from_str("invalid").unwrap_err();
}
#[test]
fn test_prefetch() {
let input = vec![
"/a/b".to_string(),
"/f".to_string(),
"/h/i".to_string(),
"/k".to_string(),
];
let patterns = generate_patterns(input).unwrap();
let mut prefetch = Prefetch {
policy: PrefetchPolicy::Fs,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10),
files_non_prefetch: Vec::with_capacity(10),
};
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let info = NodeInfo::default();
let mut info1 = info.clone();
info1.target = PathBuf::from("/f");
let node1 = Node::new(inode.clone(), info1, 1);
let node1 = TreeNode::new(RefCell::from(node1));
prefetch.insert(&node1, &node1.borrow());
let inode2 = inode.clone();
let mut info2 = info.clone();
info2.target = PathBuf::from("/a/b");
let node2 = Node::new(inode2, info2, 1);
let node2 = TreeNode::new(RefCell::from(node2));
prefetch.insert(&node2, &node2.borrow());
let inode3 = inode.clone();
let mut info3 = info.clone();
info3.target = PathBuf::from("/h/i/j");
let node3 = Node::new(inode3, info3, 1);
let node3 = TreeNode::new(RefCell::from(node3));
prefetch.insert(&node3, &node3.borrow());
let inode4 = inode.clone();
let mut info4 = info.clone();
info4.target = PathBuf::from("/z");
let node4 = Node::new(inode4, info4, 1);
let node4 = TreeNode::new(RefCell::from(node4));
prefetch.insert(&node4, &node4.borrow());
let inode5 = inode.clone();
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_size(0);
let mut info5 = info;
info5.target = PathBuf::from("/a/b/d");
let node5 = Node::new(inode5, info5, 1);
let node5 = TreeNode::new(RefCell::from(node5));
prefetch.insert(&node5, &node5.borrow());
// node1, node2
assert_eq!(prefetch.fs_prefetch_rule_count(), 2);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 4);
assert_eq!(non_pre.len(), 1);
let pre_str: Vec<String> = pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(pre_str, vec!["/a/b", "/a/b/d", "/f", "/h/i/j"]);
let non_pre_str: Vec<String> = non_pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(non_pre_str, vec!["/z"]);
prefetch.clear();
assert_eq!(prefetch.fs_prefetch_rule_count(), 0);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 0);
assert_eq!(non_pre.len(), 0);
}
}
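A short sketch of the pattern-collapsing behavior verified above; the paths are illustrative:
// Overlapping patterns collapse to the shortest covering prefix, in input
// order (cf. test_generate_pattern above).
let input = vec!["/usr/bin".to_string(), "/usr/bin/bash".to_string()];
let patterns = generate_patterns(input).unwrap();
assert_eq!(patterns.len(), 1);
assert!(patterns.contains_key(&PathBuf::from("/usr/bin")));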

533 builder/src/core/tree.rs Normal file

@ -0,0 +1,533 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! An in-memory tree structure to maintain information for filesystem metadata.
//!
//! Steps to build the first layer for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Traverse the upper tree (FileSystemTree) to dump bootstrap and data blobs.
//!
//! Steps to build the second and following layers for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Load the lower tree (MetadataTree) from a metadata blob.
//! - Merge the final tree (OverlayTree) by applying the upper tree (FileSystemTree) to the
//! lower tree (MetadataTree).
//! - Traverse the merged tree (OverlayTree) to dump bootstrap and data blobs.
use std::cell::{RefCell, RefMut};
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
use anyhow::{bail, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::{bytes_to_os_str, RafsXAttrs};
use nydus_rafs::metadata::{Inode, RafsInodeExt, RafsSuper};
use nydus_utils::{lazy_drop, root_tracer, timing_tracer};
use super::node::{ChunkSource, Node, NodeChunk, NodeInfo};
use super::overlay::{Overlay, WhiteoutType};
use crate::core::overlay::OVERLAYFS_WHITEOUT_OPAQUE;
use crate::{BuildContext, ChunkDict};
/// Type alias for tree internal node.
pub type TreeNode = Rc<RefCell<Node>>;
/// An in-memory tree structure to maintain information and topology of filesystem nodes.
#[derive(Clone)]
pub struct Tree {
/// Filesystem node.
pub node: TreeNode,
/// Cached base name.
name: Vec<u8>,
/// Children tree nodes.
pub children: Vec<Tree>,
}
impl Tree {
/// Create a new instance of `Tree` from a filesystem node.
pub fn new(node: Node) -> Self {
let name = node.name().as_bytes().to_vec();
Tree {
node: Rc::new(RefCell::new(node)),
name,
children: Vec::new(),
}
}
/// Load a `Tree` from a bootstrap file, optionally caching chunk information.
pub fn from_bootstrap<T: ChunkDict>(rs: &RafsSuper, chunk_dict: &mut T) -> Result<Self> {
let tree_builder = MetadataTreeBuilder::new(rs);
let root_ino = rs.superblock.root_ino();
let root_inode = rs.get_extended_inode(root_ino, true)?;
let root_node = MetadataTreeBuilder::parse_node(rs, root_inode, PathBuf::from("/"))?;
let mut tree = Tree::new(root_node);
tree.children = timing_tracer!(
{ tree_builder.load_children(root_ino, Option::<PathBuf>::None, chunk_dict, true,) },
"load_tree_from_bootstrap"
)?;
Ok(tree)
}
/// Get name of the tree node.
pub fn name(&self) -> &[u8] {
&self.name
}
/// Set `Node` associated with the tree node.
pub fn set_node(&mut self, node: Node) {
self.node.replace(node);
}
/// Get mutably borrowed value to access the associated `Node` object.
pub fn borrow_mut_node(&self) -> RefMut<'_, Node> {
self.node.as_ref().borrow_mut()
}
/// Walk all nodes in DFS mode.
pub fn walk_dfs<F1, F2>(&self, pre: &mut F1, post: &mut F2) -> Result<()>
where
F1: FnMut(&Tree) -> Result<()>,
F2: FnMut(&Tree) -> Result<()>,
{
pre(self)?;
for child in &self.children {
child.walk_dfs(pre, post)?;
}
post(self)?;
Ok(())
}
/// Walk all nodes in DFS pre-order.
pub fn walk_dfs_pre<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(cb, &mut |_t| Ok(()))
}
/// Walk all nodes in DFS post-order.
pub fn walk_dfs_post<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(&mut |_t| Ok(()), cb)
}
/// Walk the tree in BFS mode.
pub fn walk_bfs<F>(&self, handle_self: bool, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
if handle_self {
cb(self)?;
}
let mut dirs = Vec::with_capacity(32);
for child in &self.children {
cb(child)?;
if child.borrow_mut_node().is_dir() {
dirs.push(child);
}
}
for dir in dirs {
dir.walk_bfs(false, cb)?;
}
Ok(())
}
/// Insert a new child node into the tree.
pub fn insert_child(&mut self, child: Tree) {
if let Err(idx) = self
.children
.binary_search_by_key(&&child.name, |n| &n.name)
{
self.children.insert(idx, child);
}
}
/// Get index of child node with specified `name`.
pub fn get_child_idx(&self, name: &[u8]) -> Option<usize> {
self.children.binary_search_by_key(&name, |n| &n.name).ok()
}
/// Get the tree node corresponding to the path.
pub fn get_node(&self, path: &Path) -> Option<&Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
for name in &target_vec[1..] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &tree.children[idx],
None => return None,
}
}
Some(tree)
}
/// Get the mutable tree node corresponding to the path.
pub fn get_node_mut(&mut self, path: &Path) -> Option<&mut Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
let last_idx = target_vec.len() - 1;
for name in &target_vec[1..last_idx] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &mut tree.children[idx],
None => return None,
}
}
if let Some(last_name) = target_vec.last() {
match tree.get_child_idx(last_name.as_bytes()) {
Some(idx) => Some(&mut tree.children[idx]),
None => None,
}
} else {
Some(tree)
}
}
/// Merge the upper layer tree into the lower layer tree, applying whiteout rules.
pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> {
assert_eq!(self.name, "/".as_bytes());
assert_eq!(upper.name, "/".as_bytes());
// Handle the root node.
upper.borrow_mut_node().overlay = Overlay::UpperModification;
self.node = upper.node.clone();
self.merge_children(ctx, &upper)?;
lazy_drop(upper);
Ok(())
}
fn merge_children(&mut self, ctx: &BuildContext, upper: &Tree) -> Result<()> {
// Handle whiteout nodes in the first round, and handle other nodes in the second round.
let mut modified = Vec::with_capacity(upper.children.len());
for u in upper.children.iter() {
let mut u_node = u.borrow_mut_node();
match u_node.whiteout_type(ctx.whiteout_spec) {
Some(WhiteoutType::OciRemoval) => {
if let Some(origin_name) = u_node.origin_name(WhiteoutType::OciRemoval) {
if let Some(idx) = self.get_child_idx(origin_name.as_bytes()) {
self.children.remove(idx);
}
}
}
Some(WhiteoutType::OciOpaque) => {
self.children.clear();
}
Some(WhiteoutType::OverlayFsRemoval) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children.remove(idx);
}
}
Some(WhiteoutType::OverlayFsOpaque) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children[idx].children.clear();
}
u_node.remove_xattr(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE));
modified.push(u);
}
None => modified.push(u),
}
}
let mut dirs = Vec::new();
for u in modified {
let mut u_node = u.borrow_mut_node();
if let Some(idx) = self.get_child_idx(&u.name) {
u_node.overlay = Overlay::UpperModification;
self.children[idx].node = u.node.clone();
} else {
u_node.overlay = Overlay::UpperAddition;
self.insert_child(Tree {
node: u.node.clone(),
name: u.name.clone(),
children: vec![],
});
}
if u_node.is_dir() {
dirs.push(u);
}
}
for dir in dirs {
if let Some(idx) = self.get_child_idx(&dir.name) {
self.children[idx].merge_children(ctx, dir)?;
} else {
bail!("builder: can not find directory in merged tree");
}
}
Ok(())
}
}
pub struct MetadataTreeBuilder<'a> {
rs: &'a RafsSuper,
}
impl<'a> MetadataTreeBuilder<'a> {
fn new(rs: &'a RafsSuper) -> Self {
Self { rs }
}
/// Build node tree by loading bootstrap file
fn load_children<T: ChunkDict, P: AsRef<Path>>(
&self,
ino: Inode,
parent: Option<P>,
chunk_dict: &mut T,
validate_digest: bool,
) -> Result<Vec<Tree>> {
let inode = self.rs.get_extended_inode(ino, validate_digest)?;
if !inode.is_dir() {
return Ok(Vec::new());
}
let parent_path = if let Some(parent) = parent {
parent.as_ref().join(inode.name())
} else {
PathBuf::from("/")
};
let blobs = self.rs.superblock.get_blob_infos();
let child_count = inode.get_child_count();
let mut children = Vec::with_capacity(child_count as usize);
for idx in 0..child_count {
let child = inode.get_child_by_index(idx)?;
let child_path = parent_path.join(child.name());
let child = Self::parse_node(self.rs, child.clone(), child_path)?;
if child.is_reg() {
for chunk in &child.chunks {
let blob_idx = chunk.inner.blob_index();
if let Some(blob) = blobs.get(blob_idx as usize) {
chunk_dict.add_chunk(chunk.inner.clone(), blob.digester());
}
}
}
let child = Tree::new(child);
children.push(child);
}
children.sort_unstable_by(|a, b| a.name.cmp(&b.name));
for child in children.iter_mut() {
let child_node = child.borrow_mut_node();
if child_node.is_dir() {
let child_ino = child_node.inode.ino();
drop(child_node);
child.children =
self.load_children(child_ino, Some(&parent_path), chunk_dict, validate_digest)?;
}
}
Ok(children)
}
/// Convert a `RafsInode` object to an in-memory `Node` object.
pub fn parse_node(rs: &RafsSuper, inode: Arc<dyn RafsInodeExt>, path: PathBuf) -> Result<Node> {
let chunks = if inode.is_reg() {
let chunk_count = inode.get_chunk_count();
let mut chunks = Vec::with_capacity(chunk_count as usize);
for i in 0..chunk_count {
let cki = inode.get_chunk_info(i)?;
chunks.push(NodeChunk {
source: ChunkSource::Parent,
inner: Arc::new(ChunkWrapper::from_chunk_info(cki)),
});
}
chunks
} else {
Vec::new()
};
let symlink = if inode.is_symlink() {
Some(inode.get_symlink()?)
} else {
None
};
let mut xattrs = RafsXAttrs::new();
for name in inode.get_xattrs()? {
let name = bytes_to_os_str(&name);
let value = inode.get_xattr(name)?;
xattrs.add(name.to_os_string(), value.unwrap_or_default())?;
}
// Nodes loaded from bootstrap will only be used as `Overlay::Lower`, so make `dev` invalid
// to avoid breaking the hardlink detection logic.
let src_dev = u64::MAX;
let rdev = inode.rdev() as u64;
let inode = InodeWrapper::from_inode_info(inode.clone());
let source = PathBuf::from("/");
let target = Node::generate_target(&path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: rs.meta.explicit_uidgid(),
src_ino: inode.ino(),
src_dev,
rdev,
path,
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
Ok(Node {
info: Arc::new(info),
index: 0,
layer_idx: 0,
overlay: Overlay::Lower,
inode,
chunks,
v6_offset: 0,
v6_dirents: Vec::new(),
v6_datalayout: 0,
v6_compact_inode: false,
v6_dirents_offset: 0,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::RAFS_DEFAULT_CHUNK_SIZE;
use vmm_sys_util::tempdir::TempDir;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_set_lock_node() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
assert_eq!(tree.name, tmpfile.as_path().file_name().unwrap().as_bytes());
let node1 = tree.borrow_mut_node();
drop(node1);
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
tree.set_node(node);
let node2 = tree.borrow_mut_node();
assert_eq!(node2.name(), tmpfile.as_path().file_name().unwrap());
}
#[test]
fn test_walk_tree() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
let tmpfile2 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree2 = Tree::new(node);
tree.insert_child(tree2);
let tmpfile3 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile3.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree3 = Tree::new(node);
tree.insert_child(tree3);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 3);
let mut count = 0;
tree.walk_bfs(false, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 2);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
bail!("test")
})
.unwrap_err();
assert_eq!(count, 1);
let idx = tree
.get_child_idx(tmpfile2.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
let idx = tree
.get_child_idx(tmpfile3.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
}
}
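A lookup sketch against the `Tree` API above, assuming a tree whose root target is "/" (the path is illustrative):
use std::path::Path;

// Resolve an absolute target path to its subtree, then inspect the node.
if let Some(subtree) = tree.get_node(Path::new("/etc/passwd")) {
    let node = subtree.borrow_mut_node();
    println!("overlay state of {:?}: {}", node.name(), node.overlay);
}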

266 builder/src/core/v5.rs Normal file

@ -0,0 +1,266 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::convert::TryFrom;
use std::mem::size_of;
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::v5::{
RafsV5BlobTable, RafsV5ChunkInfo, RafsV5InodeTable, RafsV5InodeWrapper, RafsV5SuperBlock,
RafsV5XAttrsTable,
};
use nydus_rafs::metadata::{RafsStore, RafsVersion};
use nydus_rafs::RafsIoWrite;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{div_round_up, root_tracer, timing_tracer, try_round_up_4k};
use super::node::Node;
use crate::{Bootstrap, BootstrapContext, BuildContext, Tree};
// Filesystems may use different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable builds, instead of reusing the
// value provided by the source filesystem, we use our own algorithm to calculate `i_size`
// for directory entries, giving a stable `i_size`.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we
// don't have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5
// directory inodes.
const RAFS_V5_VIRTUAL_ENTRY_SIZE: u64 = 8;
impl Node {
/// Dump RAFS v5 inode metadata to meta blob.
pub fn dump_bootstrap_v5(
&self,
ctx: &mut BuildContext,
f_bootstrap: &mut dyn RafsIoWrite,
) -> Result<()> {
trace!("[{}]\t{}", self.overlay, self);
if let InodeWrapper::V5(raw_inode) = &self.inode {
// Dump inode info
let name = self.name();
let inode = RafsV5InodeWrapper {
name,
symlink: self.info.symlink.as_deref(),
inode: raw_inode,
};
inode
.store(f_bootstrap)
.context("failed to dump inode to bootstrap")?;
// Dump inode xattr
if !self.info.xattrs.is_empty() {
self.info
.xattrs
.store_v5(f_bootstrap)
.context("failed to dump xattr to bootstrap")?;
ctx.has_xattr = true;
}
// Dump chunk info
if self.is_reg() && self.inode.child_count() as usize != self.chunks.len() {
bail!("invalid chunk count {}: {}", self.chunks.len(), self);
}
for chunk in &self.chunks {
chunk
.inner
.store(f_bootstrap)
.context("failed to dump chunk info to bootstrap")?;
trace!("\t\tchunk: {} compressor {}", chunk, ctx.compressor,);
}
Ok(())
} else {
bail!("dump_bootstrap_v5() encounters non-v5-inode");
}
}
// Filesystems may use different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable builds, instead of reusing the
// value provided by the source filesystem, we use our own algorithm to calculate `i_size`
// for directory entries, giving a stable `i_size`.
pub fn v5_set_dir_size(&mut self, fs_version: RafsVersion, children: &[Tree]) {
if !self.is_dir() || !fs_version.is_v5() {
return;
}
let mut d_size = 0u64;
for child in children.iter() {
d_size += child.borrow_mut_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
}
if d_size == 0 {
self.inode.set_size(4096);
} else {
// Safe to unwrap() because we have u32 for child count.
self.inode.set_size(try_round_up_4k(d_size).unwrap());
}
self.v5_set_inode_blocks();
}
/// Calculate and set `i_blocks` for inode.
///
/// In order to support repeatable builds, we can't reuse `i_blocks` from source filesystems,
/// so let's calculate it ourselves for a stable `i_blocks`.
///
/// Normal filesystems include the space occupied by xattrs in the directory size,
/// so let's follow that behavior.
pub fn v5_set_inode_blocks(&mut self) {
// Set inode blocks for RAFS v5 inode, v6 will calculate it at runtime.
if let InodeWrapper::V5(_) = self.inode {
self.inode.set_blocks(div_round_up(
self.inode.size() + self.info.xattrs.aligned_size_v5() as u64,
512,
));
}
}
}
impl Bootstrap {
/// Calculate inode digest for directory.
fn v5_digest_node(&self, ctx: &mut BuildContext, tree: &Tree) {
let mut node = tree.borrow_mut_node();
// We have set digest for non-directory inode in the previous dump_blob workflow.
if node.is_dir() {
let mut inode_hasher = RafsDigest::hasher(ctx.digester);
for child in tree.children.iter() {
let child = child.borrow_mut_node();
inode_hasher.digest_update(child.inode.digest().as_ref());
}
node.inode.set_digest(inode_hasher.digest_finalize());
}
}
/// Dump the RAFS v5 bootstrap: superblock, inode table, prefetch table, blob tables, inodes and chunks.
pub(crate) fn v5_dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsV5BlobTable,
) -> Result<()> {
// Set inode digest, use reverse iteration order to reduce repeated digest calculations.
self.tree.walk_dfs_post(&mut |t| {
self.v5_digest_node(ctx, t);
Ok(())
})?;
// Set inode table
let super_block_size = size_of::<RafsV5SuperBlock>();
let inode_table_entries = bootstrap_ctx.get_next_ino() as u32 - 1;
let mut inode_table = RafsV5InodeTable::new(inode_table_entries as usize);
let inode_table_size = inode_table.size();
// Set prefetch table
let (prefetch_table_size, prefetch_table_entries) =
if let Some(prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
(prefetch_table.size(), prefetch_table.len() as u32)
} else {
(0, 0u32)
};
// Set blob table, use sha256 string (length 64) as blob id if not specified
let prefetch_table_offset = super_block_size + inode_table_size;
let blob_table_offset = prefetch_table_offset + prefetch_table_size;
let blob_table_size = blob_table.size();
let extended_blob_table_offset = blob_table_offset + blob_table_size;
let extended_blob_table_size = blob_table.extended.size();
let extended_blob_table_entries = blob_table.extended.entries();
// Set super block
let mut super_block = RafsV5SuperBlock::new();
let inodes_count = bootstrap_ctx.inode_map.len() as u64;
super_block.set_inodes_count(inodes_count);
super_block.set_inode_table_offset(super_block_size as u64);
super_block.set_inode_table_entries(inode_table_entries);
super_block.set_blob_table_offset(blob_table_offset as u64);
super_block.set_blob_table_size(blob_table_size as u32);
super_block.set_extended_blob_table_offset(extended_blob_table_offset as u64);
super_block.set_extended_blob_table_entries(u32::try_from(extended_blob_table_entries)?);
super_block.set_prefetch_table_offset(prefetch_table_offset as u64);
super_block.set_prefetch_table_entries(prefetch_table_entries);
super_block.set_compressor(ctx.compressor);
super_block.set_digester(ctx.digester);
super_block.set_chunk_size(ctx.chunk_size);
if ctx.explicit_uidgid {
super_block.set_explicit_uidgid();
}
// Set inodes and chunks
let mut inode_offset = (super_block_size
+ inode_table_size
+ prefetch_table_size
+ blob_table_size
+ extended_blob_table_size) as u32;
let mut has_xattr = false;
self.tree.walk_dfs_pre(&mut |t| {
let node = t.borrow_mut_node();
inode_table.set(node.index, inode_offset)?;
// Add inode size
inode_offset += node.inode.inode_size() as u32;
if node.inode.has_xattr() {
has_xattr = true;
if !node.info.xattrs.is_empty() {
inode_offset += (size_of::<RafsV5XAttrsTable>()
+ node.info.xattrs.aligned_size_v5())
as u32;
}
}
// Add chunks size
if node.is_reg() {
inode_offset += node.inode.child_count() * size_of::<RafsV5ChunkInfo>() as u32;
}
Ok(())
})?;
if has_xattr {
super_block.set_has_xattr();
}
// Dump super block
super_block
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store superblock")?;
// Dump inode table
inode_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store inode table")?;
// Dump prefetch table
if let Some(mut prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
prefetch_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store prefetch table")?;
}
// Dump blob table
blob_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store blob table")?;
// Dump extended blob table
blob_table
.store_extended(bootstrap_ctx.writer.as_mut())
.context("failed to store extended blob table")?;
// Dump inodes and chunks
timing_tracer!(
{
self.tree.walk_dfs_pre(&mut |t| {
t.borrow_mut_node()
.dump_bootstrap_v5(ctx, bootstrap_ctx.writer.as_mut())
.context("failed to dump bootstrap")
})
},
"dump_bootstrap"
)?;
Ok(())
}
}
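A worked sketch of the pseudo directory size rule implemented by `v5_set_dir_size`; the child names are illustrative:
// Each child contributes name_size + RAFS_V5_VIRTUAL_ENTRY_SIZE (8) bytes,
// and the sum is rounded up to a 4KiB multiple:
//   children "bin" (3) and "libexec" (7)
//   d_size = (3 + 8) + (7 + 8) = 26
//   try_round_up_4k(26)        = 4096  -> i_size
// An empty directory gets i_size = 4096 as well, via the explicit branch.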

1072 builder/src/core/v6.rs Normal file

File diff suppressed because it is too large

267 builder/src/directory.rs Normal file

@ -0,0 +1,267 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fs;
use std::fs::DirEntry;
use anyhow::{anyhow, Context, Result};
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};
use crate::core::context::{Artifact, NoopArtifactWriter};
use crate::core::prefetch;
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
};
use super::core::node::Node;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
struct FilesystemTreeBuilder {}
impl FilesystemTreeBuilder {
fn new() -> Self {
Self {}
}
#[allow(clippy::only_used_in_recursion)]
/// Walk directory to build node tree by DFS
fn load_children(
&self,
ctx: &mut BuildContext,
parent: &TreeNode,
layer_idx: u16,
) -> Result<(Vec<Tree>, Vec<Tree>)> {
let mut trees = Vec::new();
let mut external_trees = Vec::new();
let parent = parent.borrow();
if !parent.is_dir() {
return Ok((trees.clone(), external_trees));
}
let children = fs::read_dir(parent.path())
.with_context(|| format!("failed to read dir {:?}", parent.path()))?;
let children = children.collect::<Result<Vec<DirEntry>, std::io::Error>>()?;
event_tracer!("load_from_directory", +children.len());
for child in children {
let path = child.path();
let target = Node::generate_target(&path, &ctx.source_path);
let mut file_size: u64 = 0;
if ctx.attributes.is_external(&target) {
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
file_size = value.parse::<u64>().ok().ok_or_else(|| {
anyhow!(
"failed to parse file_size for external file {}",
&target.display()
)
})?;
}
}
let mut child = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
file_size,
parent.info.explicit_uidgid,
true,
)
.with_context(|| format!("failed to create node {:?}", path))?;
child.layer_idx = layer_idx;
// As per the OCI spec, whiteout files should not be present in the final image
// or filesystem; they only exist in layers.
if layer_idx == 0
&& child.whiteout_type(ctx.whiteout_spec).is_some()
&& !child.is_overlayfs_opaque(ctx.whiteout_spec)
{
continue;
}
let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
let (child_children, external_children) =
self.load_children(ctx, &child.node, layer_idx)?;
child.children = child_children;
external_child.children = external_children;
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
external_child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_child.children);
if ctx.attributes.is_external(&target) {
external_trees.push(external_child);
} else {
// TODO: implement type=ignore for nydus attributes;
// ignore the tree as a workaround for now.
trees.push(child.clone());
if ctx.attributes.is_prefix_external(target) {
external_trees.push(external_child);
}
};
}
trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
Ok((trees, external_trees))
}
}
#[derive(Default)]
pub struct DirectoryBuilder {}
impl DirectoryBuilder {
pub fn new() -> Self {
Self {}
}
/// Build node tree from a filesystem directory
fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
let node = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
ctx.source_path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
0,
ctx.explicit_uidgid,
true,
)?;
let mut tree = Tree::new(node.clone());
let mut external_tree = Tree::new(node);
let tree_builder = FilesystemTreeBuilder::new();
let (tree_children, external_tree_children) = timing_tracer!(
{ tree_builder.load_children(ctx, &tree.node, layer_idx) },
"load_from_directory"
)?;
tree.children = tree_children;
external_tree.children = external_tree_children;
tree.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &tree.children);
external_tree
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_tree.children);
Ok((tree, external_tree))
}
fn one_build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
blob_writer: &mut Box<dyn Artifact>,
tree: Tree,
) -> Result<BuildOutput> {
// Build bootstrap
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
impl Builder for DirectoryBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
// Scan source directory to build upper layer tree.
let (tree, external_tree) =
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build for tree
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
// Build for external tree
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
stor.add_suffix("external")
}
let mut external_blob_writer: Box<dyn Artifact> =
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let external_output = self.one_build(
ctx,
&mut external_bootstrap_mgr,
&mut external_blob_mgr,
&mut external_blob_writer,
external_tree,
)?;
output.external_bootstrap_path = external_output.bootstrap_path;
output.external_blobs = external_output.blobs;
Ok(output)
}
}
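A hypothetical driver for the builder above; constructing `BuildContext`, `BootstrapManager`, and `BlobManager` is outside this diff, so the sketch takes them as parameters:
// Hypothetical driver sketch, assuming the caller has prepared the contexts.
fn build_from_directory(
    ctx: &mut BuildContext,
    bootstrap_mgr: &mut BootstrapManager,
    blob_mgr: &mut BlobManager,
) -> anyhow::Result<BuildOutput> {
    let mut builder = DirectoryBuilder::new();
    // Scans ctx.source_path, dumps the data blob(s) and bootstrap; external
    // files (per ctx.attributes) are emitted by a second internal build pass.
    builder.build(ctx, bootstrap_mgr, blob_mgr)
}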

411 builder/src/lib.rs Normal file

@ -0,0 +1,411 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Builder to create RAFS filesystems from directories and tarballs.
#[macro_use]
extern crate log;
use crate::core::context::Artifact;
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::{Inode, RafsVersion};
use nydus_storage::meta::toc;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{compress, digest, root_tracer, timing_tracer};
use sha2::Digest;
use self::core::node::{Node, NodeInfo};
pub use self::chunkdict_generator::ChunkdictBlobInfo;
pub use self::chunkdict_generator::ChunkdictChunkInfo;
pub use self::chunkdict_generator::Generator;
pub use self::compact::BlobCompactor;
pub use self::compact::Config as CompactConfig;
pub use self::core::bootstrap::Bootstrap;
pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict};
pub use self::core::context::{
ArtifactStorage, ArtifactWriter, BlobCacheGenerator, BlobContext, BlobManager,
BootstrapContext, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
pub use self::core::feature::{Feature, Features};
pub use self::core::node::{ChunkSource, NodeChunk};
pub use self::core::overlay::{Overlay, WhiteoutSpec};
pub use self::core::prefetch::{Prefetch, PrefetchPolicy};
pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode};
pub use self::directory::DirectoryBuilder;
pub use self::merge::Merger;
pub use self::optimize_prefetch::update_ctx_from_bootstrap;
pub use self::optimize_prefetch::OptimizePrefetch;
pub use self::stargz::StargzBuilder;
pub use self::tarball::TarballBuilder;
pub mod attributes;
mod chunkdict_generator;
mod compact;
mod core;
mod directory;
mod merge;
mod optimize_prefetch;
mod stargz;
mod tarball;
/// Trait to generate a RAFS filesystem from the source.
pub trait Builder {
fn build(
&mut self,
build_ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput>;
}
fn build_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
blob_mgr: &mut BlobManager,
mut tree: Tree,
) -> Result<Bootstrap> {
// For multi-layer build, merge the upper layer and lower layer with overlay whiteout applied.
if bootstrap_ctx.layered {
let mut parent = Bootstrap::load_parent_bootstrap(ctx, bootstrap_mgr, blob_mgr)?;
timing_tracer!({ parent.merge_overaly(ctx, tree) }, "merge_bootstrap")?;
tree = parent;
}
let mut bootstrap = Bootstrap::new(tree)?;
timing_tracer!({ bootstrap.build(ctx, bootstrap_ctx) }, "build_bootstrap")?;
Ok(bootstrap)
}
fn dump_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
bootstrap: &mut Bootstrap,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Make sure blob id is updated according to blob hash if not specified by user.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.blob_id.is_empty() {
// `Blob::dump()` should have set `blob_ctx.blob_id` to the referenced OCI tarball
// for ref-type conversions.
assert!(!ctx.conversion_type.is_to_ref());
if ctx.blob_inline_meta {
// Set special blob id for blob with inlined meta.
blob_ctx.blob_id = "x".repeat(64);
} else {
blob_ctx.blob_id = format!("{:x}", blob_ctx.blob_hash.clone().finalize());
}
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
}
// Dump bootstrap file
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, bootstrap_ctx, &blob_table)?;
// Dump RAFS meta to data blob if inline meta is enabled.
if ctx.blob_inline_meta {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
// Ensure the blob object is created in case of no chunks generated for the blob.
let blob_ctx = if blob_mgr.external {
&mut blob_mgr.new_blob_ctx(ctx)?
} else {
let (_, blob_ctx) = blob_mgr
.get_or_create_current_blob(ctx)
.map_err(|_e| anyhow!("failed to get current blob object"))?;
blob_ctx
};
let bootstrap_offset = blob_writer.pos()?;
let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
let uncompressed_size = uncompressed_bootstrap.len();
let uncompressed_digest =
RafsDigest::from_buf(&uncompressed_bootstrap, digest::Algorithm::Sha256);
// Output uncompressed data for backward compatibility and compressed data for new format.
let (bootstrap_data, compressor) = if ctx.features.is_enabled(Feature::BlobToc) {
let mut compressor = compress::Algorithm::Zstd;
let (compressed_data, compressed) =
compress::compress(&uncompressed_bootstrap, compressor)
.with_context(|| "failed to compress bootstrap".to_string())?;
blob_ctx.write_data(blob_writer, &compressed_data)?;
if !compressed {
compressor = compress::Algorithm::None;
}
(compressed_data, compressor)
} else {
blob_ctx.write_data(blob_writer, &uncompressed_bootstrap)?;
(uncompressed_bootstrap, compress::Algorithm::None)
};
let compressed_size = bootstrap_data.len();
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BOOTSTRAP,
compressed_size as u64,
)?;
if ctx.features.is_enabled(Feature::BlobToc) {
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BOOTSTRAP,
compressor,
uncompressed_digest,
bootstrap_offset,
compressed_size as u64,
uncompressed_size as u64,
)?;
}
}
Ok(())
}
fn dump_toc(
ctx: &mut BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if ctx.features.is_enabled(Feature::BlobToc) {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
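// The TOC digest covers the TOC content bytes followed by the TOC tar header.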
let data = blob_ctx.entry_list.as_bytes().to_vec();
let toc_size = data.len() as u64;
blob_ctx.write_data(blob_writer, &data)?;
hasher.digest_update(&data);
let header = blob_ctx.write_tar_header(blob_writer, toc::TOC_ENTRY_BLOB_TOC, toc_size)?;
hasher.digest_update(header.as_bytes());
blob_ctx.blob_toc_digest = hasher.digest_finalize().data;
blob_ctx.blob_toc_size = toc_size as u32 + header.as_bytes().len() as u32;
}
Ok(())
}
fn finalize_blob(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let is_tarfs = ctx.conversion_type == ConversionType::TarToTarfs;
if !is_tarfs {
dump_toc(ctx, blob_ctx, blob_writer)?;
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
if ctx.blob_inline_meta && blob_ctx.blob_id == "x".repeat(64) {
blob_ctx.blob_id = String::new();
}
let hash = blob_ctx.blob_hash.clone().finalize();
let blob_meta_id = if ctx.blob_id.is_empty() {
format!("{:x}", hash)
} else {
assert!(!ctx.conversion_type.is_to_ref() || is_tarfs);
ctx.blob_id.clone()
};
if ctx.conversion_type.is_to_ref() {
if blob_ctx.blob_id.is_empty() {
// Use `sha256(tarball)` as `blob_id`. A tarball without files will fall through
// this path because `Blob::dump()` hasn't generated `blob_ctx.blob_id`.
if let Some(zran) = &ctx.blob_zran_generator {
let reader = zran.lock().unwrap().reader();
blob_ctx.compressed_blob_size = reader.get_data_size();
if blob_ctx.blob_id.is_empty() {
let hash = reader.get_data_digest();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
}
}
// Tarfs mode only has the tar stream and meta blob; there's no data blob.
if !ctx.blob_inline_meta && !is_tarfs {
blob_ctx.blob_meta_digest = hash.into();
blob_ctx.blob_meta_size = blob_writer.pos()?;
}
} else if blob_ctx.blob_id.is_empty() {
// `blob_ctx.blob_id` should be RAFS blob id.
blob_ctx.blob_id = blob_meta_id.clone();
}
// Tarfs mode directly uses the tar file as the RAFS data blob, so there is no need to
// generate a data blob file.
if !is_tarfs {
blob_writer.finalize(Some(blob_meta_id))?;
}
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.finalize(&blob_ctx.blob_id)?;
}
}
Ok(())
}
/// Helper for TarballBuilder/StargzBuilder to build the filesystem tree.
pub struct TarBuilder {
pub explicit_uidgid: bool,
pub layer_idx: u16,
pub version: RafsVersion,
next_ino: Inode,
}
impl TarBuilder {
/// Create a new instance of [TarBuilder].
pub fn new(explicit_uidgid: bool, layer_idx: u16, version: RafsVersion) -> Self {
TarBuilder {
explicit_uidgid,
layer_idx,
next_ino: 0,
version,
}
}
/// Allocate an inode number.
pub fn next_ino(&mut self) -> Inode {
self.next_ino += 1;
self.next_ino
}
/// Insert a node into the tree, creating any missing intermediate directories.
pub fn insert_into_tree(&mut self, tree: &mut Tree, node: Node) -> Result<()> {
let target_paths = node.target_vec();
let target_paths_len = target_paths.len();
if target_paths_len == 1 {
// Handle root node modification
assert_eq!(node.path(), Path::new("/"));
tree.set_node(node);
} else {
let mut tmp_tree = tree;
for idx in 1..target_paths.len() {
match tmp_tree.get_child_idx(target_paths[idx].as_bytes()) {
Some(i) => {
if idx == target_paths_len - 1 {
tmp_tree.children[i].set_node(node);
break;
} else {
tmp_tree = &mut tmp_tree.children[i];
}
}
None => {
if idx == target_paths_len - 1 {
tmp_tree.insert_child(Tree::new(node));
break;
} else {
let node = self.create_directory(&target_paths[..=idx])?;
tmp_tree.insert_child(Tree::new(node));
let last_idx = tmp_tree.children.len() - 1;
tmp_tree = &mut tmp_tree.children[last_idx];
}
}
}
}
}
Ok(())
}
/// Create a new node for a directory.
pub fn create_directory(&mut self, target_paths: &[OsString]) -> Result<Node> {
let ino = self.next_ino();
let name = &target_paths[target_paths.len() - 1];
let mut inode = InodeWrapper::new(self.version);
inode.set_ino(ino);
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_nlink(2);
inode.set_name_size(name.len());
inode.set_rdev(u32::MAX);
let source = PathBuf::from("/");
let target_vec = target_paths.to_vec();
let mut target = PathBuf::new();
for name in target_paths.iter() {
target = target.join(name);
}
let info = NodeInfo {
explicit_uidgid: self.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: u64::MAX,
path: target.clone(),
source,
target,
target_vec,
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: false,
};
Ok(Node::new(inode, info, self.layer_idx))
}
/// Check whether the path is an eStargz special file.
pub fn is_stargz_special_files(&self, path: &Path) -> bool {
path == Path::new("/stargz.index.json")
|| path == Path::new("/.prefetch.landmark")
|| path == Path::new("/.no.prefetch.landmark")
}
}
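// A hedged sketch (not in the original source) of how the tarball/stargz builders
// use `TarBuilder` per tar entry; the paths below are illustrative.
#[allow(dead_code)]
fn example_tar_builder_tree() -> Result<Tree> {
    let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
    // The root node comes first; tar entries may later overwrite it.
    let root = builder.create_directory(&[OsString::from("/")])?;
    let mut tree = Tree::new(root);
    // Inserting a deep path creates the missing intermediate directories.
    let node = builder.create_directory(&[
        OsString::from("/"),
        OsString::from("usr"),
        OsString::from("bin"),
    ])?;
    builder.insert_into_tree(&mut tree, node)?;
    Ok(tree)
}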
#[cfg(test)]
mod tests {
use vmm_sys_util::tempdir::TempDir;
use super::*;
#[test]
fn test_tar_builder_is_stargz_special_files() {
let builder = TarBuilder::new(true, 0, RafsVersion::V6);
let path = Path::new("/stargz.index.json");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.no.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/no.prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/tar.index.json");
assert!(!builder.is_stargz_special_files(&path));
}
#[test]
fn test_tar_builder_create_directory() {
let tmp_dir = TempDir::new().unwrap();
let target_paths = [OsString::from(tmp_dir.as_path())];
let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
let node = builder.create_directory(&target_paths);
assert!(node.is_ok());
let node = node.unwrap();
println!("Node: {}", node);
assert_eq!(node.file_type(), "dir");
assert_eq!(node.target(), tmp_dir.as_path());
assert_eq!(builder.next_ino, 1);
assert_eq!(builder.next_ino(), 2);
}
}

440
builder/src/merge.rs Normal file

@ -0,0 +1,440 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{anyhow, bail, ensure, Context, Result};
use hex::FromHex;
use nydus_api::ConfigV2;
use nydus_rafs::metadata::{RafsSuper, RafsVersion};
use nydus_storage::device::{BlobFeatures, BlobInfo};
use nydus_utils::crypt;
use super::{
ArtifactStorage, BlobContext, BlobManager, Bootstrap, BootstrapContext, BuildContext,
BuildOutput, ChunkSource, ConversionType, Overlay, Tree,
};
/// Struct to generate the merged RAFS bootstrap for an image from per layer RAFS bootstraps.
///
/// A container image contains one or more layers, and a RAFS bootstrap is built for each layer.
/// Those per layer bootstraps could be mounted by overlayfs to form the container rootfs.
/// To improve performance by avoiding overlayfs, an image level bootstrap is generated by
/// merging the per layer bootstraps with the overlayfs whiteout rules applied.
pub struct Merger {}
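// Illustrative call shape (a sketch, not from the original source); the unit test
// at the end of this file shows a concrete invocation with digests and sizes:
//
//     Merger::merge(
//         &mut ctx,
//         None,                          // no parent bootstrap
//         vec![lower_boot, upper_boot],  // per layer bootstraps, lower to upper
//         None, None, None, None, None,  // optional blob digests/ids/sizes/TOC info
//         target,                        // where the merged bootstrap is written
//         None,                          // optional chunk dict bootstrap
//         config,
//     )?;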
impl Merger {
fn get_string_from_list(
original_ids: &Option<Vec<String>>,
idx: usize,
) -> Result<Option<String>> {
Ok(if let Some(id) = &original_ids {
let id_string = id
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(id_string.clone())
} else {
None
})
}
fn get_digest_from_list(digests: &Option<Vec<String>>, idx: usize) -> Result<Option<[u8; 32]>> {
Ok(if let Some(digests) = &digests {
let digest = digests
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(<[u8; 32]>::from_hex(digest)?)
} else {
None
})
}
fn get_size_from_list(sizes: &Option<Vec<u64>>, idx: usize) -> Result<Option<u64>> {
Ok(if let Some(sizes) = &sizes {
let size = sizes
.get(idx)
.ok_or_else(|| anyhow!("unmatched size index {}", idx))?;
Some(*size)
} else {
None
})
}
/// Overlay multiple RAFS filesystems into a merged RAFS filesystem.
///
/// # Arguments
/// - sources: contains one or more per layer bootstraps, ordered from lower to higher.
/// - chunk_dict: contains the chunk dictionary used to build the per layer bootstraps, or None.
#[allow(clippy::too_many_arguments)]
pub fn merge(
ctx: &mut BuildContext,
parent_bootstrap_path: Option<String>,
sources: Vec<PathBuf>,
blob_digests: Option<Vec<String>>,
original_blob_ids: Option<Vec<String>>,
blob_sizes: Option<Vec<u64>>,
blob_toc_digests: Option<Vec<String>>,
blob_toc_sizes: Option<Vec<u64>>,
target: ArtifactStorage,
chunk_dict: Option<PathBuf>,
config_v2: Arc<ConfigV2>,
) -> Result<BuildOutput> {
if sources.is_empty() {
bail!("source bootstrap list is empty , at least one bootstrap is required");
}
if let Some(digests) = blob_digests.as_ref() {
ensure!(
digests.len() == sources.len(),
"number of blob digest entries {} doesn't match number of sources {}",
digests.len(),
sources.len(),
);
}
if let Some(original_ids) = original_blob_ids.as_ref() {
ensure!(
original_ids.len() == sources.len(),
"number of original blob id entries {} doesn't match number of sources {}",
original_ids.len(),
sources.len(),
);
}
if let Some(sizes) = blob_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of blob size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
if let Some(toc_digests) = blob_toc_digests.as_ref() {
ensure!(
toc_digests.len() == sources.len(),
"number of toc digest entries {} doesn't match number of sources {}",
toc_digests.len(),
sources.len(),
);
}
if let Some(sizes) = blob_toc_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of toc size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
let mut tree: Option<Tree> = None;
let mut blob_mgr = BlobManager::new(ctx.digester, false);
let mut blob_idx_map = HashMap::new();
let mut parent_layers = 0;
// Load parent bootstrap
if let Some(parent_bootstrap_path) = &parent_bootstrap_path {
let (rs, _) =
RafsSuper::load_from_file(parent_bootstrap_path, config_v2.clone(), false)
.context(format!("load parent bootstrap {:?}", parent_bootstrap_path))?;
let blobs = rs.superblock.get_blob_infos();
for blob in &blobs {
let blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
blob_idx_map.insert(blob_ctx.blob_id.clone(), blob_mgr.len());
blob_mgr.add_blob(blob_ctx);
}
parent_layers = blobs.len();
tree = Some(Tree::from_bootstrap(&rs, &mut ())?);
}
// Get the blobs coming from the chunk dictionary.
let mut chunk_dict_blobs = HashSet::new();
let mut config = None;
if let Some(chunk_dict_path) = &chunk_dict {
let (rs, _) = RafsSuper::load_from_file(chunk_dict_path, config_v2.clone(), false)
.context(format!("load chunk dict bootstrap {:?}", chunk_dict_path))?;
config = Some(rs.meta.get_config());
for blob in rs.superblock.get_blob_infos() {
chunk_dict_blobs.insert(blob.blob_id().to_string());
}
}
let mut fs_version = RafsVersion::V6;
let mut chunk_size = None;
for (layer_idx, bootstrap_path) in sources.iter().enumerate() {
let (rs, _) = RafsSuper::load_from_file(bootstrap_path, config_v2.clone(), false)
.context(format!("load bootstrap {:?}", bootstrap_path))?;
config
.get_or_insert_with(|| rs.meta.get_config())
.check_compatibility(&rs.meta)?;
fs_version = RafsVersion::try_from(rs.meta.version)
.context("failed to get RAFS version number")?;
ctx.compressor = rs.meta.get_compressor();
ctx.digester = rs.meta.get_digester();
// If any RAFS filesystems are encrypted, the merged bootstrap will be marked as encrypted.
match rs.meta.get_cipher() {
crypt::Algorithm::None => (),
crypt::Algorithm::Aes128Xts => ctx.cipher = crypt::Algorithm::Aes128Xts,
_ => bail!("invalid per layer bootstrap, only supports aes-128-xts"),
}
ctx.explicit_uidgid = rs.meta.explicit_uidgid();
if config.as_ref().unwrap().is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToTarfs;
ctx.blob_features |= BlobFeatures::TARFS;
}
let mut parent_blob_added = false;
let blobs = &rs.superblock.get_blob_infos();
for blob in blobs {
let mut blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
if let Some(chunk_size) = chunk_size {
ensure!(
chunk_size == blob_ctx.chunk_size,
"can not merge bootstraps with inconsistent chunk size, current bootstrap {:?} with chunk size {:x}, expected {:x}",
bootstrap_path,
blob_ctx.chunk_size,
chunk_size,
);
} else {
chunk_size = Some(blob_ctx.chunk_size);
}
if !chunk_dict_blobs.contains(&blob.blob_id()) {
// It is assumed that the per layer `nydus-image create` and the `nydus-image merge` commands
// use the same chunk dict bootstrap. So a per layer bootstrap may reference multiple blobs,
// but at most one new blob; the other blobs should come from the chunk dict image.
if parent_blob_added {
bail!("invalid per layer bootstrap, having multiple associated data blobs");
}
parent_blob_added = true;
if ctx.configuration.internal.blob_accessible()
|| ctx.conversion_type == ConversionType::TarToTarfs
{
// `blob.blob_id()` should have been fixed when loading the bootstrap.
blob_ctx.blob_id = blob.blob_id();
} else {
// The blob id (blob sha256 hash) in the parent bootstrap is invalid for the nydusd
// runtime; change it to the hash of the whole tar blob.
if let Some(original_id) =
Self::get_string_from_list(&original_blob_ids, layer_idx)?
{
blob_ctx.blob_id = original_id;
} else {
blob_ctx.blob_id =
BlobInfo::get_blob_id_from_meta_path(bootstrap_path)?;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_digests, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
blob_ctx.blob_meta_digest = digest;
} else {
blob_ctx.blob_id = hex::encode(digest);
}
}
if let Some(size) = Self::get_size_from_list(&blob_sizes, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
blob_ctx.blob_meta_size = size;
} else {
blob_ctx.compressed_blob_size = size;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_toc_digests, layer_idx)?
{
blob_ctx.blob_toc_digest = digest;
}
if let Some(size) = Self::get_size_from_list(&blob_toc_sizes, layer_idx)? {
blob_ctx.blob_toc_size = size as u32;
}
}
if let Entry::Vacant(e) = blob_idx_map.entry(blob.blob_id()) {
e.insert(blob_mgr.len());
blob_mgr.add_blob(blob_ctx);
}
}
let upper = Tree::from_bootstrap(&rs, &mut ())?;
upper.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = blobs[origin_blob_index].as_ref();
if let Some(blob_index) = blob_idx_map.get(&blob_ctx.blob_id()) {
// Set the blob index of chunk to real index in blob table of final bootstrap.
chunk.set_blob_index(*blob_index as u32);
}
}
// Set node's layer index to distinguish same inode number (from bootstrap)
// between different layers.
let idx = u16::try_from(layer_idx).context(format!(
"too many layers {}, limited to {}",
layer_idx,
u16::MAX
))?;
if parent_layers + idx as usize > u16::MAX as usize {
bail!("too many layers {}, limited to {}", layer_idx, u16::MAX);
}
node.layer_idx = idx + parent_layers as u16;
node.overlay = Overlay::UpperAddition;
Ok(())
})?;
if let Some(tree) = &mut tree {
tree.merge_overaly(ctx, upper)?;
} else {
tree = Some(upper);
}
}
if ctx.conversion_type == ConversionType::TarToTarfs {
if parent_layers > 0 {
bail!("merging RAFS in TARFS mode conflicts with `--parent-bootstrap`");
}
if !chunk_dict_blobs.is_empty() {
bail!("merging RAFS in TARFS mode conflicts with `--chunk-dict`");
}
}
// Safe to unwrap because there is at least one source bootstrap.
let tree = tree.unwrap();
ctx.fs_version = fs_version;
if let Some(chunk_size) = chunk_size {
ctx.chunk_size = chunk_size;
}
// After merging all trees, we need to re-calculate the blob index of
// referenced blobs, as the upper tree might have deleted some files
// or directories via opaques, leaving some blobs unreferenced.
let mut used_blobs = HashMap::new(); // HashMap<blob_id, new_blob_index>
let mut used_blob_mgr = BlobManager::new(ctx.digester, false);
let origin_blobs = blob_mgr.get_blobs();
tree.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = origin_blobs[origin_blob_index].clone();
let origin_blob_id = blob_ctx.blob_id();
let new_blob_index = if let Some(new_blob_index) = used_blobs.get(&origin_blob_id) {
*new_blob_index
} else {
let new_blob_index = used_blob_mgr.len();
used_blobs.insert(origin_blob_id, new_blob_index);
used_blob_mgr.add_blob(blob_ctx);
new_blob_index
};
chunk.set_blob_index(new_blob_index as u32);
}
Ok(())
})?;
let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = used_blob_mgr.to_blob_table(ctx)?;
let mut bootstrap_storage = Some(target.clone());
bootstrap
.dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table)
.context(format!("dump bootstrap to {:?}", target.display()))?;
BuildOutput::new(&used_blob_mgr, None, &bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use nydus_utils::digest;
use vmm_sys_util::tempfile::TempFile;
use super::*;
#[test]
fn test_merger_get_string_from_list() {
let res = Merger::get_string_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "string2".to_owned()];
let original_ids = Some(original_ids);
let res = Merger::get_string_from_list(&original_ids, 0);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some("string1".to_owned()));
assert!(Merger::get_string_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_digest_from_list() {
let res = Merger::get_digest_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "12ab".repeat(16)];
let original_ids = Some(original_ids);
let res = Merger::get_digest_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(
res.unwrap(),
Some([
18u8, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171,
18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171
])
);
assert!(Merger::get_digest_from_list(&original_ids, 0).is_err());
assert!(Merger::get_digest_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_size_from_list() {
let res = Merger::get_size_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec![1u64, 2, 3, 4];
let original_ids = Some(original_ids);
let res = Merger::get_size_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some(2u64));
assert!(Merger::get_size_from_list(&original_ids, 4).is_err());
}
#[test]
fn test_merger_merge() {
let mut ctx = BuildContext::default();
ctx.configuration.internal.set_blob_accessible(false);
ctx.digester = digest::Algorithm::Sha256;
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path1 = PathBuf::from(root_dir);
source_path1.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let mut source_path2 = PathBuf::from(root_dir);
source_path2.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let tmp_file = TempFile::new().unwrap();
let target = ArtifactStorage::SingleFile(tmp_file.as_path().to_path_buf());
let blob_toc_digests = Some(vec![
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855".to_owned(),
"4cf0c409788fc1c149afbf4c81276b92427ae41e46412334ca495991b8526650".to_owned(),
]);
let build_output = Merger::merge(
&mut ctx,
None,
vec![source_path1, source_path2],
Some(vec!["a70f".repeat(16), "9bd3".repeat(16)]),
Some(vec!["blob_id".to_owned(), "blob_id2".to_owned()]),
Some(vec![16u64, 32u64]),
blob_toc_digests,
Some(vec![64u64, 128]),
target,
None,
Arc::new(ConfigV2::new("config_v2")),
);
assert!(build_output.is_ok());
let build_output = build_output.unwrap();
println!("BuildOutput: {}", build_output);
assert_eq!(build_output.blob_size, Some(16));
}
}

302
builder/src/optimize_prefetch.rs Normal file

@ -0,0 +1,302 @@
use crate::anyhow;
use crate::core::blob::Blob;
use crate::finalize_blob;
use crate::Artifact;
use crate::ArtifactWriter;
use crate::BlobContext;
use crate::BlobManager;
use crate::Bootstrap;
use crate::BootstrapManager;
use crate::BuildContext;
use crate::BuildOutput;
use crate::ChunkSource;
use crate::ConversionType;
use crate::NodeChunk;
use crate::Path;
use crate::PathBuf;
use crate::Tree;
use crate::TreeNode;
use anyhow::Context;
use anyhow::{Ok, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::layout::RafsBlobTable;
use nydus_rafs::metadata::RafsSuper;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobInfo;
use nydus_storage::meta::BatchContextGenerator;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress;
use sha2::Digest;
use std::fs::File;
use std::io::{Read, Seek, Write};
use std::mem::size_of;
use std::sync::Arc;
pub struct OptimizePrefetch {}
struct PrefetchBlobState {
blob_info: BlobInfo,
blob_ctx: BlobContext,
blob_writer: Box<dyn Artifact>,
}
impl PrefetchBlobState {
fn new(ctx: &BuildContext, blob_layer_num: u32, blobs_dir_path: &Path) -> Result<Self> {
let mut blob_info = BlobInfo::new(
blob_layer_num,
String::from("prefetch-blob"),
0,
0,
ctx.chunk_size,
u32::MAX,
ctx.blob_features,
);
blob_info.set_compressor(ctx.compressor);
blob_info.set_separated_with_prefetch_files_feature(true);
let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
blob_ctx.blob_meta_info_enabled = true;
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
blobs_dir_path.to_path_buf(),
String::new(),
)))
.map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
Ok(Self {
blob_info,
blob_ctx,
blob_writer,
})
}
}
impl OptimizePrefetch {
/// Generate a new bootstrap for prefetch.
pub fn generate_prefetch(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
blobs_dir_path: PathBuf,
prefetch_nodes: Vec<TreeNode>,
) -> Result<BuildOutput> {
// create a new blob for prefetch layer
let blob_layer_num = match blob_table {
RafsBlobTable::V5(table) => table.get_all().len(),
RafsBlobTable::V6(table) => table.get_all().len(),
};
let mut blob_state = PrefetchBlobState::new(&ctx, blob_layer_num as u32, &blobs_dir_path)?;
let mut batch = BatchContextGenerator::new(0)?;
for node in &prefetch_nodes {
Self::process_prefetch_node(
tree,
&node,
&mut blob_state,
&mut batch,
blob_table,
&blobs_dir_path,
)?;
}
let blob_mgr = Self::dump_blob(ctx, blob_table, &mut blob_state)?;
debug!("prefetch blob id: {}", ctx.blob_id);
Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
fn build_dump_bootstrap(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
) -> Result<()> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree.clone())?;
// Build bootstrap
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table_withprefetch = match blob_table {
RafsBlobTable::V5(table) => RafsBlobTable::V5(table.clone()),
RafsBlobTable::V6(table) => RafsBlobTable::V6(table.clone()),
};
bootstrap.dump(
ctx,
&mut bootstrap_mgr.bootstrap_storage,
&mut bootstrap_ctx,
&blob_table_withprefetch,
)?;
Ok(())
}
fn dump_blob(
ctx: &mut BuildContext,
blob_table: &mut RafsBlobTable,
blob_state: &mut PrefetchBlobState,
) -> Result<BlobManager> {
match blob_table {
RafsBlobTable::V5(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
RafsBlobTable::V6(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
}
let mut blob_mgr = BlobManager::new(ctx.digester, false);
blob_mgr.add_blob(blob_state.blob_ctx.clone());
blob_mgr.set_current_blob_index(0);
Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(&ctx, blob_ctx, blob_state.blob_writer.as_mut()).unwrap();
};
ctx.blob_id = String::from("");
blob_mgr.get_current_blob().unwrap().1.blob_id = String::from("");
finalize_blob(ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
ctx.blob_id = blob_mgr
.get_current_blob()
.ok_or(anyhow!("failed to get current blob"))?
.1
.blob_id
.clone();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
// Verify and update prefetch blob
assert!(
entries
.iter()
.filter(|blob| blob.blob_id() == "prefetch-blob")
.count()
== 1,
"Expected exactly one prefetch-blob"
);
// Rewrite prefetch blob id
match blob_table {
RafsBlobTable::V5(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
RafsBlobTable::V6(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
}
Ok(blob_mgr)
}
fn process_prefetch_node(
tree: &mut Tree,
node: &TreeNode,
prefetch_state: &mut PrefetchBlobState,
batch: &mut BatchContextGenerator,
blob_table: &RafsBlobTable,
blobs_dir_path: &Path,
) -> Result<()> {
let tree_node = tree
.get_node_mut(&node.borrow().path())
.ok_or(anyhow!("failed to get node"))?
.node
.as_ref();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
let blob_id = tree_node
.borrow()
.chunks
.first()
.and_then(|chunk| entries.get(chunk.inner.blob_index() as usize).cloned())
.map(|entry| entry.blob_id())
.ok_or(anyhow!("failed to get blob id"))?;
let mut blob_file = Arc::new(File::open(blobs_dir_path.join(blob_id))?);
tree_node.borrow_mut().layer_idx = prefetch_state.blob_info.blob_index() as u16;
let mut child = tree_node.borrow_mut();
let chunks: &mut Vec<NodeChunk> = child.chunks.as_mut();
let blob_ctx = &mut prefetch_state.blob_ctx;
let blob_info = &mut prefetch_state.blob_info;
let encrypted = blob_ctx.blob_compressor != compress::Algorithm::None;
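// `encrypted` here reflects whether the chunk data is transformed (compressed) and is
// forwarded to `generate_chunk_info()` below. For each chunk: copy its compressed data
// into the prefetch blob, then rewrite the chunk metadata (blob index, chunk index,
// offsets) to point at the new blob.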
for chunk in chunks {
let inner = Arc::make_mut(&mut chunk.inner);
let mut buf = vec![0u8; inner.compressed_size() as usize];
blob_file.seek(std::io::SeekFrom::Start(inner.compressed_offset()))?;
blob_file.read_exact(&mut buf)?;
prefetch_state.blob_writer.write_all(&buf)?;
let info = batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
blob_ctx.current_uncompressed_offset,
inner.uncompressed_size(),
encrypted,
)?;
inner.set_blob_index(blob_info.blob_index());
if blob_ctx.chunk_count == u32::MAX {
blob_ctx.chunk_count = 0;
}
inner.set_index(blob_ctx.chunk_count);
blob_ctx.chunk_count += 1;
inner.set_compressed_offset(blob_ctx.current_compressed_offset);
inner.set_uncompressed_offset(blob_ctx.current_uncompressed_offset);
let aligned_d_size: u64 = nydus_utils::try_round_up_4k(inner.uncompressed_size())
.ok_or_else(|| anyhow!("invalid size"))?;
blob_ctx.compressed_blob_size += inner.compressed_size() as u64;
blob_ctx.uncompressed_blob_size += aligned_d_size;
blob_ctx.current_compressed_offset += inner.compressed_size() as u64;
blob_ctx.current_uncompressed_offset += aligned_d_size;
blob_ctx.add_chunk_meta_info(&inner, Some(info))?;
blob_ctx.blob_hash.update(&buf);
blob_info.set_meta_ci_compressed_size(
(blob_info.meta_ci_compressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
blob_info.set_meta_ci_uncompressed_size(
(blob_info.meta_ci_uncompressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
}
Ok(())
}
}
fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: String) {
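// BlobInfo entries are shared via Arc, so clone-on-write: clone the info, update its
// blob id, and swap in a new Arc.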
entries
.iter_mut()
.filter(|blob| blob.blob_id() == blob_id)
.for_each(|blob| {
let mut info = (**blob).clone();
info.set_blob_id(new_blob_id.clone());
*blob = Arc::new(info);
});
}
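/// Seed a `BuildContext` from an existing bootstrap: blob features, RAFS version,
/// compressor and conversion type are taken from the bootstrap's metadata.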
pub fn update_ctx_from_bootstrap(
ctx: &mut BuildContext,
config: Arc<ConfigV2>,
bootstrap_path: &Path,
) -> Result<RafsSuper> {
let (sb, _) = RafsSuper::load_from_file(bootstrap_path, config, false)?;
ctx.blob_features = sb
.superblock
.get_blob_infos()
.first()
.ok_or_else(|| anyhow!("No blob info found in superblock"))?
.features();
let config = sb.meta.get_config();
if config.is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToRafs;
}
ctx.fs_version =
RafsVersion::try_from(sb.meta.version).context("Failed to get RAFS version")?;
ctx.compressor = config.compressor;
Ok(sb)
}

1059
builder/src/stargz.rs Normal file

File diff suppressed because it is too large.

744
builder/src/tarball.rs Normal file

@ -0,0 +1,744 @@
// Copyright 2022 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate RAFS filesystem from a tarball.
//!
//! It supports generating a RAFS filesystem from a tar/targz/stargz file, with or without a data blob.
//!
//! The tarball data is arranged as a sequence of tar headers with the associated file data interleaved:
//! - (tar header) (tar header) (file data) (tar header) (file data) (tar header)
//!
//! To support reading tarball data from a FIFO, we can only go over the tarball stream once.
//! So the workflow is:
//! - for each tar header from the stream
//! -- generate RAFS filesystem node from the tar header
//! -- optionally dump file data associated with the tar header into RAFS data blob
//! - arrange all generated RAFS nodes into a RAFS filesystem tree
//! - dump the RAFS filesystem tree into RAFS metadata blob
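//!
//! A minimal sketch of this single-pass loop using the `tar` crate (names are
//! illustrative; the builder below wires real node/blob handling into it):
//!
//! ```ignore
//! use tar::Archive;
//! fn walk<R: std::io::Read>(reader: R) -> std::io::Result<()> {
//!     let mut ar = Archive::new(reader);
//!     for entry in ar.entries()? {
//!         let entry = entry?; // one tar header plus its file data
//!         let _path = entry.path()?; // build a RAFS node from the header
//!         // optionally dump the entry's file data into the RAFS data blob
//!     }
//!     Ok(())
//! }
//! ```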
use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufReader, Read, Seek, SeekFrom};
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::sync::Mutex;
use anyhow::{anyhow, bail, Context, Result};
use tar::{Archive, Entry, EntryType, Header};
use nydus_api::enosys;
use nydus_rafs::metadata::inode::{InodeWrapper, RafsInodeFlags, RafsV6Inode};
use nydus_rafs::metadata::layout::v5::RafsV5Inode;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::ZranContextGenerator;
use nydus_storage::RAFS_MAX_CHUNKS_PER_BLOB;
use nydus_utils::compact::makedev;
use nydus_utils::compress::zlib_random::{ZranReader, ZRAN_READER_BUF_SIZE};
use nydus_utils::compress::ZlibDecoder;
use nydus_utils::digest::RafsDigest;
use nydus_utils::{div_round_up, lazy_drop, root_tracer, timing_tracer, BufReaderInfo, ByteSize};
use crate::core::context::{Artifact, NoopArtifactWriter};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
use super::core::node::{Node, NodeInfo};
use super::core::tree::Tree;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, TarBuilder};
enum CompressionType {
None,
Gzip,
}
enum TarReader {
File(File),
BufReader(BufReader<File>),
BufReaderInfo(BufReaderInfo<File>),
BufReaderInfoSeekable(BufReaderInfo<File>),
TarGzFile(Box<ZlibDecoder<File>>),
TarGzBufReader(Box<ZlibDecoder<BufReader<File>>>),
ZranReader(ZranReader<File>),
}
impl Read for TarReader {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
match self {
TarReader::File(f) => f.read(buf),
TarReader::BufReader(f) => f.read(buf),
TarReader::BufReaderInfo(b) => b.read(buf),
TarReader::BufReaderInfoSeekable(b) => b.read(buf),
TarReader::TarGzFile(f) => f.read(buf),
TarReader::TarGzBufReader(b) => b.read(buf),
TarReader::ZranReader(f) => f.read(buf),
}
}
}
impl TarReader {
fn seekable(&self) -> bool {
matches!(
self,
TarReader::File(_) | TarReader::BufReaderInfoSeekable(_)
)
}
}
impl Seek for TarReader {
fn seek(&mut self, pos: SeekFrom) -> std::io::Result<u64> {
match self {
TarReader::File(f) => f.seek(pos),
TarReader::BufReaderInfoSeekable(b) => b.seek(pos),
_ => Err(enosys!("seek() not supported!")),
}
}
}
struct TarballTreeBuilder<'a> {
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
buf: Vec<u8>,
builder: TarBuilder,
}
impl<'a> TarballTreeBuilder<'a> {
/// Create a new instance of `TarballTreeBuilder`.
pub fn new(
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
layer_idx: u16,
) -> Self {
let builder = TarBuilder::new(ctx.explicit_uidgid, layer_idx, ctx.fs_version);
Self {
ty,
ctx,
blob_mgr,
buf: Vec::new(),
blob_writer,
builder,
}
}
fn build_tree(&mut self) -> Result<Tree> {
let file = OpenOptions::new()
.read(true)
.open(self.ctx.source_path.clone())
.context("tarball: can not open source file for conversion")?;
let mut is_file = match file.metadata() {
Ok(md) => md.file_type().is_file(),
Err(_) => false,
};
let reader = match self.ty {
ConversionType::EStargzToRef
| ConversionType::TargzToRef
| ConversionType::TarToRef => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
let generator = ZranContextGenerator::from_buf_reader(buf_reader)?;
let reader = generator.reader();
self.ctx.blob_zran_generator = Some(Mutex::new(generator));
self.ctx.blob_features.insert(BlobFeatures::ZRAN);
TarReader::ZranReader(reader)
}
(CompressionType::None, buf_reader) => {
self.ty = ConversionType::TarToRef;
let reader = BufReaderInfo::from_buf_reader(buf_reader);
self.ctx.blob_tar_reader = Some(reader.clone());
TarReader::BufReaderInfo(reader)
}
},
ConversionType::EStargzToRafs
| ConversionType::TargzToRafs
| ConversionType::TarToRafs => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::TarGzFile(Box::new(ZlibDecoder::new(file)))
} else {
TarReader::TarGzBufReader(Box::new(ZlibDecoder::new(buf_reader)))
}
}
(CompressionType::None, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::File(file)
} else {
TarReader::BufReader(buf_reader)
}
}
},
ConversionType::TarToTarfs => {
let mut reader = BufReaderInfo::from_buf_reader(BufReader::new(file));
self.ctx.blob_tar_reader = Some(reader.clone());
if !self.ctx.blob_id.is_empty() {
reader.enable_digest_calculation(false);
} else {
// Disable seek when we need to calculate the hash value.
is_file = false;
}
// Only enable seek when hash computation is disabled.
if is_file {
TarReader::BufReaderInfoSeekable(reader)
} else {
TarReader::BufReaderInfo(reader)
}
}
_ => return Err(anyhow!("tarball: unsupported image conversion type")),
};
let is_seekable = reader.seekable();
let mut tar = Archive::new(reader);
tar.set_ignore_zeros(true);
tar.set_preserve_mtime(true);
tar.set_preserve_permissions(true);
tar.set_unpack_xattrs(true);
// Prepare scratch buffer for dumping file data.
if self.buf.len() < self.ctx.chunk_size as usize {
self.buf = vec![0u8; self.ctx.chunk_size as usize];
}
// Generate the root node in advance; it may be overwritten by entries from the tar stream.
let root = self.builder.create_directory(&[OsString::from("/")])?;
let mut tree = Tree::new(root);
// Generate a RAFS node for each tar entry, optionally adding missing parents.
let entries = if is_seekable {
tar.entries_with_seek()
.context("tarball: failed to read entries from tar")?
} else {
tar.entries()
.context("tarball: failed to read entries from tar")?
};
for entry in entries {
let mut entry = entry.context("tarball: failed to read entry from tar")?;
let path = entry
.path()
.context("tarball: failed to to get path from tar entry")?;
let path = PathBuf::from("/").join(path);
let path = path.components().as_path();
if !self.builder.is_stargz_special_files(path) {
self.parse_entry(&mut tree, &mut entry, path)?;
}
}
// Update directory size for RAFS V5 after generating the tree.
if self.ctx.fs_version.is_v5() {
Self::set_v5_dir_size(&mut tree);
}
Ok(tree)
}
fn parse_entry<R: Read>(
&mut self,
tree: &mut Tree,
entry: &mut Entry<R>,
path: &Path,
) -> Result<()> {
let header = entry.header();
let entry_type = header.entry_type();
if entry_type.is_gnu_longname() {
return Err(anyhow!("tarball: unsupported gnu_longname from tar header"));
} else if entry_type.is_gnu_longlink() {
return Err(anyhow!("tarball: unsupported gnu_longlink from tar header"));
} else if entry_type.is_pax_local_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_local_extensions from tar header"
));
} else if entry_type.is_pax_global_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_global_extensions from tar header"
));
} else if entry_type.is_contiguous() {
return Err(anyhow!(
"tarball: unsupported contiguous entry type from tar header"
));
} else if entry_type.is_gnu_sparse() {
return Err(anyhow!(
"tarball: unsupported gnu sparse file extension from tar header"
));
}
let mut file_size = entry.size();
let name = Self::get_file_name(path)?;
let mode = Self::get_mode(header)?;
let (uid, gid) = Self::get_uid_gid(self.ctx, header)?;
let mtime = header.mtime().unwrap_or_default();
let mut flags = RafsInodeFlags::default();
// Parse special files
let rdev = if entry_type.is_block_special()
|| entry_type.is_character_special()
|| entry_type.is_fifo()
{
let major = header
.device_major()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get major device from tar entry"))?;
let minor = header
.device_minor()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get minor device from tar entry"))?;
makedev(major as u64, minor as u64) as u32
} else {
u32::MAX
};
// Parse symlink
let (symlink, symlink_size) = if entry_type.is_symlink() {
let symlink_link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let symlink_size = symlink_link_path.as_os_str().byte_size();
if symlink_size > u16::MAX as usize {
bail!("tarball: symlink target from tar entry is too big");
}
file_size = symlink_size as u64;
flags |= RafsInodeFlags::SYMLINK;
(
Some(symlink_link_path.as_os_str().to_owned()),
symlink_size as u16,
)
} else {
(None, 0)
};
let mut child_count = 0;
if entry_type.is_file() {
child_count = div_round_up(file_size, self.ctx.chunk_size as u64);
if child_count > RAFS_MAX_CHUNKS_PER_BLOB as u64 {
bail!("tarball: file size 0x{:x} is too big", file_size);
}
}
// Handle hardlink ino
let mut hardlink_target = None;
let ino = if entry_type.is_hard_link() {
let link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let link_path = PathBuf::from("/").join(link_path);
let link_path = link_path.components().as_path();
let targets = Node::generate_target_vec(link_path);
assert!(!targets.is_empty());
let mut tmp_tree: &Tree = tree;
for name in &targets[1..] {
match tmp_tree.get_child_idx(name.as_bytes()) {
Some(idx) => tmp_tree = &tmp_tree.children[idx],
None => {
bail!(
"tarball: unknown target {} for hardlink {}",
link_path.display(),
path.display()
);
}
}
}
let mut tmp_node = tmp_tree.borrow_mut_node();
if !tmp_node.is_reg() {
bail!(
"tarball: target {} for hardlink {} is not a regular file",
link_path.display(),
path.display()
);
}
hardlink_target = Some(tmp_tree);
flags |= RafsInodeFlags::HARDLINK;
tmp_node.inode.set_has_hardlink(true);
tmp_node.inode.ino()
} else {
self.builder.next_ino()
};
// Parse xattrs
let mut xattrs = RafsXAttrs::new();
if let Some(exts) = entry.pax_extensions()? {
for p in exts {
match p {
Ok(pax) => {
let prefix = b"SCHILY.xattr.";
let key = pax.key_bytes();
if key.starts_with(prefix) {
let x_key = OsStr::from_bytes(&key[prefix.len()..]);
xattrs.add(x_key.to_os_string(), pax.value_bytes().to_vec())?;
}
}
Err(e) => {
return Err(anyhow!(
"tarball: failed to parse PaxExtension from tar header, {}",
e
))
}
}
}
}
let mut inode = match self.ctx.fs_version {
RafsVersion::V5 => InodeWrapper::V5(RafsV5Inode {
i_digest: RafsDigest::default(),
i_parent: 0,
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_index: 0,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
i_reserved: [0; 8],
}),
RafsVersion::V6 => InodeWrapper::V6(RafsV6Inode {
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
}),
};
inode.set_has_xattr(!xattrs.is_empty());
let source = PathBuf::from("/");
let target = Node::generate_target(path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: self.ctx.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: rdev as u64,
path: path.to_path_buf(),
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
let mut node = Node::new(inode, info, self.builder.layer_idx);
// Special handling of hardlink.
// A tar hardlink header has zero file size and no file data associated, so copy the
// values from the associated regular file.
if let Some(t) = hardlink_target {
let n = t.borrow_mut_node();
if n.inode.is_v5() {
node.inode.set_digest(n.inode.digest().to_owned());
}
node.inode.set_size(n.inode.size());
node.inode.set_child_count(n.inode.child_count());
node.chunks = n.chunks.clone();
node.set_xattr(n.info.xattrs.clone());
} else {
node.dump_node_data_with_reader(
self.ctx,
self.blob_mgr,
self.blob_writer,
Some(entry),
&mut self.buf,
)?;
}
// Update inode.i_blocks for RAFS v5.
if self.ctx.fs_version == RafsVersion::V5 && !entry_type.is_dir() {
node.v5_set_inode_blocks();
}
self.builder.insert_into_tree(tree, node)
}
fn get_uid_gid(ctx: &BuildContext, header: &Header) -> Result<(u32, u32)> {
let uid = if ctx.explicit_uidgid {
header.uid().unwrap_or_default()
} else {
0
};
let gid = if ctx.explicit_uidgid {
header.gid().unwrap_or_default()
} else {
0
};
if uid > u32::MAX as u64 || gid > u32::MAX as u64 {
bail!(
"tarball: uid {:x} or gid {:x} from tar entry is out of range",
uid,
gid
);
}
Ok((uid as u32, gid as u32))
}
fn get_mode(header: &Header) -> Result<u32> {
let mode = header
.mode()
.context("tarball: failed to get permission/mode from tar entry")?;
let ty = match header.entry_type() {
EntryType::Regular | EntryType::Link => libc::S_IFREG,
EntryType::Directory => libc::S_IFDIR,
EntryType::Symlink => libc::S_IFLNK,
EntryType::Block => libc::S_IFBLK,
EntryType::Char => libc::S_IFCHR,
EntryType::Fifo => libc::S_IFIFO,
_ => bail!("tarball: unsupported tar entry type"),
};
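// Keep the permission bits from the tar header and splice in the file type bits.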
Ok((mode & !libc::S_IFMT as u32) | ty as u32)
}
fn get_file_name(path: &Path) -> Result<&OsStr> {
let name = if path == Path::new("/") {
path.as_os_str()
} else {
path.file_name().ok_or_else(|| {
anyhow!(
"tarball: failed to get file name from tar entry with path {}",
path.display()
)
})?
};
if name.len() > u16::MAX as usize {
bail!(
"tarball: file name {} from tar entry is too long",
name.to_str().unwrap_or_default()
);
}
Ok(name)
}
fn set_v5_dir_size(tree: &mut Tree) {
for c in &mut tree.children {
Self::set_v5_dir_size(c);
}
let mut node = tree.borrow_mut_node();
node.v5_set_dir_size(RafsVersion::V5, &tree.children);
}
fn detect_compression_algo(file: File) -> Result<(CompressionType, BufReader<File>)> {
// Use a 64K buffer to stay consistent with zlib-random.
let mut buf_reader = BufReader::with_capacity(ZRAN_READER_BUF_SIZE, file);
let mut buf = [0u8; 3];
buf_reader.read_exact(&mut buf)?;
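// gzip streams start with the magic bytes 0x1f 0x8b followed by the deflate method byte 0x08.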
if buf[0] == 0x1f && buf[1] == 0x8b && buf[2] == 0x08 {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::Gzip, buf_reader))
} else {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::None, buf_reader))
}
}
}
/// Builder to create RAFS filesystems from tarballs.
pub struct TarballBuilder {
ty: ConversionType,
}
impl TarballBuilder {
/// Create a new instance of [TarballBuilder] to build a RAFS filesystem from a tarball.
pub fn new(conversion_type: ConversionType) -> Self {
Self {
ty: conversion_type,
}
}
}
impl Builder for TarballBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
let mut blob_writer: Box<dyn Artifact> = match self.ty {
ConversionType::EStargzToRafs
| ConversionType::EStargzToRef
| ConversionType::TargzToRafs
| ConversionType::TargzToRef
| ConversionType::TarToRafs
| ConversionType::TarToTarfs => {
if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
}
}
_ => {
return Err(anyhow!(
"tarball: unsupported image conversion type '{}'",
self.ty
))
}
};
let mut tree_builder =
TarballTreeBuilder::new(self.ty, ctx, blob_mgr, blob_writer.as_mut(), layer_idx);
let tree = timing_tracer!({ tree_builder.build_tree() }, "build_tree")?;
// Build bootstrap
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::attributes::Attributes;
use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
use nydus_utils::{compress, digest};
#[test]
fn test_build_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
#[test]
fn test_build_encrypted_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
true,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
}


@ -5,7 +5,7 @@ description = "C wrapper library for Nydus SDK"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[lib]
@ -15,10 +15,10 @@ crate-type = ["cdylib", "staticlib"]
[dependencies]
libc = "0.2.137"
log = "0.4.17"
fuse-backend-rs = "0.10.0"
nydus-api = { version = "0.2", path = "../api" }
nydus-rafs = { version = "0.2.2", path = "../rafs" }
nydus-storage = { version = "0.6.2", path = "../storage" }
fuse-backend-rs = "^0.12.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage" }
[features]
baekend-s3 = ["nydus-storage/backend-s3"]


@ -1 +0,0 @@
bin/


@ -1,21 +0,0 @@
# https://golangci-lint.run/usage/configuration#config-file
linters:
enable:
- staticcheck
- unconvert
- gofmt
- goimports
- revive
- ineffassign
- vet
- unused
- misspell
disable:
- errcheck
run:
deadline: 4m
skip-dirs:
- misc


@ -1,27 +0,0 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= amd64
GOPROXY ?= https://goproxy.io
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
endif
.PHONY: all build release test clean
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -v -o bin/ctr-remote ./cmd/main.go
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/ctr-remote ./cmd/main.go
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
clean:
rm -f bin/*


@ -1,65 +0,0 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"fmt"
"os"
"github.com/containerd/containerd/cmd/ctr/app"
"github.com/containerd/containerd/pkg/seed"
"github.com/dragonflyoss/image-service/contrib/ctr-remote/commands"
"github.com/urfave/cli"
)
func init() {
seed.WithTimeAndRand()
}
func main() {
customCommands := []cli.Command{commands.RpullCommand}
app := app.New()
app.Description = "NOTE: Enhanced for nydus-snapshotter\n" + app.Description
for i := range app.Commands {
if app.Commands[i].Name == "images" {
sc := map[string]cli.Command{}
for _, subcmd := range customCommands {
sc[subcmd.Name] = subcmd
}
// First, replace duplicated subcommands
for j := range app.Commands[i].Subcommands {
for name, subcmd := range sc {
if name == app.Commands[i].Subcommands[j].Name {
app.Commands[i].Subcommands[j] = subcmd
delete(sc, name)
}
}
}
// Next, append all new sub commands
for _, subcmd := range sc {
app.Commands[i].Subcommands = append(app.Commands[i].Subcommands, subcmd)
}
break
}
}
if err := app.Run(os.Args); err != nil {
fmt.Fprintf(os.Stderr, "ctr-remote: %v\n", err)
os.Exit(1)
}
}


@ -1,103 +0,0 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package commands
import (
"context"
"fmt"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cmd/ctr/commands"
"github.com/containerd/containerd/cmd/ctr/commands/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/log"
"github.com/containerd/nydus-snapshotter/pkg/label"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
const (
remoteSnapshotterName = "nydus"
)
var RpullCommand = cli.Command{
Name: "rpull",
Usage: "pull an image from a registry leveraging nydus-snapshotter",
ArgsUsage: "[flags] <ref>",
Description: `Fetch and prepare an image for use in containerd leveraging nydus-snapshotter.
After pulling an image, it should be ready to use the same reference in a run command.`,
Flags: append(commands.RegistryFlags, commands.LabelFlag),
Action: func(context *cli.Context) error {
var (
ref = context.Args().First()
config = &rPullConfig{}
)
if ref == "" {
return fmt.Errorf("please provide an image reference to pull")
}
client, ctx, cancel, err := commands.NewClient(context)
if err != nil {
return err
}
defer cancel()
ctx, done, err := client.WithLease(ctx)
if err != nil {
return err
}
defer done(ctx)
fc, err := content.NewFetchConfig(ctx, context)
if err != nil {
return err
}
config.FetchConfig = fc
return pull(ctx, client, ref, config)
},
}
type rPullConfig struct {
*content.FetchConfig
}
func pull(ctx context.Context, client *containerd.Client, ref string, config *rPullConfig) error {
pCtx := ctx
h := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
if desc.MediaType != images.MediaTypeDockerSchema1Manifest {
fmt.Printf("fetching %v... %v\n", desc.Digest.String()[:15], desc.MediaType)
}
return nil, nil
})
log.G(pCtx).WithField("image", ref).Debug("fetching")
configLabels := commands.LabelArgs(config.Labels)
if _, err := client.Pull(pCtx, ref, []containerd.RemoteOpt{
containerd.WithPullLabels(configLabels),
containerd.WithResolver(config.Resolver),
containerd.WithImageHandler(h),
containerd.WithSchema1Conversion,
containerd.WithPullUnpack,
containerd.WithPullSnapshotter(remoteSnapshotterName),
containerd.WithImageHandlerWrapper(label.AppendLabelsHandlerWrapper(ref)),
}...); err != nil {
return err
}
return nil
}


@ -1,59 +0,0 @@
module github.com/dragonflyoss/image-service/contrib/ctr-remote
go 1.18
require (
github.com/containerd/containerd v1.6.18
github.com/containerd/nydus-snapshotter v0.5.1
github.com/opencontainers/image-spec v1.1.0-rc2
github.com/urfave/cli v1.22.12
)
require (
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/cilium/ebpf v0.10.0 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/console v1.0.3 // indirect
github.com/containerd/continuity v0.3.0 // indirect
github.com/containerd/fifo v1.0.0 // indirect
github.com/containerd/go-cni v1.1.8 // indirect
github.com/containerd/go-runc v1.0.0 // indirect
github.com/containerd/ttrpc v1.1.0 // indirect
github.com/containerd/typeurl v1.0.2 // indirect
github.com/containernetworking/cni v1.1.2 // indirect
github.com/containernetworking/plugins v1.2.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gogo/googleapis v1.4.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/klauspost/compress v1.15.15 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.6.2 // indirect
github.com/moby/sys/signal v0.7.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/runc v1.1.4 // indirect
github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sirupsen/logrus v1.9.0 // indirect
go.opencensus.io v0.24.0 // indirect
golang.org/x/mod v0.8.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.5.0 // indirect
golang.org/x/text v0.7.0 // indirect
golang.org/x/tools v0.6.0 // indirect
google.golang.org/genproto v0.0.0-20230209215440-0dfe4f8abfcc // indirect
google.golang.org/grpc v1.53.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
)

File diff suppressed because it is too large

View File

@ -0,0 +1,8 @@
package main
import "fmt"
// This is a dummy program to work around goreleaser's inability to pre-build the binary.
func main() {
fmt.Println("Hello, World!")
}

File diff suppressed because it is too large

View File

@ -1,19 +1,19 @@
[package]
name = "nydus-backend-proxy"
version = "0.1.0"
version = "0.2.0"
authors = ["The Nydus Developers"]
description = "A simple HTTP server to provide a fake container registry for nydusd"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
license = "Apache-2.0"
[dependencies]
rocket = "0.5.0-rc"
http-range = "0.1.3"
nix = ">=0.23.0"
clap = "2.33"
once_cell = "1.10.0"
rocket = "0.5.0"
http-range = "0.1.5"
nix = { version = "0.28", features = ["uio"] }
clap = "4.4"
once_cell = "1.19.0"
lazy_static = "1.4"
[workspace]

View File

@ -2,29 +2,22 @@
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate lazy_static;
#[macro_use(crate_authors, crate_version)]
extern crate clap;
use std::collections::HashMap;
use std::env;
use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::{fs, io};
use clap::{App, Arg};
use clap::*;
use http_range::HttpRange;
use lazy_static::lazy_static;
use nix::sys::uio;
use rocket::fs::{FileServer, NamedFile};
use rocket::futures::lock::{Mutex, MutexGuard};
use rocket::http::Status;
use rocket::request::{self, FromRequest, Outcome};
use rocket::response::{self, stream::ReaderStream, Responder};
use rocket::{Request, Response};
use rocket::*;
lazy_static! {
static ref BLOB_BACKEND: Mutex<BlobBackend> = Mutex::new(BlobBackend {
@ -165,12 +158,12 @@ impl<'r> Responder<'r, 'static> for RangeStream {
let mut read = 0u64;
let startpos = self.start as i64;
let size = self.len;
let raw_fd = self.file.as_raw_fd();
let file = self.file.clone();
Response::build()
.streamed_body(ReaderStream! {
while read < size {
match uio::pread(raw_fd, &mut buf, startpos + read as i64) {
match uio::pread(file.as_ref(), &mut buf, startpos + read as i64) {
Ok(mut n) => {
n = std::cmp::min(n, (size - read) as usize);
read += n as u64;
@ -268,20 +261,31 @@ async fn fetch(
#[rocket::main]
async fn main() {
let cmd = App::new("nydus-backend-proxy")
.author(crate_authors!())
.version(crate_version!())
let cmd = Command::new("nydus-backend-proxy")
.author(env!("CARGO_PKG_AUTHORS"))
.version(env!("CARGO_PKG_VERSION"))
.about("A simple HTTP server to provide a fake container registry for nydusd.")
.arg(
Arg::with_name("blobsdir")
.short("b")
Arg::new("blobsdir")
.short('b')
.long("blobsdir")
.takes_value(true)
.required(true)
.help("path to directory hosting nydus blob files"),
)
.help_template(
"\
{before-help}{name} {version}
{author-with-newline}{about-with-newline}
{usage-heading} {usage}
{all-args}{after-help}
",
)
.get_matches();
// Safe to unwrap() because `blobsdir` takes a value.
let path = cmd.value_of("blobsdir").unwrap();
let path = cmd
.get_one::<String>("blobsdir")
.expect("required argument");
init_blob_backend(Path::new(path)).await;

View File

@ -8,14 +8,14 @@ linters:
- goimports
- revive
- ineffassign
- vet
- govet
- unused
- misspell
disable:
- errcheck
run:
deadline: 4m
skip-dirs:
- misc
timeout: 5m
issues:
exclude-dirs:
- misc

View File

@ -1,8 +1,8 @@
GIT_COMMIT := $(shell git rev-parse --verify HEAD --short=7)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= amd64
GOPROXY ?= https://goproxy.io
GOARCH ?= $(shell go env GOARCH)
GOPROXY ?=
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
@ -13,15 +13,17 @@ endif
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus-overlayfs ./cmd/main.go
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
lint:
golangci-lint run
clean:
rm -f bin/*

View File

@ -8,12 +8,16 @@ import (
"syscall"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
cli "github.com/urfave/cli/v2"
"golang.org/x/sys/unix"
)
const (
// Extra mount option to pass Nydus specific information from snapshotter to runtime through containerd.
extraOptionKey = "extraoption="
// Kata virtual volume information passed from snapshotter to runtime through containerd, a superset of `extraOptionKey`.
// Please refer to `KataVirtualVolume` in https://github.com/kata-containers/kata-containers/blob/main/src/libs/kata-types/src/mount.rs
kataVolumeOptionKey = "io.katacontainers.volume="
)
var (
@ -44,7 +48,7 @@ func parseArgs(args []string) (*mountArgs, error) {
}
if args[2] == "-o" && len(args[3]) != 0 {
for _, opt := range strings.Split(args[3], ",") {
if strings.HasPrefix(opt, extraOptionKey) {
if strings.HasPrefix(opt, extraOptionKey) || strings.HasPrefix(opt, kataVolumeOptionKey) {
// filter extraoption
continue
}

View File

@ -1,15 +1,15 @@
module github.com/dragonflyoss/image-service/contrib/nydus-overlayfs
module github.com/dragonflyoss/nydus/contrib/nydus-overlayfs
go 1.18
go 1.21
require (
github.com/pkg/errors v0.9.1
github.com/urfave/cli/v2 v2.3.0
golang.org/x/sys v0.1.0
github.com/urfave/cli/v2 v2.27.1
golang.org/x/sys v0.15.0
)
require (
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect
github.com/russross/blackfriday/v2 v2.0.1 // indirect
github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e // indirect
)

View File

@ -1,17 +1,10 @@
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/urfave/cli/v2 v2.3.0 h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=
github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e h1:+SOyEddqYF09QP7vr7CgJ1eti3pY9Fn3LHO1M1r/0sI=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=

View File

@ -1,150 +0,0 @@
# Nydus Functional Test
## Introduction
Nydus functional test, a.k.a. nydus-test, is built on top of [pytest](https://docs.pytest.org/en/stable/).
It consists of two parts:
* Specific test cases located at sub-directory functional-test
* Test framework located at sub-directory framework
## Prerequisites
Debian/Ubuntu
```bash
sudo apt update && sudo apt install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3-pip libpython3.7-dev libffi-dev
python3 -m pip install --upgrade pip
# Ensure you install the modules below as the root user
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
```
## Getting Started
### Configure framework
Nydus-test is controlled and configured by `anchor_conf.json`. Nydus-test looks for it in its root directory before executing any tests.
```json
{
"workspace": "/path/to/where/nydus-test/stores/intermediates",
"nydus_project": "/path/to/image-service/repo",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "127.0.0.1:5000",
"registry_namespace": "nydus",
"registry_auth": "YourRegistryAuth",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/path/to/where/backend/simulator/stores/blobs"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd"
},
"logging_file": "stderr",
"target": "gnu"
}
```
### Compile Nydus components
Before running nydus-test, please compile nydus components.
`nydusd` and `nydus-image`
```bash
cd /path/to/image-service/repo
make release
```
`nydus-backend-proxy`
```bash
cd /path/to/image-service/repo
make -C contrib/nydus-backend-proxy
```
### Define target fs structure
```yaml
depth: 4
width: 6
layers:
- layer1:
- size: 10KB
type: regular
count: 5
- size: 4MB
type: regular
count: 30
- size: 128KB
type: regular
count: 100
- size: 90MB
type: regular
count: 1
- type: symlink
count: 100
```
### Generate your own original rootfs
The framework provides a tool to generate the rootfs that will be the test target.
```text
$ sudo python3 nydus_test_config.py --dist fs_structure.yaml
INFO [nydus_test_config - 49:put_files] - putting regular, count 5
INFO [nydus_test_config - 49:put_files] - putting regular, count 30
INFO [nydus_test_config - 49:put_files] - putting regular, count 100
INFO [nydus_test_config - 49:put_files] - putting regular, count 1
INFO [nydus_test_config - 49:put_files] - putting symlink, count 100
INFO [utils - 171:timer] - Generating test layer, Takes time 0.857 seconds
INFO [nydus_test_config - 49:put_files] - putting regular, count 5
INFO [nydus_test_config - 49:put_files] - putting regular, count 30
INFO [nydus_test_config - 49:put_files] - putting regular, count 100
INFO [nydus_test_config - 49:put_files] - putting regular, count 1
INFO [nydus_test_config - 49:put_files] - putting symlink, count 100
INFO [utils - 171:timer] - Generating test parent layer, Takes time 0.760 seconds
```
## Run test
Please run tests as root user.
### Run All Test Cases
The whole nydus functional test suite runs on top of pytest.
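For example, the whole suite can be launched from the repository root (a minimal sketch, assuming `anchor_conf.json` is configured as described above):
```bash
sudo pytest -sv functional-test/
```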
### Run a Specific Test Case
```bash
pytest -sv functional-test/test_nydus.py::test_basic
```
### Run a Set of Test Cases
```bash
pytest -sv functional-test/test_nydus.py
```
### Stop Once a Case Fails
```bash
pytest -sv functional-test/test_nydus.py::test_basic --pdb
```
### Run a Case Step by Step
```bash
pytest -sv functional-test/test_nydus.py::test_basic --trace
```

View File

@ -1,220 +0,0 @@
import sys
import os
import re
import shutil
import logging
import pytest
import docker
sys.path.append(os.path.realpath("framework"))
from nydus_anchor import NydusAnchor
from rafs import RafsImage, RafsConf
from backend_proxy import BackendProxy
import utils
ANCHOR = NydusAnchor()
utils.logging_setup(ANCHOR.logging_file)
os.environ["RUST_BACKTRACE"] = "1"
from tools import artifact
@pytest.fixture()
def nydus_anchor(request):
# TODO: check if nydusd executable exists and have a proper version
# TODO: check if bootstrap exists
# TODO: check if blob cache file exists and try to clear it if it does
# TODO: check if blob file was put to oss
nyta = NydusAnchor()
nyta.check_prerequisites()
logging.info("*** Testing case %s ***", os.environ.get("PYTEST_CURRENT_TEST"))
yield nyta
nyta.clear_blobcache()
if hasattr(nyta, "scratch_dir"):
logging.info("Clean up scratch dir")
shutil.rmtree(nyta.scratch_dir)
if hasattr(nyta, "nydusd") and nyta.nydusd is not None:
nyta.nydusd.shutdown()
if hasattr(nyta, "overlayfs") and os.path.ismount(nyta.overlayfs):
nyta.umount_overlayfs()
# Check if nydusd crashed.
# TODO: Where the core file is placed is controlled by the kernel.
# Check `/proc/sys/kernel/core_pattern`
files = os.listdir()
for one in files:
assert re.match(r"^core\..*", one) is None
try:
shutil.rmtree(nyta.localfs_workdir)
except FileNotFoundError:
pass
try:
nyta.cleanup_dustbin()
except FileNotFoundError:
pass
# All nydusd should stop.
assert not NydusAnchor.capture_running_nydusd()
@pytest.fixture()
def nydus_image(nydus_anchor: NydusAnchor, request):
"""
Create images using the previous version of the nydus image tool.
This fixture provides the rafs image file; the test case is not responsible
for creating the image itself.
"""
image = RafsImage(
nydus_anchor, nydus_anchor.source_dir, "bootstrap", "blob", clear_from_oss=True
)
yield image
try:
image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_scratch_image(nydus_anchor: NydusAnchor):
"""No longer use source_dir but use scratch_dir,
Scratch image's creation is delayed until runtime of each case.
"""
nydus_anchor.prepare_scratch_dir()
# The scratch image is not made here since each specific case decides how to
# scratch this dir
image = RafsImage(
nydus_anchor,
nydus_anchor.scratch_dir,
"bootstrap_scratched",
"blob_scratched",
clear_from_oss=True,
)
yield image
if not image.created:
return
try:
image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_parent_image(nydus_anchor: NydusAnchor):
parent_image = RafsImage(
nydus_anchor, nydus_anchor.parent_rootfs, "bootstrap_parent", "blob_parent"
)
yield parent_image
try:
parent_image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_scratch_parent_image(nydus_anchor: NydusAnchor):
nydus_anchor.prepare_scratch_parent_dir()
parent_image = RafsImage(
nydus_anchor, nydus_anchor.scratch_parent_dir, "bs_parent", "blob_parent"
)
yield parent_image
try:
parent_image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture(scope="session", autouse=False)
def collect_report(request):
"""
To enable the code coverage report, set @autouse to True.
"""
build_dir = ANCHOR.build_dir
from coverage_collect import collect_coverage
def CC():
collect_coverage(build_dir)
request.addfinalizer(CC)
@pytest.fixture
def rafs_conf(nydus_anchor):
"""Generate conf file via libconf(https://pypi.org/project/libconf/)"""
rc = RafsConf(nydus_anchor)
rc.dump_rafs_conf()
yield rc
@pytest.fixture(scope="session")
def nydusify_converter():
# Can't access a `function` scope fixture.
os.environ["GOTRACEBACK"] = "crash"
nydusify_source_dir = os.path.join(ANCHOR.nydus_project, "contrib/nydusify")
with utils.pushd(nydusify_source_dir):
ret, _ = utils.execute(["make", "release"])
assert ret
@pytest.fixture(scope="session")
def nydus_snapshotter():
# Can't access a `function` scope fixture.
snapshotter_source = os.path.join(ANCHOR.nydus_project, "contrib/nydus-snapshotter")
with utils.pushd(snapshotter_source):
ret, _ = utils.execute(["make"])
assert ret
@pytest.fixture()
def local_registry():
docker_client = docker.from_env()
registry_container = docker_client.containers.run(
"registry:latest", detach=True, network_mode="host", remove=True
)
yield registry_container
try:
registry_container.stop()
except docker.errors.APIError:
assert False, "failed to stop container"
try:
ANCHOR.backend_proxy_blobs_dir
@pytest.fixture(scope="module", autouse=True)
def nydus_backend_proxy():
backend_proxy = BackendProxy(
ANCHOR,
ANCHOR.backend_proxy_blobs_dir,
bin=os.path.join(
ANCHOR.nydus_project,
"contrib",
"nydus-backend-proxy",
"target",
"release",
"nydus-backend-proxy",
),
)
backend_proxy.start()
yield
backend_proxy.stop()
except AttributeError:
pass

View File

@ -1,24 +0,0 @@
from os import PathLike
import utils
class BackendProxy:
def __init__(self, anchor, blobs_dir: PathLike, bin: PathLike):
self.__blobs_dir = blobs_dir
self.bin = bin
self.anchor = anchor
def start(self):
_, self.p = utils.run(
[self.bin, "-b", self.blobs_dir()],
wait=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
def stop(self):
self.p.terminate()
self.p.wait()
def blobs_dir(self):
return self.__blobs_dir

View File

@ -1,54 +0,0 @@
import time
import hmac
import hashlib
import base64
import urllib.parse
import requests
import json
import sys
import os
from string import Template
sys.path.append(os.path.realpath("framework"))
BOT_SECRET = os.getenv("BOT_SECRET")
BOT_ACCESS_TOKEN = os.getenv("BOT_ACCESS_TOKEN")
SEND_CONTENT_TEMPLATE = """**nydus-bot**
${content}"""
class Bot:
def __init__(self):
if BOT_SECRET is None or BOT_ACCESS_TOKEN is None:
raise ValueError
timestamp = str(round(time.time() * 1000))
secret_enc = BOT_SECRET.encode("utf-8")
string_to_sign = "{}\n{}".format(timestamp, BOT_SECRET)
string_to_sign_enc = string_to_sign.encode("utf-8")
hmac_code = hmac.new(
secret_enc, string_to_sign_enc, digestmod=hashlib.sha256
).digest()
sign = urllib.parse.quote_plus(base64.b64encode(hmac_code))
self.url = f"https://oapi.dingtalk.com/robot/send?access_token={BOT_ACCESS_TOKEN}&sign={sign}&timestamp={timestamp}"
def send(self, content: str):
c = Template(SEND_CONTENT_TEMPLATE).substitute(content=content)
d = {
"msgtype": "markdown",
"markdown": {"title": "Nydus-bot", "text": c},
}
ret = requests.post(
self.url, headers={"Content-Type": "application/json"}, data=json.dumps(d)
)
print(ret.__dict__)
if __name__ == "__main__":
bot = Bot()
bot.send(sys.argv[1])

View File

@ -1,5 +0,0 @@
import os
ANCHOR_PATH = os.path.join(
os.getenv("ANCHOR_PATH", default=os.getcwd()), "anchor_conf.json"
)

View File

@ -1,88 +0,0 @@
import tempfile
import subprocess
import toml
import os
from snapshotter import Snapshotter
import utils
class Containerd(utils.ArtifactProcess):
state_dir = "/run/nydus-test_containerd"
def __init__(self, anchor, snapshotter: Snapshotter) -> None:
self.anchor = anchor
self.containerd_bin = anchor.containerd_bin
self.snapshotter = snapshotter
def gen_config(self):
_, p = utils.run(
[self.containerd_bin, "config", "default"], stdout=subprocess.PIPE
)
out, _ = p.communicate()
config = toml.loads(out.decode())
config["state"] = self.state_dir
self.__address = config["grpc"]["address"] = os.path.join(
self.state_dir, "containerd.sock"
)
config["plugins"]["io.containerd.grpc.v1.cri"]["containerd"][
"snapshotter"
] = "nydus"
config["plugins"]["io.containerd.grpc.v1.cri"]["sandbox_image"] = "google/pause"
config["plugins"]["io.containerd.grpc.v1.cri"]["containerd"][
"disable_snapshot_annotations"
] = False
config["plugins"]["io.containerd.runtime.v1.linux"]["no_shim"] = True
self.__root = tempfile.TemporaryDirectory(
dir=self.anchor.workspace, suffix="root"
)
config["root"] = self.__root.name
config["proxy_plugins"] = {
"nydus": {
"type": "snapshot",
"address": self.snapshotter.sock(),
}
}
self.config = tempfile.NamedTemporaryFile(mode="w", suffix="config.toml")
self.config.write(toml.dumps(config))
self.config.flush()
return self
@property
def root(self):
return self.__root.name
def run(self):
_, self.p = utils.run(
[self.containerd_bin, "--config", self.config.name],
wait=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
@property
def address(self):
return self.__address
def remove_image_sync(self, repo):
cmd = [
"ctr",
"-n",
"k8s.io",
"-a",
self.__address,
"images",
"rm",
repo,
"--sync",
]
ret, out = utils.execute(cmd)
assert ret
def shutdown(self):
self.p.terminate()
self.p.wait()

View File

@ -1,32 +0,0 @@
import utils
import os
import sys
from argparse import ArgumentParser
def collect_coverage(source_dir, target_dir, report):
"""
Example:
grcov ./target/debug/ -s . -t lcov --llvm --branch --ignore-not-existing -o ./target/debug/coverage/
"""
cmd = f"framework/bin/grcov {target_dir} -s {source_dir} -t html --llvm --branch \
--ignore-not-existing -o {report}/coverage_report"
utils.execute(cmd, shell=True)
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--source", help="path to source code", type=str)
parser.add_argument("--target", help="path to build target directory", type=str)
args = parser.parse_args()
source = args.source
target = args.target
report = "."
os.environ["RUSTFLAGS"] = "-Zinstrument-coverage"
collect_coverage(source, target, report)

View File

@ -1,241 +0,0 @@
import yaml
import tempfile
from string import Template
import json
import time
import uuid
import utils
POD_CONF = """
metadata:
attempt: 1
name: nydus-sandbox
namespace: default
uid: ${uid}
log_directory: /tmp
linux:
security_context:
namespace_options:
network: 2
"""
# annotations:
# "io.containerd.osfeature": "nydus.remoteimage.v1"
CONTAINER_CONF = """
metadata:
name: ${container_name}
image:
image: ${image}
log_path: container.1.log
command: ["sh"]
"""
class Cri:
def __init__(self, runtime_endpoint, image_endpoint) -> None:
config = dict()
config["runtime-endpoint"] = f"unix://{runtime_endpoint}"
config["image-endpoint"] = f"unix://{image_endpoint}"
config["timeout"] = 10
config["debug"] = False
self._config = tempfile.NamedTemporaryFile(
mode="w+", suffix="crictl.config", delete=False
)
yaml.dump(config, self._config)
def run_container(
self,
image,
container_name,
):
container_config = tempfile.NamedTemporaryFile(
mode="w+", suffix="container.config.yaml", delete=True
)
pod_config = tempfile.NamedTemporaryFile(
mode="w+", suffix="pod.config.yaml", delete=True
)
print(pod_config.read())
_s = Template(CONTAINER_CONF).substitute(
image=image, container_name=container_name
)
container_config.write(_s)
container_config.flush()
pod_config.write(
Template(POD_CONF).substitute(
uid=uuid.uuid4(),
)
)
pod_config.flush()
ret, _ = utils.execute(
[
"crictl",
"--config",
self._config.name,
"run",
container_config.name,
pod_config.name,
],
print_err=True,
)
assert ret
def stop_rm_container(self, id):
cmd = [
"crictl",
"--config",
self._config.name,
"stop",
id,
]
ret, _ = utils.execute(cmd)
assert ret
cmd = [
"crictl",
"--config",
self._config.name,
"rm",
id,
]
ret, _ = utils.execute(cmd)
assert ret
def list_images(self):
cmd = [
"crictl",
"--config",
self._config.name,
"images",
"--output",
"json",
]
ret, out = utils.execute(cmd)
assert ret
images = json.loads(out)
return images["images"]
def remove_image(self, repo):
images = self.list_images()
for i in images:
# Example:
# {'id': 'sha256:cc6e5af55020252510374deecb0168fc7170b5621e03317cb7c4192949becb9a',
# 'repoTags': ['reg.docker.alibaba-inc.com/chge-nydus-test/busybox:latest_converted'], 'repoDigests': ['reg.docker.alibaba-inc.com/chge-nydus-test/busybox@sha256:07592f0848a6752de1b58f06b8194dbeaff1cb3314ab3225b6ab698abac1185d'], 'size': '998569', 'uid': None, 'username': ''}
if i["repoTags"][0] == repo:
id = i["id"]
cmd = [
"crictl",
"--config",
self._config.name,
"rmi",
id,
]
ret, _ = utils.execute(cmd)
assert ret
return True
assert False
return False
def check_container_status(self, name, timeout):
"""
{
"containers": [
{
"id": "4098985ed96655dbd43eef2d6502197598b72fe40cfec4cb77466aedf755807f",
"podSandboxId": "2ae536d3481130d8a47a05fb6ffeb303cb3d57b29e8744d3ffcbbc27377ece3d",
"metadata": {
"name": "nydus-container",
"attempt": 0
},
"image": {
"image": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql:latest_converted"
},
"imageRef": "sha256:68e06967547192d5eaf406a21ea39b3131f86e9dc8fb8b75e2437a1bde8d0aad",
"state": "CONTAINER_EXITED",
"createdAt": "1610018967168325132",
"labels": {
},
"annotations": {
}
}
]
}
---
{
"status": {
"id": "4098985ed96655dbd43eef2d6502197598b72fe40cfec4cb77466aedf755807f",
"metadata": {
"attempt": 0,
"name": "nydus-container"
},
"state": "CONTAINER_EXITED",
"createdAt": "2021-01-07T19:29:27.168325132+08:00",
"startedAt": "2021-01-07T19:29:28.172706527+08:00",
"finishedAt": "2021-01-07T19:29:32.882263863+08:00",
"exitCode": 0,
"image": {
"image": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql:latest_converted"
},
"imageRef": "reg.docker.alibaba-inc.com/chge-nydus-test/mysql@sha256:ebadc23a8b2cbd468cb86ab5002dc85848e252de71cdc4002481f63a1d3c90be",
"reason": "Completed",
"message": "",
"labels": {},
"annotations": {},
"mounts": [],
"logPath": "/tmp/container.1.log"
},
"""
elapsed = 0
while elapsed <= timeout:
ps_cmd = [
"crictl",
"--config",
self._config.name,
"ps",
"-a",
"--output",
"json",
]
ret, out = utils.execute(
ps_cmd,
print_err=True,
)
assert ret
containers = json.loads(out)
for c in containers["containers"]:
# The container is found, no need to wait any longer
if c["metadata"]["name"] == name:
id = c["id"]
inspect_cmd = [
"crictl",
"--config",
self._config.name,
"inspect",
id,
]
ret, out = utils.execute(inspect_cmd)
assert ret
status = json.loads(out)
if status["status"]["exitCode"] == 0:
return id, True
else:
return None, False
time.sleep(1)
elapsed += 1
return None, False

View File

@ -1,56 +0,0 @@
from linux_command import LinuxCommand
import utils
import subprocess
class DdParam(LinuxCommand):
def __init__(self, command_name):
LinuxCommand.__init__(self, command_name)
self.param_name_prefix = ""
def bs(self, block_size):
return self.set_param("bs", block_size)
def input(self, input_file):
return self.set_param("if", input_file)
def output(self, output_file):
return self.set_param("of", output_file)
def count(self, count):
return self.set_param("count", count)
def iflag(self, iflag):
return self.set_param("iflag", iflag)
def skip(self, len):
return self.set_param("skip", len)
class DD:
"""
dd always tries to copy the entire file.
"""
def __init__(self):
self.dd_params = DdParam("dd")
def create_command(self):
return self.dd_params
def extend_command(self):
return self.dd_params
def __str__(self):
return str(self.dd_params)
def run(self):
ret, _ = utils.run(
str(self),
verbose=False,
wait=True,
shell=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT,
)
return ret

View File

@ -1,313 +0,0 @@
from utils import pushd
import os
from random import randint
import shutil
import logging
import random
import string
from fallocate import fallocate, FALLOC_FL_PUNCH_HOLE, FALLOC_FL_KEEP_SIZE
from utils import Size, Unit
import xattr
"""
Generate and distribute target files (regular, symlink, directory) and link files,
including files with holes (sparse files) and hardlinks.
1. Generate the directory tree structure first.
"""
CHINESE_TABLE = "搀掺蝉馋谗缠铲产阐颤昌猖场尝常长偿肠厂敞畅唱倡超抄钞朝嘲潮巢吵炒车扯撤掣彻澈郴臣辰尘晨忱沉\
愤粪丰封枫蜂峰锋风疯烽逢冯缝讽奉凤佛否夫敷肤孵扶拂辐幅氟符伏俘服浮涪福袱弗甫抚辅俯釜斧脯腑\
楔些歇蝎鞋协挟携邪斜胁谐写械卸蟹懈泄泻谢屑薪芯锌欣辛新忻心信衅星腥猩惺兴刑型形邢行醒幸杏性\
寅饮尹引隐印英樱婴鹰应缨莹萤营荧蝇迎赢盈影颖硬映哟拥佣臃痈庸雍踊蛹咏泳涌永恿勇用幽优悠忧尤\
庥庠庹庵庾庳赓廒廑廛廨廪膺忄忉忖忏怃忮怄忡忤忾怅怆忪忭忸怙怵怦怛怏怍怩怫怊怿怡恸恹恻恺恂恪"
def gb2312(length):
for i in range(0, length):
c = random.choice(CHINESE_TABLE)
yield c.encode("gb2312")
class Distributor:
def __init__(self, top_dir: str, levels: int, max_sub_directories: int):
self.top_dir = top_dir
self.levels = levels
self.max_sub_directories = max_sub_directories
# All files generated by this distributor, whether via `_put_single_file()`
# or `put_multiple_files()`, will be recorded in this list.
self.files = []
self.symlinks = []
self.dirs = []
self.hardlinks = {}
def _relative_path_to_top(self, path: str) -> str:
return os.path.relpath(path, start=self.top_dir)
def _generate_one_level(self, level, cur_dir):
dirs = []
with pushd(cur_dir):
# At least, each level has a child directory
for index in range(0, randint(1, self.max_sub_directories)):
d_name = f"DIR.{level}.{index}"
try:
d = os.mkdir(d_name)
except FileExistsError:
pass
dirs.append(d_name)
if level >= self.levels:
return
for d in dirs:
self._generate_one_level(level + 1, d)
# This is the top-level planted tree.
return dirs
def generate_tree(self):
"""DIR.LEVEL.INDEX"""
dirs = self._generate_one_level(0, self.top_dir)
self.planted_tree_root = dirs[:]
def _random_pos_dir(self):
level = randint(0, self.levels)
with pushd(os.path.join(self.top_dir, random.choice(self.planted_tree_root))):
while level:
files = os.listdir()
level -= 1
files = [f for f in files if os.path.isdir(f)]
if len(files) != 0:
next_level = files[randint(0, len(files) - 1)]
else:
break
os.chdir(next_level)
return os.getcwd()
def put_hardlinks(self, count):
def _create_new_source():
source_file = os.path.join(
self._random_pos_dir(), Distributor.generate_random_name(60)
)
fd = os.open(source_file, os.O_CREAT | os.O_RDWR)
os.write(fd, os.urandom(randint(0, 1024 * 1024 + 7)))
os.close(fd)
return source_file
source_file = _create_new_source()
self.hardlinks[source_file] = []
self.hardlink_aliases = []
for i in range(0, count):
if randint(0, 16) % 4 == 0:
source_file = _create_new_source()
self.hardlinks[source_file] = []
link = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_name(50, suffix="hardlink"),
)
logging.debug(link)
# TODO: `link` may be too long to link, so better to change directory first!
os.link(source_file, link)
self.hardlinks[source_file].append(self._relative_path_to_top(link))
self.hardlink_aliases.append(self._relative_path_to_top(link))
return self.hardlink_aliases[-count:]
def put_symlinks(self, count, chinese=False):
"""
Generate symlinks pointing to regular files or directories.
"""
def _create_new_source():
this_path = ""
if randint(0, 123) % 4 == 0:
self.put_directories(1)
this_path = self.dirs[-1]
del self.dirs[-1]
else:
_, this_path = self._put_single_file(
self._random_pos_dir(),
Size(randint(0, 100), Unit.KB),
chinese=chinese,
)
del self.files[-1]
return this_path
source_file = _create_new_source()
for i in range(0, count):
if randint(0, 12) % 3 == 0:
source_file = _create_new_source()
symlink = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_length_name(20, suffix="symlink"),
)
# XFS limits the symlink target path (stored within the symlink itself) to 1024 bytes.
if len(source_file) >= 1024:
continue
if randint(0, 12) % 5 == 0:
source_file = os.path.relpath(source_file, start=self.top_dir)
try:
os.symlink(source_file, symlink)
except FileExistsError as e:
# Sometimes symlink creation fails because a symlink with the same name already exists.
# This should rarely happen if `generate_random_length_name` is truly random.
logging.exception(e)
continue
if randint(0, 12) % 4 == 0:
try:
if os.path.isdir(source_file):
try:
os.rmdir(source_file)
except Exception:
pass
else:
os.unlink(source_file)
except FileNotFoundError:
pass
# Save symlink relative path so that we can tell which symlinks were put.
self.symlinks.append(self._relative_path_to_top(symlink))
return self.symlinks[-count:]
def put_directories(self, count):
for i in range(0, count):
dst_path = os.path.join(
self._random_pos_dir(),
Distributor.generate_random_name(30, suffix="dir"),
)
# `dst_path` may have a very long name, so it is better to mkdir one component at a time
dst_relpath = os.path.relpath(dst_path, start=self.top_dir)
with pushd(self.top_dir):
for d in dst_relpath.split("/")[0:]:
try:
os.chdir(d)
except FileNotFoundError:
os.mkdir(d)
os.chdir(d)
self.dirs.append(os.path.relpath(dst_path, start=self.top_dir))
return self.dirs[-count:]
@staticmethod
def generate_random_name(length, suffix=None, chinese=False):
if chinese:
result_str = "".join([s.decode("gb2312") for s in gb2312(length)])
else:
letters = string.ascii_letters
result_str = "".join(random.choice(letters) for i in range(length))
if suffix is not None:
result_str += f".{suffix}"
return result_str
@staticmethod
def generate_random_length_name(max_length, suffix=None, chinese=False):
# Shrink the max_length since it has a suffix
# Use max_length - 9 as the minimum length to reduce name conflict.
len = randint((max_length - 9) // 2, max_length - 9)
return Distributor.generate_random_name(len, suffix, chinese)
def _put_single_file(
self,
parent_dir,
file_size: Size,
specified_name=None,
letters=False,
chinese=False,
name_len=32,
):
if specified_name is None:
name = Distributor.generate_random_length_name(
name_len, suffix="regular", chinese=chinese
)
else:
name = specified_name
this_path = os.path.join(parent_dir, name)
with pushd(parent_dir):
if chinese:
fd = os.open(name.encode("gb2312"), os.O_CREAT | os.O_RDWR)
else:
fd = os.open(name.encode("ascii"), os.O_CREAT | os.O_RDWR)
if file_size.B != 0:
left = file_size.B
logging.debug("Putting file %s", this_path)
while left:
length = Size(1, Unit.MB).B if Size(1, Unit.MB).B < left else left
if not letters:
left -= os.write(fd, os.urandom(length))
else:
picked_list = "".join(
random.choices(string.ascii_lowercase[1:4], k=length)
)
left -= os.write(fd, picked_list.encode())
os.close(fd)
self.files.append(self._relative_path_to_top(this_path))
return name, this_path
def put_single_file(self, file_size: Size, pos=None, name=None):
self._put_single_file(
self._random_pos_dir() if pos is None else pos,
file_size,
letters=True,
specified_name=name,
)
return self.files[-1]
def put_single_file_with_xattr(self, file_size: Size, kv, pos=None, name=None):
self._put_single_file(
self._random_pos_dir() if pos is None else pos,
file_size,
letters=True,
specified_name=name,
)
p = os.path.join(self.top_dir, self.files[-1])
xattr.setxattr(p, kv[0].encode(), kv[1].encode())
def put_multiple_files(self, count: int, max_size: Size):
for i in range(0, count):
cur_size = Size.from_B(randint(0, max_size.B))
self._put_single_file(self._random_pos_dir(), cur_size)
return self.files[-count:]
def put_multiple_chinese_files(self, count: int, max_size: Size):
for i in range(0, count):
cur_size = Size.from_B(randint(0, max_size.B))
self._put_single_file(self._random_pos_dir(), cur_size, chinese=True)
return self.files[-count:]
def put_multiple_empty_files(self, count):
for i in range(0, count):
self._put_single_file(self._random_pos_dir(), Size(0, Unit.Byte))
return self.files[-count:]
if __name__ == "__main__":
top_dir = "/mnt/gen_tree"
if os.path.exists(top_dir):
shutil.rmtree(top_dir)
try:
os.makedirs(top_dir, exist_ok=True)
except FileExistsError:
pass
dist = Distributor(top_dir, 2, 5)
dist.generate_tree()
print(dist._random_pos_dir())
dist.put_hardlinks(10)
Distributor.generate_random_name(2000, suffix="sym")
dist._put_single_file(top_dir, Size(100, Unit.MB))
dist.put_multiple_files(1000, Size(4, Unit.KB))

View File

@ -1,17 +0,0 @@
from utils import execute, logging_setup
class Erofs:
def __init__(self) -> None:
pass
def mount(self, fsid, mountpoint):
cmd = f"mount -t erofs -o fsid={fsid} none {mountpoint}"
self.mountpoint = mountpoint
r, _ = execute(cmd, shell=True)
assert r
def umount(self):
cmd = f"umount {self.mountpoint}"
r, _ = execute(cmd, shell=True)
assert r

View File

@ -1,111 +0,0 @@
import datetime
import utils
import json
import os
from types import SimpleNamespace as Namespace
from linux_command import LinuxCommand
class FioParam(LinuxCommand):
def __init__(self, fio, command_name):
LinuxCommand.__init__(self, command_name)
self.fio = fio
self.command_name = command_name
def block_size(self, size):
return self.set_param("blocksize", size)
def direct(self, value: bool = True):
return self.set_param("direct", value)
def size(self, size):
return self.set_param("size", size)
def io_mode(self, io_mode):
return self.set_param("io_mode", io_mode)
def ioengine(self, engine):
return self.set_param("ioengine", engine)
def filename(self, filename):
return self.set_param("filename", filename)
def read_write(self, readwrite):
return self.set_param("readwrite", readwrite)
def iodepth(self, iodepth):
return self.set_param("iodepth", iodepth)
def numjobs(self, jobs):
self.set_flags("group_reporting")
return self.set_param("numjobs", jobs)
class Fio:
def __init__(self):
self.jobs = []
self.base_cmd_params = FioParam(self, "fio")
self.global_cmd_params = FioParam(self, "fio")
def create_command(self, *pattern):
self.global_cmd_params.set_flags("group_reporting")
p = "_".join(pattern)
try:
os.mkdir("benchmark_reports")
except FileExistsError:
pass
self.fio_report_file = os.path.join(
"benchmark_reports",
f'fio_run_{p}_{datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%s")}',
)
self.base_cmd_params.set_param("output-format", "json").set_param(
"output", self.fio_report_file
)
return self.global_cmd_params
def expand_command(self):
return self.global_cmd_params
def __str__(self):
fio_prams = FioParam(self, "fio")
fio_prams.command_param_dict.update(self.base_cmd_params.command_param_dict)
fio_prams.command_param_dict.update(self.global_cmd_params.command_param_dict)
fio_prams.command_flags.extend(self.global_cmd_params.command_flags)
fio_prams.set_param("name", "fio")
command = str(fio_prams)
return command
def run(self):
ret, _ = utils.run(
str(self),
wait=True,
shell=True,
)
assert ret == 0
def get_result(self, title_line, *keys):
with open(self.fio_report_file) as f:
data = json.load(f, object_hook=lambda d: Namespace(**d))
if hasattr(data, "jobs"):
jobs = data.jobs
assert len(jobs) == 1
job = jobs[0]
print("")
result = f"""
{title_line}
block size: {getattr(data, 'global options').bs}
direct: {getattr(data, 'global options').direct}
ioengine: {getattr(data, 'global options').ioengine}
runtime: {job.read.runtime}
iops: {job.read.iops}
bw(KB/S): {job.read.bw}
latency/ms: min:{job.read.lat_ns.min/1e6}, max: {job.read.lat_ns.max/1e6}, mean: {job.read.lat_ns.mean/1e6}
"""
print(result)
return result

View File

@ -1,45 +0,0 @@
class LinuxCommand:
def __init__(self, command_name):
self.command_name = command_name
self.command_param_dict = {}
self.command_flags = []
self.command_name = command_name
self.param_name_prefix = "--"
self.param_separator = " "
self.param_value_prefix = " "
self.param_value_list_separator = ","
self.subcommand = None
def set_subcommand(self, subcommand):
self.subcommand = subcommand
return self
def set_param(self, key, val):
self.command_param_dict[key] = val
return self
def set_flags(self, *new_flag):
for f in new_flag:
self.command_flags.append(f)
return self
def remove_param(self, key):
try:
del self.command_param_dict[key]
except KeyError:
pass
def __str__(self):
if self.subcommand is not None:
command = self.command_name + " " + self.subcommand
else:
command = self.command_name
for key, value in self.command_param_dict.items():
command += (
f"{self.param_separator}{self.param_name_prefix}"
f"{key}{self.param_value_prefix}{value}"
)
for flag in self.command_flags:
command += f"{self.param_separator}{self.param_name_prefix}{flag}"
return command
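For illustration, a minimal usage sketch of the builder above (hypothetical parameter values; the expected output follows from the default "--" prefix and space separators set in __init__):
# Hypothetical usage sketch of the LinuxCommand builder defined above.
cmd = LinuxCommand("fio")
cmd.set_param("ioengine", "psync").set_param("numjobs", 4)
cmd.set_flags("group_reporting")
# __str__ emits the name, then "--key value" pairs, then "--flag" entries:
print(str(cmd))  # fio --ioengine psync --numjobs 4 --group_reporting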

View File

@ -1,338 +0,0 @@
import os
import shutil
from inspect import stack, getframeinfo
from containerd import Containerd
from snapshotter import Snapshotter
import utils
from stat import *
import time
import logging
import sys
import signal
import tempfile
import json
import platform
NYDUSD_BIN = "nydusd"
NYDUS_IMG_BIN = "nydus-image"
from conf import ANCHOR_PATH
class NydusAnchor:
"""
Test environment setup, e.g.:
- location of the test target executables
- path to a directory for data verification by digest comparison
- wrapper for the test IO engine
"""
def __init__(self, path=None):
"""
:rootfs: An alias for bootstrap file.
:verify_dir: Source directory from which to create this test image.
"""
self.machine = platform.machine()
if path is None:
path = ANCHOR_PATH
try:
with open(path, "r") as f:
kwargs = json.load(f)
except FileNotFoundError:
logging.error("Please define your own anchor file! [anchor_conf.json]")
sys.exit(1)
self.workspace = kwargs.pop("workspace", ".")
# Path to be searched for nydus binaries
self.nydus_project = kwargs.pop("nydus_project")
# In case we want to build an image on top of an existing image,
# create an image from this parent rootfs first.
# TODO: Better to specify a different file system so as to have the same inode numbers.
registry_conf = kwargs.pop("registry")
self.registry_url = registry_conf["registry_url"]
self.registry_auth = registry_conf["registry_auth"]
self.registry_namespace = registry_conf["registry_namespace"]
try:
self.backend_proxy_url = registry_conf["backend_proxy_url"]
self.backend_proxy_blobs_dir = registry_conf["backend_proxy_blobs_dir"]
os.makedirs(self.backend_proxy_blobs_dir, exist_ok=True)
except KeyError:
pass
artifacts = kwargs.pop("artifacts")
self.containerd_bin = artifacts["containerd"]
try:
self.ossutil_bin = artifacts["ossutil_bin"]
except KeyError:
self.ossutil_bin = (
"framework/bin/ossutil64.x86"
if self.machine != "aarch64"
else "framework/bin/ossutil64.aarch64"
)
nydus_runtime_conf = kwargs.pop("nydus_runtime_conf")
self.log_level = nydus_runtime_conf["log_level"]
profile = nydus_runtime_conf["profile"]
self.fs_version = kwargs.pop("fs_version", 6)
try:
oss_conf = kwargs.pop("oss")
self.oss_ak_id = oss_conf["ak_id"]
self.oss_ak_secret = oss_conf["ak_secret"]
self.oss_bucket = oss_conf["bucket"]
self.oss_endpoint = oss_conf["endpoint"]
except KeyError:
pass
self.logging_file_path = kwargs.pop("logging_file")
self.logging_file = self.decide_logging_file()
self.dustbin = []
self.tmp_dirs = []
self.localfs_workdir = os.path.join(self.workspace, "localfs_workdir")
self.nydusify_work_dir = os.path.join(self.workspace, "nydusify_work_dir")
# Where to mount this rafs
self.mountpoint = os.path.join(self.workspace, "rafs_mnt")
# From which directory to build rafs image
self.blobcache_dir = os.path.join(self.workspace, "blobcache_dir")
self.overlayfs = os.path.join(self.workspace, "overlayfs_mnt")
self.source_dir = os.path.join(self.workspace, "gen_rootfs")
self.parent_rootfs = os.path.join(self.workspace, "parent_rootfs")
self.fscache_dir = os.path.join(self.workspace, "fscache")
os.makedirs(self.fscache_dir, exist_ok=True)
link_target = kwargs.pop("target")
if link_target == "gnu":
self.binary_release_dir = os.path.join(
self.nydus_project, "target/release"
)
elif link_target == "musl":
arch = platform.machine()
self.binary_release_dir = os.path.join(
self.nydus_project,
f"target/{arch}-unknown-linux-musl",
"release",
)
self.build_dir = os.path.join(self.nydus_project, "target/debug")
self.binary_debug_dir = os.path.join(self.nydus_project, "target/debug")
if profile == "release":
self.binary_dir = self.binary_release_dir
elif profile == "debug":
self.binary_dir = self.binary_debug_dir
else:
sys.exit()
self.nydusd_bin = os.path.join(self.binary_dir, NYDUSD_BIN)
self.image_bin = os.path.join(self.binary_dir, NYDUS_IMG_BIN)
self.nydusify_bin = os.path.join(
self.nydus_project, "contrib", "nydusify", "cmd", "nydusify"
)
self.snapshotter_bin = kwargs.pop(
"snapshotter",
os.path.join(
self.nydus_project,
"contrib",
"nydus-snapshotter",
"bin",
"containerd-nydus-grpc",
),
)
self.images_array = kwargs.pop("images")["images_array"]
try:
shutil.rmtree(self.blobcache_dir)
except FileNotFoundError:
pass
os.makedirs(self.blobcache_dir)
os.makedirs(self.mountpoint, exist_ok=True)
os.makedirs(self.overlayfs, exist_ok=True)
def put_dustbin(self, path):
self.dustbin.append(path)
def cleanup_dustbin(self):
for p in self.dustbin:
if isinstance(p, utils.ArtifactProcess):
p.shutdown()
else:
os.unlink(p)
def check_prerequisites(self):
assert os.path.exists(self.source_dir), "Verification directory not existed!"
assert os.path.exists(self.blobcache_dir), "Blobcache directory not existed!"
assert (
len(os.listdir(self.blobcache_dir)) == 0
), "Blobcache directory not empty!"
assert not os.path.ismount(self.mountpoint), "Mount point was already mounted"
def clear_blobcache(self):
try:
if os.listdir(self.blobcache_dir) == 0:
return
# In some cases, the blob cache dir is temporarily mounted.
if os.path.ismount(self.blobcache_dir):
utils.execute(["umount", self.blobcache_dir])
shutil.rmtree(self.blobcache_dir)
logging.info("Cleared cache %s", self.blobcache_dir)
os.mkdir(self.blobcache_dir)
except Exception as exc:
print(exc)
def prepare_scratch_dir(self):
self.scratch_dir = os.path.join(
self.workspace,
os.path.basename(os.path.normpath(self.source_dir)) + "_scratch",
)
# We don't delete the scratch dir because it helps to analyze problems.
# But once another round of tests begins, there is no need to keep it anymore.
if os.path.exists(self.scratch_dir):
shutil.rmtree(self.scratch_dir)
shutil.copytree(self.source_dir, self.scratch_dir, symlinks=True)
def prepare_scratch_parent_dir(self):
self.scratch_parent_dir = os.path.join(
self.workspace,
os.path.basename(os.path.normpath(self.parent_rootfs)) + "_scratch",
)
# We don't delete the scratch dir because it helps to analyze problems.
# But once another round of tests begins, there is no need to keep it anymore.
if os.path.exists(self.scratch_parent_dir):
shutil.rmtree(self.scratch_parent_dir)
shutil.copytree(self.parent_rootfs, self.scratch_parent_dir, symlinks=True)
@staticmethod
def check_nydusd_health():
pid_list = utils.get_pid(NYDUSD_BIN)
if len(pid_list) == 1:
return True
else:
logging.error("Captured nydusd process %s", pid_list)
return False
@staticmethod
def capture_running_nydusd():
pid_list = utils.get_pid(NYDUSD_BIN)
if len(pid_list) != 0:
logging.info("Captured nydusd process %s", pid_list)
# Kill remaining nydusd so as not to affect subsequent cases.
# utils.kill_all_processes(NYDUSD_BIN, signal.SIGINT)
time.sleep(2)
return True
else:
return False
def mount_overlayfs(self, layers, base=os.getcwd()):
"""
We usually use overlayfs to act as a verifying dir. Some cases may scratch
the original source dir.
:source_dir: A directory acting as a layer of the overlayfs, from which to build the image
:layers: tail item from layers is the bottom layer.
Cited:
```
Multiple lower layers
---------------------
Multiple lower layers can now be given using the colon (":") as a
separator character between the directory names. For example:
mount -t overlay overlay -o lowerdir=/lower1:/lower2:/lower3 /merged
As the example shows, "upperdir=" and "workdir=" may be omitted. In
that case the overlay will be read-only.
The specified lower directories will be stacked beginning from the
rightmost one and going left. In the above example lower1 will be the
top, lower2 the middle and lower3 the bottom layer.
```
"""
handled_layers = [l.replace(":", "\\:") for l in layers]
if len(handled_layers) == 1:
self.sticky_lower_dir = tempfile.TemporaryDirectory(dir=self.workspace)
handled_layers.append(self.sticky_lower_dir.name)
layers_set = ":".join(handled_layers)
with utils.pushd(base):
cmd = [
"mount",
"-t",
"overlay",
"-o",
f"lowerdir={layers_set}",
"rafs_ci_overlay",
self.overlayfs,
]
ret, _ = utils.execute(cmd)
assert ret
def umount_overlayfs(self):
cmd = ["umount", self.overlayfs]
ret, _ = utils.execute(cmd)
assert ret
def decide_logging_file(self):
try:
p = os.environ["LOG_FILE"]
return open(p, "w+")
except KeyError:
if self.logging_file_path == "stdin":
return sys.stdin
elif self.logging_file_path == "stderr":
return sys.stderr
else:
return open(self.logging_file_path, "w+")
def check_fuse_conn(func):
last_conn_id = 0
print("last conn id %d" % last_conn_id)
def wrapped():
nonlocal last_conn_id
conn_id = func()
if last_conn_id != 0:
assert last_conn_id == conn_id
else:
last_conn_id = conn_id
return conn_id
return wrapped
# @check_fuse_conn
def inspect_sys_fuse():
sys_fuse_path = "/sys/fs/fuse/connections"
try:
conns = os.listdir(sys_fuse_path)
frameinfo = getframeinfo(stack()[1][0])
logging.info(
"%d | %d fuse connections: %s" % (frameinfo.lineno, len(conns), conns)
)
conn_id = int(conns[0])
return conn_id
except Exception as exc:
logging.exception(exc)

View File

@ -1,351 +0,0 @@
import logging
import subprocess
import tempfile
import utils
from nydus_anchor import NydusAnchor
import os
import json
import posixpath
from linux_command import LinuxCommand
import shutil
import tarfile
import re
class NydusifyParam(LinuxCommand):
def __init__(self, command_name):
super().__init__(command_name)
self.param_name_prefix = "--"
def source(self, source):
return self.set_param("source", source)
def target(self, target):
return self.set_param("target", target)
def nydus_image(self, nydus_image):
return self.set_param("nydus-image", nydus_image)
def work_dir(self, work_dir):
return self.set_param("work-dir", work_dir)
def fs_version(self, fs_version):
return self.set_param("fs-version", str(fs_version))
class Nydusify(LinuxCommand):
def __init__(self, anchor: NydusAnchor):
self.image_builder = anchor.image_bin
self.nydusify_bin = anchor.nydusify_bin
self.registry_url = anchor.registry_url
self.work_dir = anchor.nydusify_work_dir
self.anchor = anchor
# self.generate_auth_config(self.registry_url, anchor.registry_auth)
# os.environ["DOCKER_CONFIG"] = self.__temp_auths_config_dir.name
super().__init__(self.image_builder)
self.cmd = NydusifyParam(self.nydusify_bin)
self.cmd.nydus_image(self.image_builder).work_dir(self.work_dir)
def convert(self, source, suffix="_converted", target_ref=None, fs_version=5):
"""
A reference to an image looks like registry/namespace/repo:tag.
Before conversion begins, split the reference into those parts.
"""
# Notice: localhost:5000/busybox:latest
self.__repo = posixpath.basename(source).split(":")[0]
self.__converted_image = (
posixpath.basename(source) + suffix if suffix is not None else ""
)
self.__source = source
self.cmd.set_subcommand("convert")
if target_ref is None:
target_ref = posixpath.join(
self.anchor.registry_url,
self.anchor.registry_namespace,
self.__converted_image,
)
self.cmd.source(source).target(target_ref).fs_version(fs_version)
self.target_ref = target_ref
cmd = str(self.cmd)
with utils.timer(
f"### Rafs V{fs_version} Image conversion time including Pull and Push ###"
):
_, p = utils.run(
cmd,
False,
shell=True,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
p.wait()
assert p.returncode == 0
def check(self, source, suffix="_converted", target_ref=None, fs_version=5):
"""
A reference to an image looks like registry/namespace/repo:tag.
Before conversion begins, split the reference into those parts.
"""
# Notice: localhost:5000/busybox:latest
self.__repo = posixpath.basename(source).split(":")[0]
self.__converted_image = (
posixpath.basename(source) + suffix if suffix is not None else ""
)
self.__source = source
self.cmd.set_subcommand("check")
self.cmd.set_param("nydusd", self.anchor.nydusd_bin)
self.cmd.set_param("nydus-image", self.anchor.image_bin)
if target_ref is None:
target_ref = posixpath.join(
self.anchor.registry_url,
self.anchor.registry_namespace,
self.__converted_image,
)
self.cmd.source(source).target(target_ref).fs_version(fs_version)
self.target_ref = target_ref
cmd = str(self.cmd)
with utils.timer("### Image Check Duration ###"):
_, p = utils.run(
cmd,
False,
shell=True,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
p.wait()
assert p.returncode == 0
def docker_v2(self):
self.cmd.set_flags("docker-v2-format")
return self
def force_push(self):
self.cmd.set_flags("backend-force-push")
return self
def platform(self, p):
self.cmd.set_param("platform", p)
return self
def chunk_dict(self, chunk_dict_arg):
self.cmd.set_param("chunk-dict", chunk_dict_arg)
return self
def with_new_work_dir(self, work_dir):
self.work_dir = work_dir
self.cmd.set_param("work-dir", work_dir)
return self
def enable_multiplatfrom(self, enable: bool):
if enable:
self.cmd.set_flags("multi-platform")
return self
def build_cache_ref(self, ref):
self.cmd.set_param("build-cache", ref)
return self
def backend_type(self, type, oss_object_prefix=None, filed=False):
config = {
"endpoint": self.anchor.oss_endpoint,
"access_key_id": self.anchor.oss_ak_id,
"access_key_secret": self.anchor.oss_ak_secret,
"bucket_name": self.anchor.oss_bucket,
}
if oss_object_prefix is not None:
config["object_prefix"] = oss_object_prefix
self.cmd.set_param("backend-type", type)
if filed:
with open("oss_conf.json", "w") as f:
json.dump(config, f)
self.cmd.set_param("backend-config-file", "oss_conf.json")
else:
self.cmd.set_param("backend-config", json.dumps(json.dumps(config)))
return self
def nydus_image_output(self):
with utils.pushd(os.path.join(self.work_dir, "bootstraps")):
outputs = [o for o in os.listdir() if re.match(r".*json$", o) is not None]
outputs.sort(key=lambda x: int(x.split("-")[0]))
with open(outputs[0], "r") as f:
return json.load(f)
@property
def original_repo(self):
return self.__repo
@property
def converted_repo(self):
return posixpath.join(self.anchor.registry_namespace, self.__repo)
@property
def converted_image(self):
return posixpath.join(
self.registry_url, self.anchor.registry_namespace, self.__converted_image
)
def locate_bootstrap(self):
bootstraps_dir = os.path.join(self.work_dir, "bootstraps")
with utils.pushd(bootstraps_dir):
each_layers = os.listdir()
if len(each_layers) == 0:
return None
each_layers = [l.split("-") for l in each_layers]
each_layers.sort(key=lambda x: int(x[0]))
return os.path.join(bootstraps_dir, "-".join(each_layers[-1]))
def generate_auth_config(self, registry_url, auth):
auths = {"auths": {registry_url: {"auth": auth}}}
self.__temp_auths_config_dir = tempfile.TemporaryDirectory()
self.auths_config = os.path.join(
self.__temp_auths_config_dir.name, "config.json"
)
with open(self.auths_config, "w+") as f:
json.dump(auths, f)
f.flush()
def extract_source_layers_names_and_download(self, arch="amd64"):
skopeo = utils.Skopeo()
manifest, digest = skopeo.inspect(self.__source, image_arch=arch)
layers = [l["digest"] for l in manifest["layers"]]
# trimmed_layers = [os.path.join(self.work_dir, self.__source, l) for l in layers]
# trimmed_layers.reverse()
layers.reverse()
skopeo.copy_to_local(
self.__source,
layers,
os.path.join(self.work_dir, self.__source),
resource_digest=digest,
)
return layers, os.path.join(self.work_dir, self.__source)
def extract_converted_layers_names(self, arch="amd64"):
skopeo = utils.Skopeo()
manifest, _ = skopeo.inspect(
self.target_ref,
tls_verify=False,
features="nydus.remoteimage.v1",
image_arch=arch,
)
layers = [l["digest"] for l in manifest["layers"]]
layers.reverse()
return layers
def pull_bootstrap(self, downloaded_dir, bootstrap_name, arch="amd64"):
"""
Nydusify converts an OCI image to nydus format and pushes the nydus image manifest
to the registry, where it belongs to a manifest index.
"""
skopeo = utils.Skopeo()
nydus_manifest, _ = skopeo.inspect(
self.target_ref,
tls_verify=False,
features="nydus.remoteimage.v1",
image_arch=arch,
)
layers = nydus_manifest["layers"]
for l in layers:
if l["mediaType"] == "application/vnd.docker.image.rootfs.diff.tar.gzip":
bootstrap_digest = l["digest"]
import requests
# Currently, we cannot handle auth
# OCI distribution spec: /v2/<name>/blobs/<digest>
os.makedirs(downloaded_dir, exist_ok=True)
reader = requests.get(
f"http://{self.registry_url}/v2/{self.anchor.registry_namespace}/{self.original_repo}/blobs/{bootstrap_digest}",
stream=True,
)
with utils.pushd(downloaded_dir):
with open("image.gzip", "wb") as w:
shutil.copyfileobj(reader.raw, w)
with tarfile.open("image.gzip", "r:gz") as tar_gz:
def is_within_directory(directory, target):
abs_directory = os.path.abspath(directory)
abs_target = os.path.abspath(target)
prefix = os.path.commonprefix([abs_directory, abs_target])
return prefix == abs_directory
def safe_extract(tar, path=".", members=None, *, numeric_owner=False):
for member in tar.getmembers():
member_path = os.path.join(path, member.name)
if not is_within_directory(path, member_path):
raise Exception("Attempted Path Traversal in Tar File")
tar.extractall(path, members, numeric_owner=numeric_owner)
safe_extract(tar_gz)
os.rename("image/image.boot", bootstrap_name)
os.remove("image.gzip")
return os.path.join(downloaded_dir, bootstrap_name)
def pull_config(self, image, arch="amd64"):
"""
Nydusify converts oci to nydus format and push the nydus image manifest to registry,
which belongs to a manifest index.
"""
skopeo = utils.Skopeo()
nydus_manifest, digest = skopeo.inspect(
image, tls_verify=False, image_arch=arch
)
import requests
# Currently, we can not handle auth
# OCI distribution spec: /v2/<name>/manifests/<digest>
reader = requests.get(
f"http://{self.registry_url}/v2/{self.original_repo}/manifests/{digest}",
stream=True,
)
manifest = json.load(reader.raw)
config_digest = manifest["config"]["digest"]
reader = requests.get(
f"http://{self.registry_url}/v2/{self.original_repo}/blobs/{config_digest}",
stream=True,
)
config = json.load(reader.raw)
return config
def find_nydus_image(self, image, arch):
skopeo = utils.Skopeo()
nydus_manifest, digest = skopeo.inspect(
image, tls_verify=False, image_arch=arch, features="nydus.remoteimage.v1"
)
assert nydus_manifest is not None
def get_build_cache_records(self, ref):
skopeo = utils.Skopeo()
build_cache_records, _ = skopeo.inspect(ref, tls_verify=False)
c = json.dumps(build_cache_records, indent=4, sort_keys=False)
logging.info("build cache: %s", c)
records = build_cache_records["layers"]
return records


@ -1,107 +0,0 @@
import tempfile
from string import Template
import logging
import utils
OSS_CONFIG_TEMPLATE = """
[Credentials]
language=EN
endpoint=${endpoint}
accessKeyID=${ak}
accessKeySecret=${ak_secret}
"""
class OssHelper:
def __init__(self, util, endpoint, bucket, ak_id, ak_secret, prefix=None):
oss_conf = tempfile.NamedTemporaryFile(mode="w+", suffix="oss.conf")
items = {
"endpoint": endpoint,
"ak": ak_id,
"ak_secret": ak_secret,
}
template = Template(OSS_CONFIG_TEMPLATE)
_s = template.substitute(**items)
oss_conf.write(_s)
oss_conf.flush()
self.util = util
self.bucket = bucket
self.conf_wrapper = oss_conf
self.conf_file = oss_conf.name
self.prefix = prefix
self.path = (
f"oss://{self.bucket}/{self.prefix}"
if self.prefix is not None
else f"oss://{self.bucket}/"
)
def upload(self, src, dst, force=False):
if not self.stat(dst) or force:
cmd = [
self.util,
"--config-file",
self.conf_file,
"-f",
"cp",
src,
f"{self.path}{dst}",
]
ret, _ = utils.execute(cmd, print_output=True)
assert ret
logging.info("Object %s is uploaded", dst)
def download(self, src, dst):
cmd = [
self.util,
"--config-file",
self.conf_file,
"cp",
"-f",
f"{self.path}{src}",
dst,
]
ret, _ = utils.execute(cmd, print_cmd=True)
if ret:
logging.info("Download %s ", src)
def rm(self, object):
cmd = [
self.util,
"rm",
"--config-file",
self.conf_file,
f"{self.path}{object}",
]
ret, _ = utils.execute(cmd, print_cmd=True, print_output=False)
assert ret
logging.info("Object %s is removed from oss", object)
def stat(self, object):
cmd = [
self.util,
"--config-file",
self.conf_file,
"stat",
f"{self.path}{object}",
]
ret, _ = utils.execute(
cmd, print_cmd=False, print_output=False, print_err=False
)
if ret:
logging.info("Object %s already uploaded", object)
else:
logging.warning(
"Object %s was not uploaded yet",
object,
)
return ret
def list(self):
cmd = [self.util, "--config-file", self.conf_file, "ls", self.path]
ret, out = utils.execute(cmd, print_cmd=True, print_output=True, print_err=True)
print(out)
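# A minimal usage sketch of OssHelper; the ossutil path, endpoint, bucket and
# credentials below are placeholders:
oss = OssHelper(
    "./ossutil",
    "oss-cn-example.aliyuncs.com",
    "nydus-test-bucket",
    "<access-key-id>",
    "<access-key-secret>",
    prefix="ci/",
)
oss.upload("blob.bin", "blobs/blob.bin")  # skipped if the object already exists
oss.download("blobs/blob.bin", "/tmp/blob.bin")
oss.rm("blobs/blob.bin")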


@ -1,816 +0,0 @@
import shutil
import utils
import os
import time
import enum
import posixpath
from linux_command import LinuxCommand
import logging
from types import SimpleNamespace as Namespace
import json
import copy
import hashlib
import contextlib
import subprocess
import tempfile
import pytest
from nydus_anchor import NydusAnchor
from linux_command import LinuxCommand
from utils import Size, Unit
from whiteout import WhiteoutSpec
from oss import OssHelper
from backend_proxy import BackendProxy
class Backend(enum.Enum):
OSS = "oss"
REGISTRY = "registry"
LOCALFS = "localfs"
BACKEND_PROXY = "backend_proxy"
def __str__(self):
return self.value
class Compressor(enum.Enum):
NONE = "none"
LZ4_BLOCK = "lz4_block"
GZIP = "gzip"
ZSTD = "zstd"
def __str__(self):
return self.value
class RafsConf:
"""Generate nydusd working configuration file.
A `registry` backend example:
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "localhost:5000",
"repo": "busybox"
}
},
"mode": "direct",
"digest_validate": false
}
}
"""
def __init__(self, anchor: NydusAnchor, image: "RafsImage" = None):
self.__conf_file_wrapper = tempfile.NamedTemporaryFile(
mode="w+", suffix="rafs.config"
)
self.anchor = anchor
self.rafs_image = image
self._rafs_conf_default = {
"device": {
"backend": {
"type": "oss",
"config": {},
}
},
"mode": os.getenv("PREFERRED_MODE", "direct"),
"iostats_files": False,
"fs_prefetch": {"enable": False},
}
self._device_conf = json.loads(
json.dumps(self._rafs_conf_default), object_hook=lambda d: Namespace(**d)
)
self.device_conf = utils.object_to_dict(copy.deepcopy(self._device_conf))
def path(self):
return self.__conf_file_wrapper.name
def set_rafs_backend(self, backend_type, **kwargs):
b = str(backend_type)
self._configure_rafs("device.backend.type", b)
if backend_type == Backend.REGISTRY:
# Manager like nydus-snapshotter can fill the repo field, so we do nothing here.
if "repo" in kwargs:
self._configure_rafs(
"device.backend.config.repo",
posixpath.join(self.anchor.registry_namespace, kwargs.pop("repo")),
)
self._configure_rafs(
"device.backend.config.scheme",
kwargs["scheme"] if "scheme" in kwargs else "http",
)
self._configure_rafs("device.backend.config.host", self.anchor.registry_url)
self._configure_rafs(
"device.backend.config.auth", self.anchor.registry_auth
)
if backend_type == Backend.OSS:
if "prefix" in kwargs:
self._configure_rafs(
"device.backend.config.object_prefix", kwargs.pop("prefix")
)
self._configure_rafs(
"device.backend.config.endpoint", self.anchor.oss_endpoint
)
self._configure_rafs(
"device.backend.config.access_key_id", self.anchor.oss_ak_id
)
self._configure_rafs(
"device.backend.config.access_key_secret", self.anchor.oss_ak_secret
)
self._configure_rafs(
"device.backend.config.bucket_name", self.anchor.oss_bucket
)
if backend_type == Backend.BACKEND_PROXY:
self._configure_rafs("device.backend.type", "registry")
self._configure_rafs(
"device.backend.config.scheme",
"http",
)
self._configure_rafs("device.backend.config.repo", "nydus")
self._configure_rafs(
"device.backend.config.host", self.anchor.backend_proxy_url
)
if backend_type == Backend.LOCALFS:
if "image" in kwargs:
self._configure_rafs(
"device.backend.config.blob_file", kwargs.pop("image").localfs_backing_blob
)
else:
self._configure_rafs(
"device.backend.config.dir", self.anchor.localfs_workdir
)
return self
def get_rafs_backend(self):
return self._device_conf.device.backend.type
def set_registry_repo(self, repo):
self._configure_rafs("device.backend.config.repo", repo)
def _configure_rafs(self, k: str, v):
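# Assign through a dotted path on the Namespace tree, e.g. k = "device.backend.config.host".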
exec("self._device_conf." + k + "=v")
def enable_files_iostats(self):
self._device_conf.iostats_files = True
return self
def enable_latest_read_files(self):
self._device_conf.latest_read_files = True
return self
def enable_access_pattern(self):
self._device_conf.access_pattern = True
return self
def enable_rafs_blobcache(self, is_compressed=False, work_dir=None):
self._device_conf.device.cache = Namespace(
type="blobcache",
config=Namespace(
work_dir=self.anchor.blobcache_dir if work_dir is None else work_dir
),
compressed=is_compressed,
)
return self
def enable_fs_prefetch(
self,
threads_count=8,
merging_size=128 * 1024,
bandwidth_rate=0,
prefetch_all=False,
):
self._configure_rafs("fs_prefetch.enable", True)
self._configure_rafs("fs_prefetch.threads_count", threads_count)
self._configure_rafs("fs_prefetch.merging_size", merging_size)
self._configure_rafs("fs_prefetch.bandwidth_rate", bandwidth_rate)
self._configure_rafs("fs_prefetch.prefetch_all", prefetch_all)
return self
def enable_validation(self):
if int(self.anchor.fs_version) == 6:
return self
self._configure_rafs("digest_validate", True)
return self
def amplify_io(self, size):
self._configure_rafs("amplify_io", size)
return self
def rafs_mem_mode(self, v):
self._configure_rafs("mode", v)
def enable_xattr(self):
self._configure_rafs("enable_xattr", True)
return self
def dump_rafs_conf(self):
# In case the conf is dumped more than once
if int(self.anchor.fs_version) == 6:
logging.warning("Rafs v6 must enable blobcache")
self.enable_rafs_blobcache()
self.__conf_file_wrapper.truncate(0)
self.__conf_file_wrapper.seek(0)
logging.info("Current rafs metadata mode *%s*", self._rafs_conf_default["mode"])
self.device_conf = utils.object_to_dict(copy.deepcopy(self._device_conf))
json.dump(self.device_conf, self.__conf_file_wrapper)
self.__conf_file_wrapper.flush()
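# A hypothetical usage sketch of RafsConf; `anchor` is assumed to be an
# initialized NydusAnchor:
conf = RafsConf(anchor)
conf.set_rafs_backend(Backend.REGISTRY, repo="busybox", scheme="http")
conf.enable_rafs_blobcache().enable_fs_prefetch(threads_count=4)
conf.dump_rafs_conf()
with open(conf.path()) as f:
    print(f.read())  # the JSON document nydusd will be started with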
class RafsImage(LinuxCommand):
def __init__(
self,
anchor: NydusAnchor,
source,
bootstrap_name=None,
blob_name=None,
compressor=None,
clear_from_oss=True,
):
"""
:rootfs: A plain directory from which to build rafs images (bootstrap and blob).
:bootstrap_name: Name of the generated test-purpose bootstrap file.
:blob_prefix: Generally, a sha256 string follows this prefix.
:opts: Specify extra build options.
:parent_image: Associate a parent image, which will be created ahead of time if necessary.
A rebuilt image tries to reuse block mapping info from the parent image (bootstrap); if
the same block already resides in the parent image, the new blob file will not contain
that block again.
"""
self.__rootfs = source
self.bootstrap_name = (
bootstrap_name
if bootstrap_name is not None
else tempfile.NamedTemporaryFile(suffix="bootstrap").name
)
# The file name of blob file locally.
self.blob_name = (
blob_name
if blob_name is not None
else tempfile.NamedTemporaryFile(suffix="blob").name
)
# blob_id is used to identify blobs residing in OSS and how a IO can access backend.
self.blob_id = None
self.opts = ""
self.test_dir = os.getcwd()
self.anchor = anchor
LinuxCommand.__init__(self, anchor.image_bin)
self.param_value_prefix = " "
self.created = False
self.compressor = compressor
self.clear_from_oss = clear_from_oss
self.backend_type = None
# self.blob_abs_path = tempfile.TemporaryDirectory(
# "blob", dir=self.anchor.workspace
# ).name
self.blob_abs_path = tempfile.NamedTemporaryFile(
prefix="blob", dir=self.anchor.workspace
).name
def rootfs(self):
return self.__rootfs
def _tweak_build_command(self):
"""
Add more options into command line per as different test case configuration.
"""
for key, value in self.command_param_dict.items():
self.opts += (
f"{self.param_separator}{self.param_name_prefix}"
f"{key}{self.param_value_prefix}{value}"
)
for flag in self.command_flags:
self.opts += f"{self.param_separator}{self.param_name_prefix}{flag}"
def set_backend(self, type: Backend, **kwargs):
self.backend_type = type
if type == Backend.LOCALFS:
if not os.path.exists(self.anchor.localfs_workdir):
os.mkdir(self.anchor.localfs_workdir)
self.set_param("blob-dir", self.anchor.localfs_workdir)
return self
elif type == Backend.OSS:
self.set_param("blob", self.blob_abs_path)
prefix = kwargs.pop("prefix", None)
self.oss_helper = OssHelper(
self.anchor.ossutil_bin,
self.anchor.oss_endpoint,
self.anchor.oss_bucket,
self.anchor.oss_ak_id,
self.anchor.oss_ak_secret,
prefix,
)
elif self.backend_type == Backend.BACKEND_PROXY:
self.set_param("blob", self.blob_abs_path)
elif type == Backend.REGISTRY:
# Let nydusify upload blob from the path, which is an intermediate file
self.set_param("blob", self.blob_abs_path)
return self
def create_image(
self,
image_bin=None,
parent_image=None,
clear_from_oss=True,
oss_uploader="util",
compressor=None,
prefetch_policy=None,
prefetch_files="",
from_stargz=False,
fs_version=None,
disable_check=False,
chunk_size=None,
) -> "RafsImage":
"""
:layers: Create an image on top of an existing one.
:oss_uploader: One of ['util', 'builder', 'none']. Let the image builder upload the blob to oss itself or use a third-party oss util.
"""
self.clear_from_oss = clear_from_oss
self.oss_uploader = oss_uploader
self.compressor = compressor
self.parent_image = parent_image
assert oss_uploader in ("util", "builder", "none")
if prefetch_policy is not None:
self.set_param("prefetch-policy", prefetch_policy)
self.set_param("log-level", self.anchor.log_level)
if disable_check:
self.set_flags("disable-check")
if fs_version is not None:
self.set_param("fs-version", fs_version)
else:
self.set_param("fs-version", str(self.anchor.fs_version))
if self.compressor is not None:
self.set_param("compressor", str(self.compressor))
if chunk_size is not None:
self.set_param("chunk-size", str(hex(chunk_size)))
builder_output_json = tempfile.NamedTemporaryFile("w+", suffix="output.json")
self.set_param("output-json", builder_output_json.name)
builder_output_json.flush()
# Support specifying different versions of the nydus image tool.
if image_bin is None:
image_bin = self.anchor.image_bin
# For a layered image test, create the test parent layer first.
# TODO: Perhaps the parent should not be created together, so we can have
# images with different flags and opts.
if self.parent_image is not None:
self.set_param("parent-bootstrap", self.parent_image.bootstrap_name)
if from_stargz:
self.set_param("source-type", "stargz_index")
# Just before building the image, tweak the build parameters.
self._tweak_build_command()
cmd = f"{image_bin} create --bootstrap {self.bootstrap_name} {self.opts} {self.__rootfs}"
with utils.timer("Basic rafs image creation time"):
_, p = utils.run(
cmd,
False,
shell=True,
stdin=subprocess.PIPE,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
if prefetch_policy is not None:
p.communicate(input=prefetch_files)
p.wait()
assert p.returncode == 0
assert os.path.exists(os.path.join(self.test_dir, self.bootstrap_name))
self.created = True
self.blob_id = json.load(builder_output_json)["blobs"][-1]
logging.info("Generated blob id %s", self.blob_id)
self.bootstrap_path = os.path.abspath(self.bootstrap_name)
if self.backend_type == Backend.OSS:
# self.blob_id = self.calc_blob_sha256(self.blob_abs_path)
# nydus-rs image builder can also upload image itself.
if self.oss_uploader == "util":
self.oss_helper.upload(self.blob_abs_path, self.blob_id)
elif self.backend_type == Backend.BACKEND_PROXY:
shutil.copy(
self.blob_abs_path,
os.path.join(self.anchor.backend_proxy_blobs_dir, self.blob_id),
)
elif self.backend_type == Backend.LOCALFS:
self.localfs_backing_blob = os.path.join(self.anchor.localfs_workdir, self.blob_id)
self.anchor.put_dustbin(self.bootstrap_name)
# Only oss has a temporary place to hold blob
try:
self.anchor.put_dustbin(self.blob_abs_path)
except AttributeError:
pass
try:
self.anchor.put_dustbin(self.localfs_backing_blob)
except AttributeError:
pass
if self.oss_uploader == "util":
self.dump_image_summary()
return self
def whiteout_spec(self, spec: WhiteoutSpec):
self.set_param("whiteout-spec", str(spec))
return self
def clean_up(self):
# In case image was not successfully created.
if hasattr(self, "bootstrap_path"):
os.unlink(self.bootstrap_path)
if hasattr(self, "oss_blob_abs_path"):
os.unlink(self.blob_abs_path)
if hasattr(self, "localfs_backing_blob"):
# Backing blob may already be put into dustbin.
try:
os.unlink(self.localfs_backing_blob)
except FileNotFoundError:
pass
try:
os.unlink(self.blob_abs_path)
except FileNotFoundError:
pass
except AttributeError:
# In case that test rootfs is not successfully scratched.
pass
try:
os.unlink(self.parent_blob)
os.unlink(self.parent_bootstrap)
except FileNotFoundError:
pass
except AttributeError:
pass
try:
if self.clear_from_oss and self.backend_type == Backend.OSS:
self.oss_helper.rm(self.blob_id)
except AttributeError:
pass
@staticmethod
def calc_blob_sha256(blob):
"""Example: blob id: sha256:a810724c8b2cc9bd2a6fa66d92ced9b429120017c7cf2ef61dfacdab45fa45ca"""
# We calculate the blob sha256 ourselves.
sha256 = hashlib.sha256()
with open(blob, "rb") as f:
for block in iter(lambda: f.read(4096), b""):
sha256.update(block)
return sha256.hexdigest()
def dump_image_summary(self):
# Disabled for now; remove this early return to log the summary below.
return
logging.info(
f"""Image summary:\t
blob: {self.blob_name}\t
bootstrap: {self.bootstrap_name}\t
blob_sha256: {self.blob_id}\t
rootfs: {self.rootfs()}\t
parent_rootfs: {self.parent_image.rootfs() if self.parent_image is not None else 'Not layered image'}
compressor: {self.compressor}\t
blob_size: {os.stat(self.blob_abs_path).st_size//1024}KB, {os.stat(self.blob_abs_path).st_size}Bytes
"""
)
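# A hypothetical end-to-end build sketch for RafsImage; `anchor` and the
# rootfs path are assumptions:
image = RafsImage(anchor, "/path/to/rootfs")
image.set_backend(Backend.OSS).create_image(compressor=Compressor.ZSTD, fs_version=6)
print(image.blob_id, image.bootstrap_path)
image.clean_up()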
class RafsMountParam(LinuxCommand):
"""
Example:
nydusd --config config.json --bootstrap bs.test --sock \
vhost-user-fs.sock --apisock test_api --log-level trace
"""
def __init__(self, command_name):
LinuxCommand.__init__(self, command_name)
self.param_name_prefix = "--"
def bootstrap(self, bootstrap_file):
return self.set_param("bootstrap", bootstrap_file)
def config(self, config_file):
return self.set_param("config", config_file)
def sock(self, vhost_user_sock):
return self.set_param("sock", vhost_user_sock)
def log_level(self, log_level):
return self.set_param("log-level", log_level)
def mountpoint(self, path):
return self.set_param("mountpoint", path)
class NydusDaemon(utils.ArtifactProcess):
def __init__(
self,
anchor: NydusAnchor,
image: RafsImage,
conf: RafsConf,
with_defaults=True,
bin=None,
mode="fuse",
):
"""Start up nydusd and mount rafs.
:image: If image is `None`, then no `--bootstrap` will be passed to nydusd.
In this case, we have to use API to mount rafs.
"""
anchor.nydusd = self # So pytest has a chance to clean up dirties.
self.anchor = anchor
self.rafs_image = image # Associate with a rafs image to boot up.
self.conf: RafsConf = conf
self.mountpoint = anchor.mountpoint # To which point nydus will mount
self.param_value_prefix = " "
self.params = RafsMountParam(anchor.nydusd_bin if bin is None else bin)
self.params.set_subcommand(mode)
if with_defaults:
self._set_default_mount_param()
def __str__(self):
return str(self.params)
def __call__(self):
return self.params
def _set_default_mount_param(self):
# Set default part
self.apisock("api_sock").log_level(self.anchor.log_level)
if self.conf is not None:
self.params.mountpoint(self.mountpoint).config(self.conf.path())
if self.rafs_image is not None:
self.params.bootstrap(self.rafs_image.bootstrap_path)
def _wait_for_mount(self, test_fn=os.path.ismount):
elapsed = 0
while elapsed < 300:
if test_fn(self.mountpoint):
return True
if self.p.poll() is not None:
pytest.fail("file system process terminated prematurely")
elapsed += 1
time.sleep(0.01)
pytest.fail("mountpoint failed to come up")
def thread_num(self, num):
self.params.set_param("thread-num", str(num))
return self
def fscache_thread_num(self, num):
self.params.set_param("fscache-threads", str(num))
return self
def set_fscache(self):
self.params.set_param("fscache", self.anchor.fscache_dir)
return self
def log_level(self, level):
self.params.log_level(level)
return self
def prefetch_files(self, file_path: str):
self.params.set_param("prefetch-files", file_path)
return self
def shared_dir(self, shared_dir):
self.params.set_param("shared-dir", shared_dir)
return self
def set_mountpoint(self, mp):
self.params.set_param("mountpoint", mp)
self.mountpoint = mp
return self
def supervisor(self, path):
self.params.set_param("supervisor", path)
return self
def id(self, daemon_id):
self.params.set_param("id", daemon_id)
return self
def upgrade(self):
self.params.set_flags("upgrade")
return self
def failover_policy(self, p):
self.params.set_param("failover-policy", p)
return self
def apisock(self, apisock):
self.params.set_param("apisock", apisock)
self.__apisock = apisock
self.anchor.put_dustbin(apisock)
return self
def get_apisock(self):
return self.__apisock
def bootstrap(self, b):
self.params.set_param("bootstrap", b)
return self
def mount(self, limited_mem=False, wait_mount=True, dump_config=True):
"""
:limited_mem: A `Size` limit (in KB) on nydusd's virtual memory usage, used
to inject memory-pressure faults.
"""
cmd = str(self).split()
shell = False
self.anchor.checker_sock = self.get_apisock()
if dump_config and self.conf is not None:
self.conf.dump_rafs_conf()
if isinstance(limited_mem, Size):
# `ulimit` is a shell builtin, so switch to a single shell command line
# instead of an argv list when limiting virtual memory.
limit_kb = limited_mem.B // Size(1, Unit.KB).B
cmd = f"ulimit -v {limit_kb}; {str(self)}"
shell = True
_, p = utils.run(
cmd,
False,
shell=shell,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
self.p = p
if wait_mount:
self._wait_for_mount()
return self
def start(self):
cmd = str(self).split()
_, p = utils.run(
cmd,
False,
shell=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
self.p = p
return self
def wait_mount(self):
self._wait_for_mount()
@contextlib.contextmanager
def automatic_mount_umount(self):
self.mount()
yield
self.umount()
def umount(self):
"""
Umount is sometimes invoked during teardown when the daemon may already be dead, so a lazy umount is used to keep it from failing.
"""
self._catcher_dead = True
ret, _ = utils.execute(["umount", "-l", self.mountpoint], print_output=True)
assert ret
# self.p.wait()
# assert self.p.returncode == 0
def is_mounted(self):
def _custom(mountpoint):
# Fallback check that scans /proc/mounts directly.
_, output = utils.execute(
["cat", "/proc/mounts"], print_output=False, print_cmd=False
)
return any(mountpoint in m for m in output.split("\n"))
check_fn = os.path.ismount
return check_fn(self.mountpoint)
def shutdown(self):
if self.is_mounted():
self.umount()
logging.error("shutting down nydusd")
self.p.terminate()
self.p.wait()
assert self.p.returncode == 0
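# A hypothetical mount/umount sketch for NydusDaemon, reusing the anchor,
# image and conf objects assumed above:
daemon = NydusDaemon(anchor, image, conf)
daemon.thread_num(4).log_level("info")
with daemon.automatic_mount_umount():
    os.listdir(daemon.mountpoint)  # rafs content is visible here
daemon.shutdown()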
BLOB_CONF_TEMPLATE = """
{
"type": "bootstrap",
"id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"domain_id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"config": {
"id": "5a74e7f26a2970c36ffd8963a278ea11e1fd752705a13c2ec0cb20b40e2a6699",
"backend_type": "registry",
"backend_config": {
"readahead": false,
"host": "hub.byted.org",
"repo": "gechangwei/java",
"auth": "",
"scheme": "http",
"proxy": {
"fallback": false
}
},
"cache_type": "fscache",
"cache_config": {
"work_dir": "/var/lib/containerd-nydus-grpc/snapshots/3754/fs"
},
"metadata_path": "/var/lib/containerd-nydus-grpc/snapshots/3754/fs/image/image.boot"
},
"fs_prefetch": {
"enable": false,
"prefetch_all": false,
"threads_count": 0,
"merging_size": 0,
"bandwidth_rate": 0
}
}
"""
class BlobEntryConf:
def __init__(self, anchor) -> None:
self.conf_base = json.loads(
BLOB_CONF_TEMPLATE, object_hook=lambda x: Namespace(**x)
)
self.anchor = anchor
self.conf_base.config.cache_config.work_dir = self.anchor.blobcache_dir
def set_type(self, t):
self.conf_base.type = t
return self
def set_repo(self, repo):
self.conf_base.config.repo = repo
return self
def set_metadata_path(self, path):
self.conf_base.config.metadata_path = path
return self
def set_fsid(self, fsid):
self.conf_base.id = fsid
self.conf_base.domain_id = fsid
self.conf_base.config.id = fsid
return self
def set_backend(self):
self.conf_base.config.backend_config.host = self.anchor.backend_proxy_url
self.conf_base.config.backend_config.repo = "nydus"
return self
def set_prefetch(self, threads_cnt=4):
self.conf_base.fs_prefetch.enable = True
self.conf_base.fs_prefetch.prefetch_all = True
self.conf_base.fs_prefetch.threads_count = threads_cnt
return self
def dumps(self):
return json.dumps(self.conf_base, default=vars)
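# A sketch of building a blob entry for fscache mode; `anchor`, the fsid and
# the metadata path below are placeholders:
entry = (
    BlobEntryConf(anchor)
    .set_fsid("0" * 64)
    .set_backend()
    .set_metadata_path("/var/lib/nydus/image.boot")
    .set_prefetch(threads_cnt=2)
)
print(entry.dumps())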


@ -1,59 +0,0 @@
import os
import tempfile
import utils
class Snapshotter(utils.ArtifactProcess):
def __init__(self, anchor: "NydusAnchor") -> None:
self.anchor = anchor
self.snapshotter_bin = anchor.snapshotter_bin
self.__sock = tempfile.NamedTemporaryFile(suffix="snapshotter.sock")
self.flags = []
def sock(self):
return self.__sock.name
def set_root(self, dir):
self.root = os.path.join(dir, "io.containerd.snapshotter.v1.nydus")
def cache_dir(self):
return os.path.join(self.root, "cache")
def run(self, rafs_conf: os.PathLike):
cmd = [
self.snapshotter_bin,
"--nydusd-path",
self.anchor.nydusd_bin,
"--config-path",
rafs_conf,
"--root",
self.root,
"--address",
self.__sock.name,
"--log-level",
"info",
"--log-to-stdout",
]
cmd = cmd + self.flags
ret, self.p = utils.run(
cmd,
wait=False,
shell=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
def shared_mount(self):
self.flags.append("--shared-daemon")
return self
def enable_nydus_overlayfs(self):
self.flags.append("--enable-nydus-overlayfs")
return self
def shutdown(self):
self.p.terminate()
self.p.wait()
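# A hypothetical launch sketch for the snapshotter wrapper; the root and
# config paths are assumptions:
snapshotter = Snapshotter(anchor)
snapshotter.set_root("/var/lib/containerd")
snapshotter.shared_mount().run("/etc/nydus/rafs.json")
# ... point containerd at snapshotter.sock() and exercise it ...
snapshotter.shutdown()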


@ -1,82 +0,0 @@
import socket
import array
import os
import struct
from multiprocessing import Process
import threading
import time
class RafsSupervisor:
def __init__(self, watcher_socket_name, conn_id):
self.watcher_socket_name = watcher_socket_name
self.conn_id = conn_id
@classmethod
def recv_fds(cls, sock, msglen, maxfds):
"""Function from https://docs.python.org/3/library/socket.html#socket.socket.recvmsg"""
fds = array.array("i") # Array of ints
msg, ancdata, flags, addr = sock.recvmsg(
msglen, socket.CMSG_LEN(maxfds * fds.itemsize)
)
for cmsg_level, cmsg_type, cmsg_data in ancdata:
if cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS:
# Append data, ignoring any truncated integers at the end.
fds.frombytes(
cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]
)
return msg, list(fds)
@classmethod
def send_fds(cls, sock, msg, fds):
"""Function from https://docs.python.org/3/library/socket.html#socket.socket.sendmsg"""
return sock.sendmsg(
[msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", fds))]
)
def wait_recv_fd(self, event):
try:
os.unlink(self.watcher_socket_name)
except FileNotFoundError:
pass
sock = socket.socket(family=socket.AF_UNIX)
sock.bind(self.watcher_socket_name)
event.set()
sock.listen()
client, _ = sock.accept()
msg, fds = self.recv_fds(client, 100000, 1)
self.fds = fds
self.opaque = msg
client.close()
def wait_send_fd(self):
try:
os.unlink(self.watcher_socket_name)
except FileNotFoundError:
pass
sock = socket.socket(family=socket.AF_UNIX)
sock.bind(self.watcher_socket_name)
sock.listen()
client, _ = sock.accept()
msg = self.opaque
RafsSupervisor.send_fds(client, msg, self.fds)
client.close()
def send_fd(self):
t = threading.Thread(target=self.wait_send_fd)
t.start()
def recv_fd(self):
event = threading.Event()
t = threading.Thread(target=self.wait_recv_fd, args=(event,))
t.start()
return event
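# A self-contained sketch of the SCM_RIGHTS round trip these helpers
# implement, using a socketpair in place of the named unix socket:
import os
import socket

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
r, w = os.pipe()
RafsSupervisor.send_fds(a, b"opaque-state", [r])  # pass the pipe's read end
msg, fds = RafsSupervisor.recv_fds(b, 100, 1)
os.write(w, b"ping")
assert msg == b"opaque-state" and os.read(fds[0], 4) == b"ping"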

File diff suppressed because it is too large


@ -1,659 +0,0 @@
import posixpath
import subprocess
import logging
import sys
import os
import signal
from typing import Tuple
import io
import string
import random
try:
import psutil
except ModuleNotFoundError:
pass
import contextlib
import math
import enum
import datetime
import re
import random
import json
import tarfile
import pprint
import stat
import platform
def logging_setup(logging_stream=sys.stderr):
"""Inspired from Kadalu project"""
root = logging.getLogger()
if root.hasHandlers():
return
verbose = os.environ.get("NYDUS_TEST_VERBOSE") == "YES"
# Errors should also be printed to screen.
handler = logging.StreamHandler(logging_stream)
if verbose:
root.setLevel(logging.DEBUG)
handler.setLevel(logging.DEBUG)
else:
root.setLevel(logging.INFO)
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
"[%(asctime)s] %(levelname)s "
"[%(module)s - %(lineno)s:%(funcName)s] "
"- %(message)s"
)
handler.setFormatter(formatter)
root.addHandler(handler)
def execute(cmd, **kwargs):
shell = kwargs.pop("shell", False)
print_output = kwargs.pop("print_output", False)
print_cmd = kwargs.pop("print_cmd", True)
print_err = kwargs.pop("print_err", True)
if print_cmd:
logging.info("Executing command: %s" % cmd)
try:
output = subprocess.check_output(
cmd, shell=shell, stderr=subprocess.STDOUT, **kwargs
)
output = output.decode("utf-8")
if print_output:
logging.info("%s" % output)
except subprocess.CalledProcessError as exc:
o = exc.output.decode() if exc.output is not None else ""
if print_err:
logging.error(
"Command: %s\nReturn code: %d\nError output:\n%s"
% (cmd, exc.returncode, o)
)
return False, o
return True, output
def run(cmd, wait: bool = True, verbose=True, **kwargs):
if verbose:
logging.info(cmd)
else:
logging.debug(cmd)
popen_obj = subprocess.Popen(cmd, **kwargs)
if wait:
popen_obj.wait()
return popen_obj.returncode, popen_obj
def kill_all_processes(program_name, sig=signal.SIGKILL):
ret, out = execute(["pidof", program_name])
if not ret:
logging.warning("No %s running" % program_name)
return
processes = out.replace("\n", "").split(" ")
for pid in processes:
try:
logging.info("Killing process %d" % int(pid))
os.kill(int(pid), sig)
except Exception as exc:
logging.exception(exc)
def get_pid(proc_name: str) -> list:
proc_list = []
for proc in psutil.process_iter():
try:
if proc_name.lower() in proc.name().lower():
proc_list.append((proc.pid, proc.name()))
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
pass
return proc_list
def read_images_array(p) -> list:
with open(p) as f:
images = [i.rstrip("\n") for i in f.readlines() if not i.startswith("#")]
return images
@contextlib.contextmanager
def pushd(new_path: str):
previous_dir = os.getcwd()
os.chdir(new_path)
try:
yield
finally:
os.chdir(previous_dir)
def round_up(n, decimals=0):
return int(math.ceil(n / float(decimals))) * decimals
def get_current_time():
return datetime.datetime.now()
def delta_time(t_end, t_start):
delta = t_end - t_start
return delta.total_seconds(), delta.microseconds
@contextlib.contextmanager
def timer(slogan):
start = get_current_time()
try:
yield
finally:
end = get_current_time()
sec, usec = delta_time(end, start)
logging.info("%s, Takes time %u.%u seconds", slogan, sec, usec // 1000)
class Unit(enum.Enum):
Byte = 1
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
Blocks512 = 512
Blocks4096 = 4096
def get_value(self):
return self.value
class Size:
_KiB = 1024
_MiB = _KiB * 1024
_GiB = _MiB * 1024
_TiB = _GiB * 1024
_SECTOR_SIZE = 512
def __init__(self, value: int, unit: Unit = Unit.Byte):
self.bytes = value * unit.get_value()
def __index__(self):
return self.bytes
@classmethod
def from_B(cls, value):
return cls(value)
@classmethod
def from_KiB(cls, value):
return cls(value * cls._KiB)
@classmethod
def from_MiB(cls, value):
return cls(value * cls._MiB)
@classmethod
def from_GiB(cls, value):
return cls(value * cls._GiB)
@classmethod
def from_TiB(cls, value):
return cls(value * cls._TiB)
@classmethod
def from_sector(cls, value):
return cls(value * cls._SECTOR_SIZE)
@property
def B(self):
return self.bytes
@property
def KiB(self):
return self.bytes // self._KiB
@property
def MiB(self):
return self.bytes // self._MiB
@property
def GiB(self):
return self.bytes // self._GiB
@property
def TiB(self):
return self.bytes / self._TiB
@property
def sectors(self):
return self.bytes // self._SECTOR_SIZE
def __str__(self):
if self.bytes < self._KiB:
return "{}B".format(self.B)
elif self.bytes < self._MiB:
return "{}K".format(self.KiB)
elif self.bytes < self._GiB:
return "{}M".format(self.MiB)
elif self.bytes < self._TiB:
return "{}G".format(self.GiB)
else:
return "{}T".format(self.TiB)
def dump_process_mem_cpu_load(pid):
"""
https://psutil.readthedocs.io/en/latest/
"""
p = psutil.Process(pid)
mem_i = p.memory_info()
logging.info(
"[SYS LOAD]: RSS: %u(%u MB) VMS: %u(%u MB) DIRTY: %u | CPU num: %u, Usage: %f"
% (
mem_i.rss,
mem_i.rss / 1024 // 1024,
mem_i.vms,
mem_i.vms / 1024 // 1024,
mem_i.dirty,
p.cpu_num(),
p.cpu_percent(0.5),
)
)
def file_disk_usage(path):
s = os.stat(path).st_blocks * 512
return s
def list_object_to_dict(lst):
return_list = []
for l in lst:
return_list.append(object_to_dict(l))
return return_list
def object_to_dict(object):
if hasattr(object, "__dict__"):
dict = vars(object)
else:
return object
for k, v in dict.items():
if type(v).__name__ not in ["list", "dict", "str", "int", "float", "bool"]:
dict[k] = object_to_dict(v)
if type(v) is list:
dict[k] = list_object_to_dict(v)
return dict
def get_fs_type(path):
partitions = psutil.disk_partitions()
partitions.sort(reverse=True)
for part in partitions:
if path.startswith(part.mountpoint):
return part.fstype
def mess_file(path):
file_size = os.path.getsize(path)
offset = random.randint(0, file_size)
fd = os.open(path, os.O_WRONLY)
os.pwrite(fd, os.urandom(1000), offset)
os.close(fd)
# based on https://stackoverflow.com/a/42865957/2002471
units = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}
def parse_size(size):
size = size.upper()
if " " not in size:
size = re.sub(r"([KMGT]?B)", r" \1", size)
number, unit = [s.strip() for s in size.split()]
return int(float(number) * units[unit])
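# For example:
assert parse_size("10MB") == 10 * 1024 ** 2
assert parse_size("1.5 GB") == int(1.5 * 1024 ** 3)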
def clean_pagecache():
# Shell redirection needs shell=True; with an argv list, ">" would be passed
# to echo as a literal argument instead of redirecting.
execute("echo 3 > /proc/sys/vm/drop_caches", shell=True)
def pretty_print(*args, **kwargs):
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(*args, **kwargs)
def is_regular(path):
mode = os.stat(path)[stat.ST_MODE]
return stat.S_ISREG(mode)
class ArtifactProcess:
def __init__(self) -> None:
super().__init__()
def shutdown(self):
pass
import gzip
def is_gzip(path):
"""
Reading raises gzip.BadGzipFile when the file is not gzip-compressed.
"""
with gzip.open(path, "r") as fh:
try:
fh.read(1)
except Exception:
return False
return True
class Skopeo:
def __init__(self) -> None:
super().__init__()
self.bin = os.path.join(
"framework",
"bin",
"skopeo" if platform.machine() == "x86_64" else "skopeo.aarch64",
)
@staticmethod
def repo_from_image_ref(image):
repo = posixpath.basename(image).split(":")[0]
registry = posixpath.dirname(image)
return posixpath.join(registry, repo)
def inspect(
self, image, tls_verify=False, image_arch="amd64", features=None, verifier=None
):
"""
{
"manifests": [
{
"digest": "sha256:0415f56ccc05526f2af5a7ae8654baec97d4a614f24736e8eef41a4591f08019",
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"platform": {
"architecture": "amd64",
"os": "linux"
},
"size": 527
},
<snipped>
---
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1457,
"digest": "sha256:b97242f89c8a29d13aea12843a08441a4bbfc33528f55b60366c1d8f6923d0d4"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 764663,
"digest": "sha256:e5d9363303ddee1686b203170d78283404e46a742d4c62ac251aae5acbda8df8"
}
]
}
<snipped>
---
Example to fetch manifest by its hash
skopeo inspect --raw docker://docker.io/busybox@sha256:0415f56ccc05526f2af5a7ae8654baec97d4a614f24736e8eef41a4591f08019
"""
cmd = [self.bin, "inspect", "--raw", f"docker://{image}"]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
m = json.loads(out)
# manifest = None
digest = None
if m["mediaType"] == "application/vnd.docker.distribution.manifest.v2+json":
manifest = m
elif (
m["mediaType"]
== "application/vnd.docker.distribution.manifest.list.v2+json"
):
for mf in m["manifests"]:
# Choose the corresponding platform
if (
mf["platform"]["architecture"] == image_arch
and mf["platform"]["os"] == "linux"
):
if features is not None:
if "os.features" not in mf["platform"]:
continue
elif mf["platform"]["os.features"][0] != features:
logging.error("cccc %s", mf["platform"]["os.features"][0])
continue
digest = mf["digest"]
repo = Skopeo.repo_from_image_ref(image)
cmd = [
self.bin,
"inspect",
"--raw",
f"docker://{repo}@{digest}",
]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
assert p.returncode == 0
manifest = json.loads(out)
break
else:
assert False
assert isinstance(manifest, dict)
return manifest, digest
def copy_to_local(
self, image, layers, extracted_dir, tls_verify=False, resource_digest=None
):
"""
:layers: digests of the layers to decompress after download
"""
os.makedirs(extracted_dir, exist_ok=True)
if resource_digest is not None:
repo = Skopeo.repo_from_image_ref(image)
cmd = [
self.bin,
"--insecure-policy",
"copy",
f"docker://{repo}@{resource_digest}",
f"dir:{extracted_dir}",
]
else:
cmd = [
self.bin,
"copy",
"--insecure-policy",
f"docker://{image}",
f"dir:{extracted_dir}",
]
if not tls_verify:
cmd.insert(1, "--tls-verify=false")
ret, p = run(
cmd,
wait=True,
shell=False,
stdout=subprocess.PIPE,
)
assert ret == 0
if layers is not None:
with pushd(extracted_dir):
for i in layers:
# Downloaded blob layers are named without the "sha256:" prefix
try:
layer = i.replace("sha256:", "")
os.makedirs(i, exist_ok=True)
with tarfile.open(
layer, "r:gz" if is_gzip(layer) else "r:"
) as tar_gz:
tar_gz.extractall(path=i)
except FileNotFoundError:
logging.warning("Layer %s not found locally, skipping extraction", layer)
def copy_all_to_registry(self, source_image_tagged, dest_image_tagged):
cmd = [
self.bin,
"--insecure-policy",
"copy",
"--all",
"--tls-verify=false",
f"docker://{source_image_tagged}",
f"docker://{dest_image_tagged}",
]
ret, p = run(
cmd,
wait=True,
shell=False,
stdout=subprocess.PIPE,
)
assert ret == 0
def manifest_list(self, image, tls_verify=False):
cmd = [self.bin, "inspect", "--raw", f"docker://{image}"]
if not tls_verify:
cmd.insert(2, "--tls-verify=false")
ret, p = run(
cmd,
wait=False,
shell=False,
stdout=subprocess.PIPE,
)
out, _ = p.communicate()
p.wait()
m = json.loads(out)
if m["mediaType"] == "application/vnd.docker.distribution.manifest.v2+json":
return None
elif (
m["mediaType"]
== "application/vnd.docker.distribution.manifest.list.v2+json"
):
return m
def pretty_print(artifact: dict):
a = json.dumps(artifact, indent=4)
print(a)
def write_tar_gz(source, tar_gz):
def f(ti):
ti.name = os.path.relpath(ti.name, start=source)
return ti
with tarfile.open(tar_gz, "w:gz") as t:
t.add(source, arcname="")
def parse_stargz(stargz):
"""
The footer MUST be the following 51 bytes (1 byte = 8 bits in gzip).
Footer format:
- 10 bytes gzip header
- 2 bytes XLEN (length of Extra field) = 26 (4 bytes header + 16 hex digits + len("STARGZ"))
- 2 bytes Extra: SI1 = 'S', SI2 = 'G'
- 2 bytes Extra: LEN = 22 (16 hex digits + len("STARGZ"))
- 22 bytes Extra: subfield = fmt.Sprintf("%016xSTARGZ", offsetOfTOC)
- 5 bytes flate header: BFINAL = 1(last block), BTYPE = 0(non-compressed block), LEN = 0
- 8 bytes gzip footer
(End of eStargz)
"""
f = open(stargz, "rb")
f.seek(-51, 2)
footer = f.read(51)
assert len(footer) == 51
header_extra = footer[16:]
toc_offset = header_extra[0:16]
toc_offset = int(toc_offset.decode("utf-8"), base=16)
f.seek(toc_offset)
toc_gzip = f.read(toc_offset - 51)
toc_tar = gzip.decompress(toc_gzip)
t = io.BytesIO(toc_tar)
with tarfile.open(fileobj=t, mode="r") as tf:
def is_within_directory(directory, target):
abs_directory = os.path.abspath(directory)
abs_target = os.path.abspath(target)
prefix = os.path.commonprefix([abs_directory, abs_target])
return prefix == abs_directory
def safe_extract(tar, path=".", members=None, *, numeric_owner=False):
for member in tar.getmembers():
member_path = os.path.join(path, member.name)
if not is_within_directory(path, member_path):
raise Exception("Attempted Path Traversal in Tar File")
tar.extractall(path, members, numeric_owner=numeric_owner)
safe_extract(tf)
f.close()
return "stargz.index.json"
def docker_image_repo(reference):
return posixpath.basename(reference).split(":")[0]
def random_string(l=64):
res = "".join(random.choices(string.ascii_uppercase + string.digits, k=l))
return res
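# Hypothetical usage of the Skopeo wrapper above, assuming a registry is
# reachable at localhost:5000 and hosts a busybox image:
skopeo = Skopeo()
manifest, digest = skopeo.inspect("localhost:5000/busybox:latest")
layers = [l["digest"] for l in manifest["layers"]]
skopeo.copy_to_local(
    "localhost:5000/busybox:latest", layers, "/tmp/busybox", resource_digest=digest
)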


@ -1,208 +0,0 @@
from abc import ABCMeta, abstractmethod
from distributor import Distributor
from utils import Size, Unit, pushd
import xattr
import os
import utils
from workload_gen import WorkloadGen
"""
Scratch a target directory.
Verify the image according to each verifier's schema.
"""
class Verifier:
__metaclass__ = ABCMeta
def __init__(self, target, dist: Distributor):
self.target = target
self.dist = dist
@abstractmethod
def scratch(self):
pass
@abstractmethod
def verify(self):
pass
class XattrVerifier(Verifier):
def __init__(self, target, dist: Distributor):
super().__init__(target, dist)
def scratch(self, scratch_dir):
"""Put various kinds of xattr value into.
1. Very long value
2. a common short value
3. Nothing resides in value field
4. Single file, multiple pairs.
5. /n
6. whitespace
7. 中文
8. Binary
9. Only key?
"""
self.dist.put_symlinks(100)
files_cnt = 20
self.dist.put_multiple_files(files_cnt, Size(9, Unit.KB))
self.scratch_dir = os.path.abspath(scratch_dir)
self.source_files = {}
self.source_xattrs = {}
self.source_dirs = {}
self.source_dirs_xattrs = {}
self.encoding = "gb2312"
self.xattr_pairs = 50 if utils.get_fs_type(os.getcwd()) == "xfs" else 20
# TODO: Only key without values?
with pushd(self.scratch_dir):
for f in self.dist.files[-files_cnt:]:
relative_path = os.path.relpath(f, start=self.scratch_dir)
self.source_xattrs[relative_path] = {}
for idx in range(0, self.xattr_pairs):
# TODO: randomize this key
k = f"trusted.nydus.{Distributor.generate_random_name(20, chinese=True)}"
v = f"_{Distributor.generate_random_length_name(20, chinese=True)}"
xattr.setxattr(f, k.encode(self.encoding), v.encode(self.encoding))
# Use relative or canonicalized names as key to locate
# path in source rootfs directory. So we verify if image is
# packed correctly.
self.source_files[relative_path] = os.path.abspath(f)
self.source_xattrs[relative_path][k] = v
dir_cnt = 20
self.dist.put_directories(dir_cnt)
# Add xattr key-value pairs to directories.
with pushd(self.scratch_dir):
for d in self.dist.dirs[-dir_cnt:]:
relative_path = os.path.relpath(d, start=self.scratch_dir)
self.source_dirs_xattrs[relative_path] = {}
for idx in range(0, self.xattr_pairs):
# TODO: randomize this key
k = f"trusted.{Distributor.generate_random_name(20)}"
v = f"{Distributor.generate_random_length_name(50)}"
xattr.setxattr(d, k, v.encode())
# Use relative or canonicalized names as key to locate
# path in source rootfs directory. So we verify if image is
# packed correctly.
self.source_dirs[relative_path] = os.path.abspath(d)
self.source_dirs_xattrs[relative_path][k] = v
def verify(self, target_dir):
""""""
with pushd(target_dir):
for f in self.source_files.keys():
fp = os.path.join(target_dir, f)
attrs = os.listxattr(path=fp, follow_symlinks=False)
assert len(attrs) == self.xattr_pairs
for k in self.source_xattrs[f].keys():
v = os.getxattr(fp, k.encode(self.encoding)).decode(self.encoding)
assert v == self.source_xattrs[f][k]
attrs = os.listxattr(fp, follow_symlinks=False)
if self.encoding != "gb2312":
for attr in attrs:
v = xattr.getxattr(f, attr)
assert attr in self.source_xattrs[f].keys()
assert v.decode(self.encoding) == self.source_xattrs[f][attr]
with pushd(target_dir):
for d in self.source_dirs.keys():
dp = os.path.join(target_dir, d)
attrs = xattr.listxattr(dp)
assert len(attrs) == self.xattr_pairs
for attr in attrs:
v = xattr.getxattr(d, attr)
assert attr in self.source_dirs_xattrs[d].keys()
assert v.decode(self.encoding) == self.source_dirs_xattrs[d][attr]
class SymlinkVerifier(Verifier):
def __init__(self, target, dist: Distributor):
super().__init__(target, dist)
def scratch(self):
# TODO: directory symlinks?
self.dist.put_symlinks(140)
self.dist.put_symlinks(24, chinese=True)
def verify(self, target_dir, source_dir):
for sl in self.dist.symlinks:
vt = os.path.join(target_dir, sl)
st = os.path.join(source_dir, sl)
assert os.readlink(st) == os.readlink(vt)
class HardlinkVerifier(Verifier):
def __init__(self, target, dist):
super().__init__(target, dist)
def scratch(self):
self.dist.put_hardlinks(30)
self.outer_source_name = "outer_source"
self.inner_hardlink_name = "inner_hardlink"
with pushd(os.path.dirname(os.path.realpath(self.dist.top_dir))):
fd = os.open(self.outer_source_name, os.O_CREAT | os.O_RDWR)
os.close(fd)
os.link(
self.outer_source_name,
os.path.join(self.target, self.inner_hardlink_name),
)
assert (
os.stat(os.path.join(self.target, self.inner_hardlink_name)).st_nlink == 2
)
def verify(self, target_dir, source_dir):
for links in self.dist.hardlinks.values():
try:
links_iter = iter(links)
l = next(links_iter)
except StopIteration:
continue
t_hl_path = os.path.join(target_dir, l)
last_md5 = WorkloadGen.calc_file_md5(t_hl_path)
last_stat = os.stat(t_hl_path)
last_path = t_hl_path
for l in links_iter:
t_hl_path = os.path.join(target_dir, l)
t_hl_md5 = WorkloadGen.calc_file_md5(t_hl_path)
t_hl_stat = os.stat(t_hl_path)
assert last_md5 == t_hl_md5
assert (
last_stat == t_hl_stat
), f"last hardlink path {last_path}, cur hardlink path {t_hl_path}"
last_md5 = t_hl_md5
last_stat = t_hl_stat
last_path = t_hl_path
with pushd(target_dir):
assert (
os.stat(os.path.join(target_dir, self.inner_hardlink_name)).st_nlink
== 1
)
class DirectoryVerifier(Verifier):
pass
class FileModeVerifier(Verifier):
pass
class UGIDVerifier(Verifier):
pass
class SparseVerifier(Verifier):
pass

Some files were not shown because too many files have changed in this diff