Compare commits


1273 Commits

Author SHA1 Message Date
Fan Shang f7d513844d Remove mirrors configuration
Signed-off-by: Fan Shang <2444576154@qq.com>
2025-08-05 10:38:09 +08:00
Baptiste Girard-Carrabin 29dc8ec5c8 [registry] Accept empty scope during token auth challenge
The distribution spec (https://distribution.github.io/distribution/spec/auth/scope/#authorization-server-use) mentions that the access token provided during the auth challenge "may include a scope", which means a scope is not required to comply with the spec.
Additionally, this is something that is already accepted by containerd which will simply log a warning when no scope is specified: https://github.com/containerd/containerd/blob/main/core/remotes/docker/auth/fetch.go#L64
To match what containerd and the spec suggest, this commit modifies the `parse_auth` logic to accept an empty `scope` field. It also logs the same warning as containerd.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-07-31 20:28:47 +08:00
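A minimal sketch of the accepted-empty-scope behavior described above; the function and field names here are hypothetical, not the actual registry backend API:

```rust
use std::collections::HashMap;

// Hypothetical challenge type; the real parse_auth in the registry backend differs.
#[derive(Debug, Default)]
struct TokenAuthChallenge {
    realm: String,
    service: String,
    scope: String, // may legitimately be empty per the distribution auth spec
}

fn parse_auth(params: &HashMap<String, String>) -> Option<TokenAuthChallenge> {
    let realm = params.get("realm")?.clone();
    let service = params.get("service").cloned().unwrap_or_default();
    // Previously a missing scope rejected the whole challenge; now it is
    // accepted with a warning, matching containerd's behavior.
    let scope = params.get("scope").cloned().unwrap_or_else(|| {
        log::warn!("no scope specified for token auth challenge");
        String::new()
    });
    Some(TokenAuthChallenge { realm, service, scope })
}
```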
imeoer 7886e1868f storage: fix redirect in registry backend
To fix https://github.com/dragonflyoss/nydus/issues/1720

Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-31 11:49:44 +08:00
Peng Tao e1dffec213 api: increase error.rs UT coverage
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao cc62dd6890 github: add project common copilot instructions
Copilot generated with slight modification.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao d140d60bea rafs: increase UT coverage for cached_v5.rs
Copilot generated.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao f323c7f6e3 gitignore: ignore temp files generated by UTs
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao 5c8299c7f7 service: skip init fscache test if cachefiles is unavailable
Also skip the test for non-root users.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Jack Decker 14c0062cee Make filesystem sync operation fatal on failure
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
Jack Decker d3bbc3e509 Add filesystem sync in both container and host namespaces before pausing container for commit to ensure all changes are flushed to disk.
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
imeoer 80f80dda0e cargo: bump crates version
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-08 10:38:27 +08:00
Yang Kaiyong a26c7bf99c test: support miri for unit test in actions
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-07-04 10:17:32 +08:00
imeoer 72b1955387 misc: add issue / PR stale workflow
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-06-18 10:38:00 +08:00
ymy d589292ebc feat(nydusify): increase the number of retries when the push operation fails after converting the image
Signed-off-by: ymy <ymy@zetyun.com>
2025-06-17 17:11:38 +08:00
Zephyrcf 344a208e86 Make ssl fallback check case-insensitive
Signed-off-by: Zephyrcf <zinsist77@gmail.com>
2025-06-12 19:03:49 +08:00
imeoer 9645820222 docs: add MAINTAINERS doc
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-05-30 18:40:33 +08:00
Baptiste Girard-Carrabin d36295a21e [registry] Modify TokenResponse instead
Apply github comment.
Use `serde(default)` in TokenResponse to get the same behavior as Option<String> without changing the struct signature.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
Baptiste Girard-Carrabin c048fcc45f [registry] Fix auth token parsing for access_token
Extend auth token parsing to support token in different json fields.
There is no real consensus on Oauth2 token response format, which means that each registry can implement their own. In particular, Azure ACR uses `access_token` as described here https://github.com/Azure/acr/blob/main/docs/Token-BasicAuth.md#get-a-pull-access-token-for-the-user. As such, when attempting to parse the JSON response containing the authorization token, we should attempt to deserialize using either `token` or `access_token` (and potentially more fields in the future if needed).
To avoid breaking integration with existing registries, the behavior is to fall back to `access_token` only if `token` does not exist in the response.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
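A rough sketch of the fallback described above, assuming a simplified `TokenResponse` shape (the real struct may carry more fields such as `expires_in`):

```rust
use serde::Deserialize;

// Illustrative shape only; the actual TokenResponse in the registry backend differs.
#[derive(Debug, Deserialize)]
struct TokenResponse {
    #[serde(default)]
    token: String,
    #[serde(default)]
    access_token: String,
}

impl TokenResponse {
    // Prefer `token`, fall back to `access_token` (e.g. Azure ACR) when it is empty.
    fn bearer_token(&self) -> &str {
        if !self.token.is_empty() {
            &self.token
        } else {
            &self.access_token
        }
    }
}

fn main() {
    let acr_style = r#"{"access_token": "abc123"}"#;
    let resp: TokenResponse = serde_json::from_str(acr_style).unwrap();
    assert_eq!(resp.bearer_token(), "abc123");
}
```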
Baptiste Girard-Carrabin 67bf8b8283 [storage] Modify redirect policy to follow 10 redirects
From 2378d074fe (diff-c9f1f654cf0ba5d46a4ed25d8bb0ea22c942840c6693d31927a9fd912bcb9456R125-R131)
it seems that the redirect policy of the HTTP client has always been to not follow redirects. However, this means that pulling blobs from registries that respond with redirects does not work. This is the case, for instance, with GCP's former container registries that were migrated to Artifact Registry.
Additionally, containerd's behavior is to follow up to 10 redirects https://github.com/containerd/containerd/blob/main/core/remotes/docker/resolver.go#L596 so it makes sense to use the same value.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-27 18:54:04 +08:00
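A sketch of that redirect setting with reqwest, assuming a blocking client (the actual client construction in the storage backend differs):

```rust
use reqwest::blocking::Client;
use reqwest::redirect::Policy;

fn build_client() -> reqwest::Result<Client> {
    Client::builder()
        // Follow up to 10 redirects (the same limit containerd uses)
        // instead of never following redirects.
        .redirect(Policy::limited(10))
        .build()
}
```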
Peng Tao d74629233b readme: add deepwiki reference
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-04-27 18:53:16 +08:00
Yang Kaiyong 21206e75b3 nydusify(refactor): handle layer with retry
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-23 11:04:54 +08:00
Yan Song c288169c1a action: add free-disk-space job
Try to fix the broken CI: https://github.com/dragonflyoss/nydus/actions/runs/14569290750/job/40863611290
It might be due to insufficient disk space.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-04-23 10:28:06 +08:00
Yang Kaiyong 23fdda1020 nydusify(feat): support specifying a log file and concurrently processing external model manifests
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-21 15:16:57 +08:00
Yang Kaiyong 9b915529a9 nydusify(feat): add crc32 in file attributes
Read CRC32 from external models' manifest and pass it to builder.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 18:30:18 +08:00
Yang Kaiyong 96c3e5569a nydus-image: only add crc32 flag in chunk level
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
Yang Kaiyong 44069d6091 feat: support crc32 validation when validating chunks
- Add CRC32 algorithm implementation with the crc-rs crate.
- Introduce a crc_enable option to the nydus builder.
- Support generating CRC32 checksums when building images.
- Support validating CRC32 for both normal and external chunks.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
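A small sketch of chunk CRC32 generation and validation with the crc crate; the polynomial choice (CRC_32_ISO_HDLC, the common IEEE/zlib variant) is an assumption, not confirmed by the commit:

```rust
use crc::{Crc, CRC_32_ISO_HDLC};

// Assumed polynomial; the builder may use a different CRC-32 variant.
const CRC32: Crc<u32> = Crc::<u32>::new(&CRC_32_ISO_HDLC);

fn chunk_crc32(data: &[u8]) -> u32 {
    CRC32.checksum(data)
}

fn validate_chunk(data: &[u8], expected: u32) -> bool {
    chunk_crc32(data) == expected
}
```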
Yang Kaiyong 31c8e896f0 chore: fix cargo-deny check failed
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-16 19:39:21 +08:00
Yang Kaiyong 8593498dbd nydusify: remove nydusd code which is work in progress
- remove the unready nydusd (runtime) implementation.
- remove the debug code.
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 6161868e41 builder: support building external model images from modctl
builder: add support for building external model images from modctl in a local
context or remote registry.

feat(nydusify): add support for mount external large model images

chore: introduce GoReleaser for RPM package generation

nydusify(feat): add support for model image in check command

nydusify(test): add support for binary-based testing in external model's smoke tests

Signed-off-by: Yan Song <yansong.ys@antgroup.com>

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 871e1c6e4f chore(smoke): fix broken CI in smoke test
Run `rustup run stable cargo` instead of `cargo` to explicitly specify the toolchain.

This is because `nextest` fails due to symlink resolution with the new rustup v1.28.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-03-25 18:23:18 +08:00
Yan Song 8c0925b091 action: fix bootstrap path for fsck.erofs check
The output bootstrap path has been changed in the nydusify
check subcommand.

Related PR: https://github.com/dragonflyoss/nydus/pull/1652

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song baadb3990d misc: remove centos image from image conversion CI
The centos image has been deprecated on Docker Hub, so we can't
pull it in the "Convert & Check Images" CI pipeline.

See https://hub.docker.com/_/centos

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song bd2123f2ed smoke: add v0.1.0 nydusd into native layer cases
To check the compatibility between the newer builder and old nydusd.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song c41ac4760d builder: remove redundant blobs for merge subcommand
After merging all trees, we need to re-calculate the blob index of
referenced blobs, as the upper tree might have deleted some files
or directories by opaques, and some blobs are dereferenced.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song 7daa0a3cd9 nydusify: refactor check subcommand
- allow either the source or target to be an OCI or nydus image;
- improve output directory structure and log format;

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 17:45:50 +08:00
ymy 7e5147990c feat(nydusify): support short container IDs when committing a container
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
ymy 36382b54dd Optimize: Improve code style in push lower blob section
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
yumy 8b03fd7593 fix: nydusify golang ci arg
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
ymy 76651c319a nydusify: fix the issue of blob not found when modifying image name during commit
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
Yang Kaiyong 91931607f8 fix(nydusd): fix parsing of failover-policy argument
Use `inspect_err` instead of `inspect` to correctly handle and log
errors when parsing the `failover-policy` argument.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-24 11:25:26 +08:00
Yan Song dd9ba54e33 misc: remove goproxy.io for go build
The goproxy.io service is unstable for now and affects the GitHub CI,
so let's remove it.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yan Song 09b81c50b4 nydusify: fix layer push retry for copy subcommand
Add a push retry mechanism to enhance the success rate of image copy
when a single layer copy fails.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yang Kaiyong 3beb9a72d9 chore: bump deps to address rustsec warning
- Bump vm-memory to 1.14.1, vmm-sys-util to 0.12.1 and vhost to 0.11.0.
- Bump cargo-deny-action version from v1 to v2 in workflows.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-11 20:29:22 +08:00
Yang Kaiyong 3c10b59324 chore: comment the unused code to address clippy error
The backend-oss feature is never enabled, so comment out the test code.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong bf17d221d6 fix: Support building rafs without the dedup feature
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong ee5ef64cdd chore: pass rust version to build docker container in CI
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 05ea41d159 chore: specify the rust version to 1.84.0 and enable docker cache
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 4def4db396 chore: fix the broken CI on riscv64
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong d48d3dbdb3 chore: bump rust version to 1.84.0 and update deps to resolve cargo deny check failures
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Kostis Papazafeiropoulos f60e40aafa fix(blobfs): Use correct result types for `open` and `create`
Use the correct result types for `open` and `create` expected by the
`fuse_backend_rs` 0.12.0 `Filesystem` trait

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Kostis Papazafeiropoulos 83fa946897 build(rafs): Add missing `dedup` feature for `storage` crate dependency
Fix `rafs` build by adding missing `dedup` feature for `storage` crate
dependency

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Gaius 365f13edcf chore: rename repo Dragonfly2 to dragonfly
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-12-20 17:09:10 +08:00
Lin Wang e23d5bc570 fix: dragonflyoss#1644 and #1651 resolve Algorithm to_string and FromStr inconsistency
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-12-16 20:39:08 +08:00
Liu Bo acdf021ec9 rafs: fix typo
Fix an invalid info! usage.

Signed-off-by: Liu Bo <liub.liubo@gmail.com>
2024-12-13 14:40:50 +08:00
Xing Ma b175fc4baa nydusify: introduce optimize subcommand of nydusify
We can statically analyze the image entrypoint dependency, or use runtime dynamic
analysis technologies such as ebpf, fanotify, metric, etc. to obtain the container
file access pattern, and then build this part of data into an independent image layer:

* preferentially fetch blob during the image startup phase to reduce network and disk IO.
* avoid frequent image builds, allows for better local cache utilization.

Implement the optimize subcommand of nydusify to generate a new image, which references a new
blob that includes the prefetch file chunks.
```
nydusify optimize --policy separated-prefetch-blob \
	--source $existed-nydus-image \
	--target $new-nydus-image \
	--prefetch-files /path/to/prefetch-files
```

More detailed process is as follows:
1. nydusify first downloads the source image and bootstrap, then utilizes nydus-image to output a
new bootstrap along with an independent prefetchblob;
2. nydusify generates and pushes a new meta layer including the new bootstrap and the prefetch-files,
and also generates and pushes the new manifest/config/prefetchblob, completing the incremental image build.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
Xing Ma 8edc031a31 builder: Enhance optimize subcommand for prefetch
Major changes:
1. Added compatibility for rafs v5/v6 formats;
2. Set IS_SEPARATED_WITH_PREFETCH_FILES flag in BlobInfo for prefetchblob;
3. Add option output-json to store build output.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
pyq bb4744c7fb docs: fix docker-env-setup.md
Signed-off-by: pyq <eilo.pengyq@gmail.com>
2024-12-04 10:10:26 +08:00
Dai Yongxuan 375f55f32e builder: introduce optimize subcommand for prefetch
We can statically analyze the image entrypoint dependency, or use runtime dynamic
analysis technologies such as ebpf, fanotify, metric, etc. to obtain the container
file access pattern, and then build this part of data into an independent image layer:

* preferentially fetch blob during the image startup phase to reduce network and disk IO.
* avoid frequent image builds, allows for better local cache utilization.

Implement the optimize subcommand to optimize an image bootstrap
from a prefetch file list and generate a new blob.

```
nydus-image optimize --prefetch-files /path/to/prefetch-files.txt \
  --bootstrap /path/to/bootstrap \
  --blob-dir /path/to/blobs
```
This will generate a new bootstrap and new blob in `blob-dir`.

Signed-off-by: daiyongxuan <daiyongxuan20@mails.ucas.ac.cn>
2024-10-29 14:52:17 +08:00
abushwang a575439471 fix: correct some typos about nerdctl image rm
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 16:11:22 +08:00
abushwang 4ee6ddd931 fix: correct some typos in nydus-fscache.md
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 15:05:32 +08:00
Yadong Ding 57c112a998 smoke: add smoke tests for cas and chunk dedup
Add smoke test cases for cas and chunk dedup.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu b9ba409f13 docs: add documentation for cas
Add documentation for cas.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 2387fe8217 storage: enable chunk deduplication for file cache
Enable chunk deduplication for the file cache. It works in this way:
- When a chunk is not in the blob cache file yet, query the CAS database
  to check whether other blob data files have the required chunk. If there's
  a duplicated data chunk in another data file, copy the chunk data
  into the current blob cache file by using copy_file_range().
- After downloading a data chunk from the remote, save the file/offset/chunk-id
  into the CAS database, so it can be reused later.

Co-authored-by: Jiang Liu <gerry@linux.alibaba.com>
Co-authored-by: Yading Ding <ding_yadong@foxmail.com>
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
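A rough sketch of the dedup flow described above; the `CasDb` methods and helper functions are illustrative stand-ins, not the actual storage crate API:

```rust
use std::io;

// Illustrative types only; the real CasDb/CasMgr live in the storage crate
// and have different signatures.
struct CasDb;
impl CasDb {
    fn find_chunk(&self, _chunk_id: &str) -> Option<(String, u64)> { None } // (blob path, offset)
    fn record_chunk(&self, _chunk_id: &str, _blob_path: &str, _offset: u64) {}
}

fn fetch_chunk(db: &CasDb, chunk_id: &str, cache_path: &str, cache_off: u64, len: usize) -> io::Result<()> {
    if let Some((src_path, src_off)) = db.find_chunk(chunk_id) {
        // Duplicated chunk found in another blob cache file: reuse it via
        // copy_file_range() instead of downloading it again.
        copy_range(&src_path, src_off, cache_path, cache_off, len)?;
    } else {
        // Not found: download from the remote backend, then record the
        // file/offset/chunk-id so later lookups can reuse it.
        download_from_remote(chunk_id, cache_path, cache_off, len)?;
        db.record_chunk(chunk_id, cache_path, cache_off);
    }
    Ok(())
}

// Stubs standing in for the real storage-layer helpers.
fn copy_range(_src: &str, _src_off: u64, _dst: &str, _dst_off: u64, _len: usize) -> io::Result<()> { Ok(()) }
fn download_from_remote(_chunk_id: &str, _dst: &str, _dst_off: u64, _len: usize) -> io::Result<()> { Ok(()) }
```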
Yadong Ding 4b1fd55e6e storage: add garbage collection in CasMgr
- Changed `delete_blobs` method in `CasDb` to take an immutable reference (`&self`) instead of a mutable reference (`&mut self`).
- Updated `dedup_chunk` method in `CasMgr` to correctly handle the deletion of non-existent blob files from both the file descriptor cache and the database.
- Implemented the `gc` (garbage collection) method in `CasMgr` to identify and remove blobs that no longer exist on the filesystem, ensuring the database and cache remain consistent.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 45e07eab3d storage: implement CasManager to support chunk dedup at runtime
Implement CasManager to support chunk dedup at runtime.
The manager provides two major interfaces:
- add chunk data to the CAS database
- check whether a chunk exists in CAS database and copy it to blob file
  by copy_file_range() if the chunk exists.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 51a6045d74 storage: improve copy_file_range
- improve copy_file_range when target os is not linux
- add more comprehensive tests

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 7d1c2e635a storage: add helper copy_file_range
Add helper copy_file_range() which:
- avoids copying data through userspace
- may support reflink on xfs, etc.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
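A minimal Linux-only sketch of such a helper via `libc::copy_file_range`; the real helper in the storage crate adds fallbacks for non-Linux targets:

```rust
use std::os::unix::io::AsRawFd;

// In-kernel copy: data never passes through userspace, and the kernel may use
// reflink on filesystems such as XFS.
fn copy_file_range(
    src: &std::fs::File,
    src_off: i64,
    dst: &std::fs::File,
    dst_off: i64,
    len: usize,
) -> std::io::Result<usize> {
    let mut s_off = src_off;
    let mut d_off = dst_off;
    let ret = unsafe {
        libc::copy_file_range(
            src.as_raw_fd(),
            &mut s_off as *mut libc::loff_t,
            dst.as_raw_fd(),
            &mut d_off as *mut libc::loff_t,
            len,
            0,
        )
    };
    if ret < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(ret as usize)
    }
}
```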
Mike Hotan 15ec192e3d Nydusify `localfs` support
Signed-off-by: Mike Hotan <mike@union.ai>
2024-10-17 09:42:59 +08:00
Yadong Ding da2510b6f5 action: bump macos-13
The macOS 12 Actions runner image will begin deprecation on 10/7/24.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 18:35:50 +08:00
Yadong Ding 47025395fa lint: bump golangci-lint v1.61.0 and fix lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yadong Ding 678b44ba32 rust: upgrade to 1.75.0
1. reduce the binary size.
2. use more rust-clippy lints.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yifan Zhao 7c498497fb nydusify: modify compact interface
This patch modifies the compact interface to meet the change in
nydus-image.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao 1ccc603525 nydus-image: modify compact interface
This commit uses the compact parameters directly instead of a compact config
file in the CLI interface. It also fixes a bug where the chunk key for
ChunkWrapper::Ref is not generated correctly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao a4683baa1e rafs: fix bug in InodeWrapper::is_sock()
We incorrectly use is_dir() to check if a file is a socket. This patch
fixes it.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-09-27 12:35:14 +08:00
Yadong Ding 9f439ab404 bats: use nerdctl replace ctr-remote
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:23:19 +08:00
Yadong Ding 0c0ba2adec chore: remove contrib/ctr-remote
Nerdctl is more useful than `ctr-remote`, deprecate it.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:23:19 +08:00
Yadong Ding c5ef5c97a4 chore: keep smoke test component latest version
- Use the latest `nerdctl`, `nydus-snapshotter`, and `cni` in smoke test env.
- Delete `misc/takeover/snapshotter_config.toml` and use the modified `misc/performance/snapshotter_config.toml` when testing.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-25 09:11:08 +08:00
Yadong Ding 37a7b96412 nydusctl: fix build version info
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-20 17:32:55 +08:00
Yadong Ding 742954eb2c tests: change asserts of test_worker_mgr_rate_limiter
assert_eq!(mgr.prefetch_inflight.load(Ordering::Acquire), 3); and assert!(mgr.prefetch_inflight.load(Ordering::Acquire) <= 2); sometimes failed.
The reason is that the worker threads may have already started processing the requests and decreased the counter before the main thread checks it.

- change assert_eq!(mgr.prefetch_inflight.load(Ordering::Acquire), 3); to assert!(mgr.prefetch_inflight.load(Ordering::Acquire) <= 3);
- change thread::sleep(Duration::from_secs(1)); to thread::sleep(Duration::from_secs(2));

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 20:30:27 +08:00
Yadong Ding 849591afa9 feat: add retry mechanism in read blob metadata
When reading the blob size from blob metadata, we should retry reading from the remote if an error occurs.
Also set the max retry count to 3.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 15:12:04 +08:00
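A generic sketch of the retry loop described above, with a hypothetical closure standing in for the remote read; the actual blob metadata reader and its error handling live in the storage crate:

```rust
use std::{thread, time::Duration};

const MAX_RETRIES: u32 = 3;

fn read_blob_size_with_retry<F>(mut read_from_remote: F) -> std::io::Result<u64>
where
    F: FnMut() -> std::io::Result<u64>,
{
    let mut attempt = 0;
    loop {
        match read_from_remote() {
            Ok(size) => return Ok(size),
            Err(e) if attempt < MAX_RETRIES => {
                // Back off briefly and retry, up to MAX_RETRIES times.
                attempt += 1;
                eprintln!("read blob size failed (attempt {attempt}): {e}, retrying");
                thread::sleep(Duration::from_millis(100 * attempt as u64));
            }
            Err(e) => return Err(e),
        }
    }
}
```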
Yadong Ding e8a4305773 chore: bump go lint action v6 and version 1.61.0
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-19 15:04:16 +08:00
Yadong Ding 7fc9edeec5 chore: change nydus snapshotter work dir
- use /var/lib/containerd/io.containerd.snapshotter.v1.nydus
- bump nydusd snapshotter v1.14.0

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-18 11:13:22 +08:00
Yadong Ding f4fb04a50f lint: remove unused fieldsPath
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-09-18 09:18:12 +08:00
dependabot[bot] 481a63b885 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.5+incompatible to 25.0.6+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.5...v25.0.6)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-16 20:23:59 +08:00
BruceAko 9b4c272d78 fix: add tests for nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko 30d53c3f25 fix: add a doc about nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko 309feab765 fix: add getLocalPath() and close decompressor
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
BruceAko a1ceb176f4 feat: support local tarball for nydusify copy
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-09-15 21:10:37 +08:00
Jiancong Zhu 6106fbc539 refactor: fixed the unnecessary mutex lock operation
Signed-off-by: Jiancong Zhu <Chasing1020@gmail.com>
2024-09-12 18:26:26 +08:00
Yifan Zhao d89410f3fc nydus-image: refactor unpack/compact cli interface
Since unpack and compact subcommands does not need the entire nydusd
configuration file, let's refactor their cli interface and directly
take backend configuration file.

Specifically, we introduce `--backend-type`, `--backend-config` and
`--backend-config-file` options to specify the backend type and remove
`--config` option.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>

Fixes: #1602
2024-09-10 14:33:51 +08:00
Yifan Zhao 36fe98b3ac smoke: fix invalid cleanup issue in main_test.go
The cleanup of new registry is invalid as TestMain() calls os.Exit()
and will not run defer functions. This patch fixes the issue by
doing the cleanup explicitly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-09-10 14:33:51 +08:00
fappy1234567 114ec880a2 smoke: add mount api test case
Signed-off-by: fappy1234567 <2019gexinlei@bupt.edu.cn>
2024-08-30 15:36:59 +08:00
Yan Song 3eb5c7b5ef nydusify: small improvements for mount & check subcommands
- Add `--prefetch` option for enabling full image data prefetch.
- Support `HTTP_PROXY` / `HTTPS_PROXY` env for enabling proxy for nydusd.
- Change nydusd log level to `warn` for mount & check subcommands.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-08-28 11:07:26 +08:00
Yadong Ding 52ed07b4cf deny: ignore RUSTSEC-2024-0357
openssl 0.10.55 can't build on riscv64 and ppc64le.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-08-08 14:42:44 +08:00
Yan Song a6bd8ccb8d smoke: add nydusd hot upgrade test case
The test case in hot_upgrade_test.go is different from takeover_test.go;
it does not depend on the snapshotter component.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-07-22 09:22:47 +08:00
Yan Song 642571236d smoke: refactor nydusd methods for testing
Rename and add some methods on the nydusd struct to make it easier to control
the nydusd process.

And support SKIP_CASES env to allow skipping some cases.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-07-22 09:22:47 +08:00
Yadong Ding 32b6ead5ec action: fix upload-coverage-to-codecov with secret
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-07-15 09:32:19 +08:00
Yadong Ding c92fe6512f action: upgrade macos to 12
macos-11 has been deprecated since 2024.06.28.
https://docs.github.com/actions/using-jobs/choosing-the-runner-for-a-job

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-07-15 09:32:19 +08:00
BruceAko 3684474254 fix: rename mirrors' check_pause_elapsed to health_check_pause_elapsed
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-11 09:13:11 +08:00
BruceAko cd24506d43 feat: skip health check if connection is not active
1. Add a last_active field to Connection. When Connection.call() is called, last_active is updated to the current timestamp.
2. Add a check_pause_elapsed field to ProxyConfig and MirrorConfig. A Connection is considered inactive if the time elapsed since last_active exceeds check_pause_elapsed.
3. In the proxy and mirror health-checking thread loops, if the connection is not active (exceeds check_pause_elapsed), that round of health check is skipped.
4. Update the document.

Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-11 09:13:11 +08:00
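An illustrative sketch of the activity tracking; the real Connection reads the threshold from ProxyConfig/MirrorConfig and its internals differ:

```rust
use std::time::{Duration, Instant};

// Field names follow the commit description; everything else is a sketch.
struct Connection {
    last_active: Instant,
    health_check_pause_elapsed: Duration,
}

impl Connection {
    fn call(&mut self) {
        // Every request refreshes the activity timestamp.
        self.last_active = Instant::now();
    }

    fn should_skip_health_check(&self) -> bool {
        // If the connection has been idle longer than the configured pause
        // threshold, this round of health checking is skipped.
        self.last_active.elapsed() > self.health_check_pause_elapsed
    }
}
```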
YuQiang 19b09ed12f fix: add namespace flag for nydusify commit.
Signed-off-by: YuQiang <yu_qiang@mail.nwpu.edu.cn>
2024-07-09 18:15:25 +08:00
BruceAko da5d423b8c fix: correct some typos in Nydusify
Signed-off-by: BruceAko <chongzhi@hust.edu.cn>
2024-07-09 18:14:16 +08:00
Lin Wang 455c856aa8 nydus-image: add documentation for chunk-level deduplication
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
Lin Wang 5dec7536fa nydusify: add chunkdict generate command and corresponding tests
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
Lin Wang 087c0b1baf nydus-image: Add support for chunkdict generation
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-07-04 18:08:59 +08:00
泰友 332f3dd456 fix: compatibility to image without ext table for blob cache
There are scenarios where the cache file is smaller than the expected size, such as:

    1. Nydusd 1.6 generates the cache file by prefetch, and it is smaller than the size in the boot.
    2. Nydusd 2.2 generates the cache file by prefetch when the image does not provide ext blob tables.
    3. Nydusd did not have enough time to fill the cache for the blob.

    An equality check on the size is too strict for both 1.6
    compatibility and 2.2 concurrency. This PR ensures the blob size is smaller
    than or equal to the expected size. It also truncates the blob cache when it
    is smaller than the expected size.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
泰友 7cf2d4a2d7 fix: bad read by wrong data region
User IO may involve discontinuous segments in different chunks. A bad
    read is produced by merging them into a continuous one, which is what
    Region does. This PR separates discontinuous segments into different
    regions, avoiding forced merging.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
泰友 64dddd2d2b fix: residual fuse mountpoint after graceful shutdown
1. Case 1: the FUSE server exits in a thread other than main. There is a
       possibility that the process finishes before the server shuts down.
    2. Case 2: the FUSE server exits in the state machine thread. There is a
       possibility that the state machine does not respond to the signal-catching
       thread. Then a deadlock happens, and the process exits before the server
       shuts down.

    This PR separates the shutdown actions from the signal-catching
    handler. The handler only notifies the controller, and the controller exits
    after shutting down the FUSE server. No race. No deadlock.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2024-06-18 10:43:42 +08:00
Yan Song de7cfc4088 nydusify: upgrade acceleration-service v0.2.14
To bring the fixup: https://github.com/goharbor/acceleration-service/pull/290

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-06-06 10:18:45 +08:00
Yadong Ding 79a7015496 chore: upgrade components version in test env
1. Upgrade cni to v1.5.0 and try to fix error in TestCommit.
2. upgrade nerdctl to v1.7.6.
3. upgrade nydus-snapshotter to v0.13.13 and fix path error.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-06 09:56:26 +08:00
BruceAko 3b9b0d4588 fix: correct some typos and grammatical problem
Signed-off-by: chongzhi <chongzhi@hust.edu.cn>
2024-06-06 09:55:11 +08:00
Yadong Ding 7ea510b237 docs: fix incorrect file path
https://github.com/containerd/nydus-snapshotter/blob/main/misc/snapshotter/config.toml#L27
In the snapshotter config, the nydusd config file path is /etc/nydus/nydusd-config.fusedev.json.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-06 09:50:40 +08:00
dependabot[bot] 34ab06b6b3 build(deps): bump golang.org/x/net in /contrib/ctr-remote
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 16:32:26 +08:00
dependabot[bot] 9483286863 build(deps): bump golang.org/x/net in /contrib/nydusify
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.19.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-05 15:56:24 +08:00
Yadong Ding 13a9aa625b fix: downgraded to codecov/codecov-action@v4.0.0
codecov/codecov-action@v4 is unstable.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-04 15:59:46 +08:00
Yadong Ding 305a418b31 fix: upload-coverage failed in master
When the action doesn't run on a pull request, Codecov GitHub Action v4 needs a token.
Reference:
1. https://github.com/codecov/codecov-action?tab=readme-ov-file#breaking-changes
2. https://docs.codecov.com/docs/codecov-uploader#supporting-token-less-uploads-for-forks-of-open-source-repos-using-codecov

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-06-04 15:18:48 +08:00
Qinqi Qu 4a16402120 action: bump codecov-action to v4
To solve the problem of CI failure.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-17 16:39:48 +08:00
Qinqi Qu 1d1691692c deps: update indexmap from v1 to v2
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu d1dfe7bd65 backend-proxy: refactor to support latest versions of crates
Also fix some security alerts of Dependabot:
1. https://github.com/advisories/GHSA-q6cp-qfwq-4gcv
2. https://github.com/advisories/GHSA-8r5v-vm4m-4g25
3. https://github.com/advisories/GHSA-g98v-hv3f-hcfr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu 3b2a0c0bcc deps: remove dependency on atty
The atty crate is not maintained, so flexi_logger and clap are updated
to remove the dependency on atty.

Fix: https://github.com/advisories/GHSA-g98v-hv3f-hcfr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-16 15:12:25 +08:00
Qinqi Qu 9826b2cc3f bats test: add a backup image to avoid network errors
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-04-09 17:32:28 +08:00
dependabot[bot] 260a044c6e build(deps): bump h2 from 0.3.24 to 0.3.26
Bumps [h2](https://github.com/hyperium/h2) from 0.3.24 to 0.3.26.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.26/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.24...v0.3.26)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 15:27:13 +08:00
dependabot[bot] e926d2ff9c build(deps): bump google.golang.org/protobuf in /contrib/nydusify
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-31 11:36:18 +08:00
dependabot[bot] fc52ebc7a1 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.3+incompatible to 25.0.5+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.3...v25.0.5)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-29 17:05:58 +08:00
YuQiang af914dd1a5 fix: modify benchmark prepare bash path
1. correct the performance test prepare bash file path

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-26 10:02:52 +08:00
Adolfo Ochagavía 2308efa6f7 Add compression method support to zran docs
Signed-off-by: Adolfo Ochagavía <github@adolfo.ochagavia.nl>
2024-03-25 17:38:44 +08:00
Wei Zhang 9ae8e3a7b5 overlay: add overlay implementation
With the help of the newly introduced overlay filesystem in the `fuse-backend-rs`
library, we can now create a writable rootfs in Nydus. The implementation of the
writable rootfs is based on a passthrough FS (as the upper layer) over a
readonly RAFS (as the lower layer).

To do so, the configuration is extended with some overlay options.

Signed-off-by: Wei Zhang <weizhang555.zw@gmail.com>
2024-03-15 14:15:54 +08:00
YuQiang 3dfa9e9776 docs: add doc for nydus-image check command
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 11:10:46 +08:00
YuQiang f10782c79d docs: add doc for nydusify commit command
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:33:02 +08:00
YuQiang ae842f9b8b action: merge and move prepare.sh
remove misc/performance/prepare.sh and misc/performance/prepare.sh and merge to misc/prepare.sh

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang 26b1d7db5a feat: add smoke test for nydusify commit
Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang c14790cb21 feat: add nydusify commit command
add nydusify commit command to commit a nydus container into nydus image

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
YuQiang 19daa7df6f feat: port write overlay upperdir capability
Port the capability of getting and writing the diff between the overlayfs upper and lower layers.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-03-14 09:32:38 +08:00
dependabot[bot] a0ec880182 build(deps): bump github.com/go-jose/go-jose/v3 in /contrib/nydusify
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/v3.0.3/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.1...v3.0.3)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-08 19:01:13 +08:00
dependabot[bot] c57e7c038c build(deps): bump mio in /contrib/nydus-backend-proxy
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.5 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.5...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-08 19:00:57 +08:00
dependabot[bot] eba6afe5b8 build(deps): bump mio from 0.8.10 to 0.8.11
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.10 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.10...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-07 14:46:07 +08:00
YuQiang aaab560aa9 feat: add fs_version and compressor output of nydus image check
1. Add rafs_version value, output like 5 or 6.
2. Add compressor algorithm value, like zstd.
Add rafs_version and compressor JSON output to nydus image check, so that more info can be obtained when necessary.

Signed-off-by: YuQiang <y_q_email@163.com>
2024-02-29 14:15:39 +08:00
Yadong Ding 7b3cc503a2 action: add contrib-lint in smoke test
1. Use the official GitHub action for golangci-lint from its authors.
2. Fix golang lint errors with v1.56.
3. Separate test and golang lint. Sometimes we need tests without golang lint, and sometimes we just want to run golang lint.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-02-21 11:44:33 +08:00
dependabot[bot] 5fb809605d build(deps): bump github.com/opencontainers/runc in /contrib/ctr-remote
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-20 13:11:38 +08:00
Yan Song abaf9caa16 docs: update outdated dingtalk QR code
And remove the outdated technical meeting schedule.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2024-02-20 10:17:19 +08:00
dependabot[bot] d7ea50e621 build(deps): bump github.com/opencontainers/runc in /contrib/nydusify
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-18 17:11:09 +08:00
Yadong Ding d12634f998 action: bump nodejs20 github action
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-02-06 09:36:54 +08:00
loheagn 9a1c47bd00 docs: add doc for nydusd failover and hot upgrade
Signed-off-by: loheagn <loheagn@icloud.com>
2024-01-23 20:01:48 +08:00
Yadong Ding 3f47f1ec6d fix: upload-artifact v4 break changes
upload-artifact v4 can't upload artifacts with the same name.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-19 11:01:50 +08:00
Yadong Ding 5f26f8ee1c fix: upgrade h2 to 0.3.24 to fix RUSTSEC-2024-0003
ID: RUSTSEC-2024-0003
Advisory: https://rustsec.org/advisories/RUSTSEC-2024-0003
An attacker with an HTTP/2 connection to an affected endpoint can send a steady stream of invalid frames to force the
generation of reset frames on the victim endpoint.
By closing their recv window, the attacker could then force these resets to be queued in an unbounded fashion,
resulting in Out Of Memory (OOM) and high CPU usage.

This fix is corrected in [hyperium/h2#737](https://github.com/hyperium/h2/pull/737), which limits the total number of
internal error resets emitted by default before the connection is closed.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-18 16:58:44 +08:00
Yadong Ding eae9ed7e45 fix: upload-artifact@v4 breaks in release
Error:
Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-18 16:58:44 +08:00
Yadong Ding a3922b8e0d action: bump upload-artifact/download-artifact v4
Since https://github.com/actions/download-artifact/issues/249 is fixed,
we can use the v4 version.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-17 10:04:49 +08:00
Wenhao Ren 9dae4eccee storage: fix the tiny prefetch request for batch chunks
By passing the chunk continuity check and correctly sorting batch chunks,
the prefetch request will no longer be interrupted by batch chunks.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren d7190d9fee action: add convert test for batch chunk
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 8bb53a873a storage: add validation and unit test for batch chunks
1. Add the validation for batch chunks.
2. Add unit test for `BatchInflateContext`.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 7f799ec8bb storage: introduce `BlobCCI` for reading batch chunk info
`BlobCompressionContextInfo` is needed to read batch chunk info.
`BlobCCI` is introduced to simplify the code
and decrease the number of times this context is fetched, via lazy loading.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren c557f99d08 storage: fix the read amplification for batch chunks.
Read amplification for batch chunks is not correctly implemented and may crash.
The read amplification logic is rewritten to fix this bug.
A unit test for read amplification is also added to cover this code.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren 676acd0a6f storage: fix the Error type to log the error correctly
Currently, many errors are output as `os error 22`, losing customized log info.
So we change the Error type to correctly output and log the error info
as expected.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren fa72c98ffc rafs: add `is_batch()` for `BlobChunkInfo`
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
Wenhao Ren b4fe28aad6 rafs: move `compressed_offset` from `BatchInflateContext` to chunk info for batch chunks.
1. `compressed_offset` is used for build-time and runtime sorting for chunk info.
So we move `compressed_offset` from `BatchInflateContext` to chunk info for batch chunks.

2. The `compressed_size` for blobs in batch mode is not correctly set.
We fix it by setting it to the value of `dumped_size`.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2024-01-15 17:56:26 +08:00
dependabot[bot] 596492b932 build(deps): bump github.com/go-jose/go-jose/v3 in /contrib/nydusify
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.0 to 3.0.1.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.0...v3.0.1)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-04 18:52:59 +08:00
Yadong Ding 2743f163b9 deps: update the latest version and sync
Bump containerd v1.7.11 and golang.org/x/crypto v0.17.0.
Resolve GHSA-45x7-px36-x8w8 and GHSA-7ww5-4wqc-m92c.
Update dependencies to the latest versions and sync them across multiple modules.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-01-04 14:11:36 +08:00
loheagn 04b4552e03 tests: add smoke test for hot upgrade
Signed-off-by: loheagn <loheagn@icloud.com>
2024-01-04 14:10:31 +08:00
Qinqi Qu 5ecda8c057 bats test: upgrade golang version to 1.21.5
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-01-03 11:54:30 +08:00
Qinqi Qu 8e1799e5df bats test: change rust docker image to Debian 11 bullseye version
The rust:1.72.1 image is based on Debian 12 bookworm and requires
an excessively high version of glibc, so the compiled nydus program cannot
find a suitable glibc version on some old operating systems.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2024-01-03 11:54:30 +08:00
Yadong Ding f08587928b rust: bump 1.72.1 and fix errors
https://rust-lang.github.io/rust-clippy/master/index.html#non_minimal_cfg
https://rust-lang.github.io/rust-clippy/master/index.html#unwrap_or_default
https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrows_for_generic_args
https://rust-lang.github.io/rust-clippy/master/index.html#reserve_after_initializatio
https://rust-lang.github.io/rust-clippy/master/index.html#/arc_with_non_send_sync
https://rust-lang.github.io/rust-clippy/master/index.html#useless_vec

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-29 08:58:02 +08:00
Xin Yin cf76edbc52 dep: upgrade tokio to 1.35.1
Fix panic after all prefetch workers exit in fscache mode.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-27 20:36:23 +08:00
loheagn 7f27b7ae78 tests: add smoke test for nydusd failover
Signed-off-by: loheagn <loheagn@icloud.com>
2023-12-25 16:35:14 +08:00
Yadong Ding 17c373fc29 nydusify: fix error in go vet
`sudo` in the action will change the Go env, so remove sudo.
As the runner user, we can create files in unpacktargz-test instead of temp/unpacktargz-test,
so don't use os.CreateTemp in archive_test.go.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding d5242901f9 action: delete useless env
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding 39daa97bac nydusify: fix unit test fail in utils
utils_test.go:248:
                Error Trace:    /root/nydus/contrib/nydusify/pkg/utils/utils_test.go:248
                Error:          Should be true
                Test:           TestRetryWithHTTP

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 14:02:59 +08:00
Yadong Ding 2cd8ba25bd nydusify: add unit test for nydusify
We had removed the test files (e2e) in nydusify, so we need to add unit tests
to improve test coverage.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 3164f19ab7 makefile: remove build in test
Use `make test` to run unit tests; it doesn't need a build.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 6675da3186 action: use upload-artifact/download-artifact v3
The master branch is unstable, so change to v3.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 7772082411 action: use sudo in contrib-unit-test-coverage
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:14:54 +08:00
Yadong Ding 65046b0533 refactor: use ErrSchemeMismatch and ECONNREFUSED
ref: https://github.com/golang/go/issues/44855

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:03:20 +08:00
Yadong Ding b5e88a4f4e chore: upgrade go version to 1.21
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-25 09:03:20 +08:00
Yadong Ding 18ba2eda63 action: fix failed to compile `cross v0.2.4`
error: failed to compile `cross v0.2.4`, intermediate artifacts can be found at `/tmp/cargo-installG1Scm4`

Caused by:
  package `home v0.5.9` cannot be built because it requires rustc 1.70.0 or newer, while the currently active rustc version is 1.68.2
  Try re-running cargo install with `--locked`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding ab06841c39 revert build(deps): bump openssl from 0.10.55 to 0.10.60
Revert https://github.com/dragonflyoss/nydus/pull/1513.
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding e9d63f5d3b chore: upgrade dbs-snapshot to 1.5.1
v1.5.1 brings support of ppc64le and riscv64.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding 1a1e8fdb98 action: test build with more architectures
Test build with more architectures, but only use `amd64` in next jobs.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 14:25:04 +08:00
Yadong Ding a4ec9b8061 tests: add go module unit coverage to Codecov
resolve dragonflyoss#1518.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 09:48:48 +08:00
Yadong Ding 54a3395434 action: add contrib-test and build
Use the contrib-test job to test the golang modules in contrib.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-19 09:48:48 +08:00
Yadong Ding 0458817278 chore: modify repo to dragonflyoss/nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-18 17:41:00 +08:00
Yadong Ding 763786f316 chore: change go module name to nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-18 17:41:00 +08:00
dependabot[bot] d6da88a8f1 build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 23.0.3+incompatible to 24.0.7+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v23.0.3...v24.0.7)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-18 13:38:23 +08:00
Yadong Ding 06755fe74b tests: remove useless test files
Since https://github.com/dragonflyoss/nydus/pull/983 we have the new smoke test, so we can remove the
old smoke test files, including nydusify and nydus.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:14:05 +08:00
Yadong Ding 2bca6f216a smoke: use golangci-lint to improve code quality
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding 0e81f2605d nydusify: fix errors found by golangci-lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding f98b6e8332 action: upgrade golangci-lint to v1.54.2
We have some golang lint errors in nydusify.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:12:44 +08:00
Yadong Ding 1d289e25f9 rust: update to edition2021
Since we are using cargo 1.68.2 we don't need to require edition 2018 any more.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-15 14:10:50 +08:00
Yadong Ding 194641a624 chore: remove go test cover
In the golang smoke test, go test doesn't need coverage analysis or a coverage profile.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-13 15:54:42 +08:00
Yiqun Leng 45331d5e18 bats test: move the logic of generating dockerfile into common lib
Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-12-13 15:25:15 +08:00
dependabot[bot] 55a999b9e6 build(deps): bump openssl from 0.10.55 to 0.10.60
Bumps [openssl](https://github.com/sfackler/rust-openssl) from 0.10.55 to 0.10.60.
- [Release notes](https://github.com/sfackler/rust-openssl/releases)
- [Commits](https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.55...openssl-v0.10.60)

---
updated-dependencies:
- dependency-name: openssl
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-13 13:09:44 +08:00
Yan Song 87e3db7186 nydusify: upgrade containerd package
To import some fixups from https://github.com/containerd/containerd/pull/9405.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-13 09:57:20 +08:00
Qinqi Qu a84400d165 misc: update rust-toolchain file to TOML format
1. Move rust-toolchain to rust-toolchain.toml
2. Update the parsing process of rust-toolchain in the test script.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-12-12 20:27:12 +08:00
Yadong Ding d793aee881 action: delete clean-cache
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-11 09:47:54 +08:00
Yadong Ding a3e60c0801 action: benchmark add conversion_elapsed
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-08 09:33:03 +08:00
Yadong Ding 794f7f7293 smoke: add image conversion time in benchmark
ConversionElapsed can express the performance of accelerated image conversion.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-08 09:33:03 +08:00
Xin Yin e12416ef09 upgrade: change to use dbs_snapshot crate
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Xin Yin 7b25d8a059 service: add unit test for upgrade manager
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Xin Yin e0ad430486 feat: support takeover for fscache
Refine the UpgradeManager so that it can also store status for the
fscache daemon, and make the takeover feature apply to both fuse and
fscache modes.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2023-12-07 20:10:13 +08:00
Nan Li 16f5ac3d14 feat: implement `takeover` for nydusd fusedev daemon
This patch implements the `save` and `restore` functions in the `fusedev_upgrade` module in the service crate.
To do this,
- This patch adds a new crate named `nydus-upgrade` to the workspace. The `nydus-upgrade` crate has some util functions that help to serialize and deserialize the Rust structs using the versionize and snapshot crates. The crate also has a trait named `StorageBackend` which can be used to store and restore fuse session fds and state data for the upgrade action, and there's also an implementation named `UdsStorageBackend` which uses a unix domain socket to do this.
- As we have to use the same fuse session connection, backend filesystem mount commands, and Vfs to re-mount the rafs for the new daemon (created for "hot upgrade" or failover), this patch adds a new struct named `FusedevState` to hold this information. The `FusedevState` is serialized and stored into the `UdsStorageBackend` (which happens in the `save` function in the `fusedev_upgrade` module) before the new daemon is created, and the `FusedevState` is deserialized and restored from the `UdsStorageBackend` (which happens in the `restore` function in the `fusedev_upgrade` module) when the new daemon is triggered by `takeover`.

Signed-off-by: Nan Li <loheagn@icloud.com>
Signed-off-by: linan.loheagn3 <linan.loheagn3@bytedance.com>
2023-12-07 20:10:13 +08:00
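A rough shape of the pieces named in the commit message above; the trait and struct names follow the commit, but the signatures are illustrative only:

```rust
use std::io;
use std::os::unix::io::RawFd;

// Trait name from the commit message; method signatures are assumptions.
trait StorageBackend {
    /// Persist fuse session fds and serialized daemon state before the new
    /// daemon takes over.
    fn save(&mut self, fds: &[RawFd], state: &[u8]) -> io::Result<usize>;
    /// Restore the fds and state in the new daemon triggered by `takeover`.
    fn restore(&mut self) -> io::Result<(Vec<RawFd>, Vec<u8>)>;
}

// Per the commit, UdsStorageBackend implements this trait over a unix domain
// socket, and FusedevState holds the fuse connection, mount commands and Vfs
// needed to re-mount the rafs in the new daemon.
struct FusedevState;
```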
Yadong Ding e4cf98b125 action: add oci in benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-07 10:19:53 +08:00
Yadong Ding b87814b557 smoke: support different snapshotters in bench
We can use overlayfs to test OCI v1 images.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-07 10:19:53 +08:00
Jiang Liu 50b8988751 storage: use connection pool for sqlite
SQLite connections are not thread safe, so use a connection pool to
support multi-threading.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
Jiang Liu 1c293cfefd storage: move cas db from util into storage
Move cas db from util into storage.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
Jiang Liu bfc171a933 util: refine database structure for CAS
Refine the sqlite database structure for storing CAS information.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-12-06 15:54:09 +08:00
xwb1136021767 6ca3ca7dc0 utils: introduce sqlite to store CAS related information
Introduce sqlite to store CAS related information.

Signed-off-by: Huang Jianan <jnhuang95@gmail.com>
Signed-off-by: xwb1136021767 <1136021767@qq.com>
2023-12-06 15:54:09 +08:00
Yadong Ding 93ef71db79 action: use more images in benchmark
Include:
- python:3.10.7
- golang:1.19.3
- ruby:3.1.3
- amazoncorretto:8-al2022-jdk

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:14:17 +08:00
Yadong Ding ba8d3102ab smoke: support more images in container
Support: python, golang, ruby, amazoncorretto.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:14:17 +08:00
Yadong Ding eeddfff9a0 nydusify: fix deprecations
1. replace `github.com/docker/distribution` with `github.com/distribution/reference`
2. replace `EndpointResolver` with `BaseEndpoint`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:12:45 +08:00
Yadong Ding 11592893ea nydusify: update dependencies version
github.com/aliyun/aliyun-oss-go-sdk: `v2.2.6+incompatible` -> `v3.0.1+incompatible`
github.com/aws/aws-sdk-go-v2 `v1.17.6` -> `v1.23.5`
github.com/aws/aws-sdk-go-v2/config `v1.18.16` -> `v1.25.11`
github.com/aws/aws-sdk-go-v2/credentials `v1.13.16` -> `v1.16.9`
github.com/aws/aws-sdk-go-v2/feature/s3/manager `v1.11.56` -> `v1.15.4`
github.com/aws/aws-sdk-go-v2/service/s3 `v1.30.6` -> `v1.47.2`
github.com/containerd/nydus-snapshotter `v0.13.2` -> `v0.13.3`
github.com/docker/cli `v24.0.6+incompatible` -> `v24.0.7+incompatible`
github.com/docker/distribution `v2.8.2+incompatible` -> `v2.8.3+incompatible`
github.com/google/uuid `v1.3.1` -> `v1.4.0`
github.com/hashicorp/go-hclog `v1.3.1` -> `v1.5.0`
github.com/hashicorp/go-plugin `v1.4.5` -> `v1.6.0`
github.com/opencontainers/image-spec `v1.1.0-rc4` -> `v1.1.0-rc5`
github.com/prometheus/client_golang `v1.16.0` -> `v1.17.0`
github.com/sirupsen/logrus `v1.9.0` -> `v1.9.3`
github.com/stretchr/testify `v1.8.3` -> `v1.8.4`
golang.org/x/sync `v0.3.0` -> `v0.5.0`
golang.org/x/sys `v0.13.0` -> `v0.15.0`
lukechampine.com/blake3 `v1.1.5` -> `v1.2.1`

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-04 10:12:45 +08:00
Yadong Ding 3f999a70c5 action: add `node:19.8` in benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 14:54:31 +08:00
Yadong Ding e0041ec9cb smoke: benchmark supports node
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 14:54:31 +08:00
Yadong Ding d266599128 docs: add benchmark badge with schedule event
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 11:38:31 +08:00
Yan Song e0fc6a1106 contrib: fix golangci lint for ctr-remote
Fix the lint check error by updating containerd package:

```
golangci-lint run
Error: commands/rpull.go:89:2: SA1019: log.G is deprecated: use [log.G]. (staticcheck)
	log.G(pCtx).WithField("image", ref).Debug("fetching")
	^
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-01 10:59:28 +08:00
Yan Song 838593fed3 nydusify: support --push-chunk-size option
Reference: https://github.com/containerd/containerd/pull/9405

Will replace the containerd dependency with the upstream version once the PR is merged.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-12-01 10:59:28 +08:00
Yadong Ding f1de095905 action: use same golang cache
setup-go@v4 uses the cache name `setup-go-Linux-ubuntu22-go-1.20.11-${hash}`.
`actions/cache@v3` restores the same content, so just restore the same cache.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 10:42:24 +08:00
Yadong Ding a1ad70a46c action: update setup-go to v4 and enabled caching
After updating setup-go to v4, it can cache by itself and select the go version
from `go.work`.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 08:39:32 +08:00
Yadong Ding 40489c7365 action: update rust cache version and share caches
1. Update Swatinem/rust-cache to v2.7.0.
2. Share caches between jobs in release, smoke, convert and benchmark.
3. Save the rust cache only on the master branch in the smoke test.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-12-01 08:39:32 +08:00
wuheng 3f5c2c8bb9 docs: nydus-sandbox.yaml add uid
Signed-off-by: wuheng <wuheng@kylinos.cn>
2023-11-30 15:05:07 +08:00
Yadong Ding f5001bbdc3 misc: delete python version benchmark
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 0e10dbcaae action: use smoke BenchmarkTest in Benchmark
We should deprecate the python version benchmark.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 822c935c77 smoke: add benchmark test
1. Refactor performance_test, move clearContainer to tools.
2. Add benchmark test.
The benchmark test runs the image in a container and saves metrics to a JSON file.
For example:
```json
{
	"e2e_time": 2747131,
	"image_size": 2107412,
	"read_amount": 121345,
	"read_count": 121
}
```

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-30 11:34:27 +08:00
Yadong Ding 8ad7ae541d fix: smoke test-performance env var setup failure
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-29 17:12:43 +08:00
zyfjeff 96f402bfee Let targz type conversions support multi-stream gzip
code reference https://github.com/madler/zlib/blob/master/examples/zran.c

At present, zran and normal targz do not consider multi-stream gzip
when decompressing, so there will be problems when encountering
this kind of image; this PR adds support for multi-stream gzip.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba-inc.com>
2023-11-29 12:57:37 +08:00
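For flavor only: flate2 (bumped in the next commit) exposes a `MultiGzDecoder` that keeps decoding across gzip member boundaries, whereas a plain `GzDecoder` stops after the first stream. This sketch only illustrates the multi-stream behavior; it is not the zran index path used for chunk-level random access.

```rust
use flate2::read::MultiGzDecoder;
use std::io::Read;

/// Decode a blob that may consist of several concatenated gzip members.
fn decode_multi_stream(gz_bytes: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut decoder = MultiGzDecoder::new(gz_bytes);
    let mut out = Vec::new();
    decoder.read_to_end(&mut out)?;
    Ok(out)
}
```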
zyfjeff 8247fe7b01 Update libz-sys & flate2 crates to the latest version
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba-inc.com>
2023-11-29 12:57:37 +08:00
Qinqi Qu 091697918c action: disable codecov patch check
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-11-27 09:00:33 +08:00
Yadong Ding f21fe67a81 action: use performance test in smoke test
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Yadong Ding c51ecd0e42 smoke: add performance test
Add a performance test to make sure there is no performance regression.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Yadong Ding 4c33d4e605 action: remove benchmark test in smoke
We will rewrite it as performance_test in golang.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-27 08:59:33 +08:00
Wenhao Ren 71dfc6ff7e builder: align file dump order with prefetch list, fix #1488
1. The dump order for prefetch files does not match the order specified in the prefetch list,
so let's fix it.
2. The construction of `Prefetch` is slow due to inefficient matching of prefetch patterns.
By adopting a more efficient data structure, this process has been accelerated.
3. Unit tests for prefetch are added.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-27 08:58:52 +08:00
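One example of such a faster structure is a hash set of pattern paths checked against a file's ancestors; this is an illustrative assumption, and the data structure actually adopted by the patch may differ.

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

/// Returns true if `file` or one of its ancestor directories appears in the
/// prefetch pattern set, giving O(depth) lookups instead of a linear scan.
fn matches_prefetch(patterns: &HashSet<PathBuf>, file: &Path) -> bool {
    file.ancestors().any(|p| patterns.contains(p))
}
```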
Yadong Ding e2b131e4c6 go mod: sync deps by go mod tidy
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-22 14:50:20 +08:00
Yadong Ding 6f9551a328 git: add go.work.sum to .gitignore
`go.work.sum` changes too often and too much. We only need it to work well locally.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-22 14:50:20 +08:00
Yan Song 767adcf03a nydusify: fix unnecessary manifest index when copy one platform image
When using the command to copy an image with one specified platform:

```
nydusify copy --platform linux/amd64 --source nginx --target localhost:5000/nginx
```

We found the target image is in a manifest index format like:

```
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee",
      "size": 1778,
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    }
  ]
}
```

This is a bit strange; in fact just the manifest is enough, and the patch improves this.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-10 16:50:41 +08:00
Wenhao Ren c9fbce8ccf nydusd: add the config support of `amplify_io`
Add support for `amplify_io` in the nydusd config file
to configure read amplification.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-09 14:15:18 +08:00
Wenhao Ren 468eeaa2cf rafs: rename variable names about prefetch configuration
Variable names related to prefetch are currently confusing.
So we merge variable names that have the same meaning,
while NOT affecting the field names read from the configuration file.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-11-09 14:15:18 +08:00
Peng Tao 46dca1785f rafs/builder: fix build on macos
These are u16 on macos.

Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
Peng Tao e06c1ca85f ut: stop testing some unit tests on macos
We only test blob cache and fscache in unit tests, and we test
the linux device id. None of them work on macos at all.

Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
Peng Tao 3061050e20 smoke: add macos build test
Signed-off-by: Peng Tao <bergwolf@gmail.com>
2023-11-09 11:20:25 +08:00
Yan Song 1c24213802 docs: update multiple snapshotter switch troubleshooting
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 10:28:10 +08:00
weizhen.zt b572a0f24e utils: bugfix for unit test case.
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt c608ef6231 storage: move toml to dev-dependencies
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 19185ed0d2 builder: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt cc5a8c5035 api: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 60db5334ff rafs: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt f75e0da3ad storage: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
weizhen.zt 9021871596 utils: add some unit test cases
Signed-off-by: weizhen.zt <wodemia@linux.alibaba.com>
2023-11-09 10:24:45 +08:00
Yan Song 360b59fa98 docs: unify object_prefix field for oss/s3 backend
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 09:47:48 +08:00
Yan Song ea5db01442 docs: some improvements for usage
1. buildkit upstream follow-up is slow, update to nydusaccelerator/buildkit;
2. runtime-level snapshotter usage needs extra containerd patch;
3. add s3 storage backend example for nydusd doc page;

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-09 09:47:48 +08:00
hijackthe2 002b2f2c8a builder: fix assertion error by explicitly specifying type when building nydus in macos arm64 environment.
Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-07 13:42:04 +08:00
hijackthe2 89882a4002 storage: add some unit test cases
Some unit test cases are added for device.rs, meta/batch.rs, meta/chunk_info_v2.rs, meta/mod.rs, and meta/toc.rs in storage/src to increase code coverage.

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-07 09:13:12 +08:00
Yadong Ding 2fb293411d action: get latest tag by Github API
Use https://api.github.com/repos/Dragonflyoss/nydus/releases/latest to get the
latest tag of nydus, and use it in smoke/integration-test.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-07 09:08:14 +08:00
Junduo Dong 8b81a99108 contrib: correct parameter name
Signed-off-by: Junduo Dong <andj4cn@gmail.com>
2023-11-06 09:04:31 +08:00
hijackthe2 240af3e336 builder: add some unit test cases
Some unit test cases are added for compact.rs, lib.rs, merge.rs, stargz.rs, core/context.rs, and core/node.rs in builder/src to increase code coverage.

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 16:56:14 +08:00
hijackthe2 689900cc18 ci: add configurations to setup fscache
Since using `/dev/cachefiles` requires sudo mode, some environment variables are defined and we use `sudo -E` to pass them to sudo operations.

The script file for enabling fscache is misc/fscache/setup.sh

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
hijackthe2 cdc41de069 docs: add fscache configuration
Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
hijackthe2 3c57fc608c tests: add unit test case for blob_cache.rs, block_device.rs, fs_cache.rs, singleton.rs under service/src
1. In blob_cache.rs, two simple lines of code have been added to cover previously missed cases.
2. In block_device.rs, some test cases are added to cover function export(), block_size(), blocks_to_size(), and size_to_blocks().
3. In fs_cache.rs, some test cases are added to cover function try_from() for struct FsCacheMsgOpen and FsCacheMsgRead.
4. In singleton.rs, some test cases are added to cover function initialize_blob_cache() and initialize_fscache_service(). In addition, fscache must be correctly enabled first, as the device file `/dev/cachefiles` will be used by function initialize_fscache_service().

Signed-off-by: hijackthe2 <2948278083@qq.com>
2023-11-03 08:35:31 +08:00
Yadong Ding 4d4ebe66c0 go work: support go workspace mode and sync deps
We have multiple golang modules in the repo, and golang now supports workspaces,
see https://go.dev/blog/get-familiar-with-workspaces.
Use `go work sync` to synchronize versions of the same dependencies for different modules.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-11-02 22:28:39 +08:00
Yan Song ac55d7f932 smoke: add basic nydusify copy test
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 16:50:48 +08:00
Yan Song a478fb6e76 nydusify: fix copy race issue
1. Fix lost namespace on containerd image pull context:

```
pull source image: namespace is required: failed precondition
```

2. Fix possible semaphore Acquire race on the same one context:

```
panic: semaphore: released more than held
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 16:50:48 +08:00
Yan Song ace7c3633d smoke: fix stable version for compatibility test
And let's make the stable version name an env var.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-11-02 10:35:00 +08:00
dependabot[bot] 75c87e9e42 build(deps): bump rustix in /contrib/nydus-backend-proxy
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.36.8 to 0.36.17.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.36.8...v0.36.17)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-02 08:47:06 +08:00
Peng Tao d638eb26e1 smoke: test v2.2.3 by default
Let's make stable v2.2.y an LTS one.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-11-01 11:44:25 +08:00
Yan Song 34a09d87ce api: fix unsupported dummy cache type
The dummycache type is not handled in config validation:

```
ERROR [/src/fusedev.rs:595] service mount error: RAFS failed to handle request, Failed to load config: failed to parse configuration information`
ERROR [/src/error.rs:18] Stack:
   0: backtrace::backtrace::trace
   1: backtrace::capture::Backtrace::new

ERROR [/src/error.rs:19] Error:
        Rafs(LoadConfig(Custom { kind: InvalidInput, error: "failed to parse configuration information" }))
        at service/src/fusedev.rs:596
ERROR [src/bin/nydusd/main.rs:525] Failed in starting daemon:
Error: Custom { kind: Other, error: "" }
```

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 18:00:45 +08:00
Yadong Ding e64b912a10 action: rename images-service to nydus
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-10-31 14:10:16 +08:00
Yadong Ding 44149519d1 docs: replace images-service with nydus in links
Since https://github.com/dragonflyoss/nydus/issues/1405, we have changed the repo name to nydus.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-10-31 14:10:16 +08:00
Yan Song 55bba9d80b tests: remove useless rust smoke test
The rust integration test has been replaced with the go integration
test in smoke/tests, let's remove it.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 12:14:56 +08:00
Yan Song 47b62d978c contrib: remove unmaintained python integration test
The python integration test has gone too long without maintenance; it should
be replaced with the go integration test in smoke/tests.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-31 12:14:56 +08:00
Qinqi Qu f55d2c948f deps: bump google.golang.org/grpc to 1.59.0
1. Fix gRPC-Go HTTP/2 Rapid Reset vulnerability

Please refer to:
https://github.com/advisories/GHSA-m425-mq94-257g

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 16:13:49 +08:00
Qinqi Qu 69ddef9f4c smoke: replaces the io/ioutil API which was deprecated in go 1.19
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 15:19:30 +08:00
Qinqi Qu cb458bdea4 contrib: upgrade to go 1.20
Keep consistent with other components in the container ecosystem;
for example, containerd is using go 1.20.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-27 15:19:30 +08:00
YuQiang 46fc7249b4 update: integrate-acceld-cache
Integrate acceld cache module.
Signed-off-by: YuQiang <y_q_email@163.com>
2023-10-27 14:14:51 +08:00
linchuan 6dc9144193 enhance error handling with thiserror
Signed-off-by: linchuan <linchuan.jh@antgroup.com>
2023-10-27 10:27:24 +08:00
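A minimal example of the thiserror style this commit moves toward; the error type and variants are hypothetical, not the actual enums in the code base.

```rust
use thiserror::Error;

/// Hypothetical error type showing the derive-based style.
#[derive(Debug, Error)]
pub enum BackendError {
    #[error("backend I/O failure: {0}")]
    Io(#[from] std::io::Error),
    #[error("invalid configuration: {0}")]
    Config(String),
}
```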
hijackthe2 3bb124ba77 tests: add unit test case for service/src/upgrade.rs
Test type conversion between struct FailoverPolicy and String/&str.
2023-10-24 18:48:51 +08:00
liyaojie acb689f19b CI: fix the failed fsck patch apply in CI
Signed-off-by: liyaojie <lyj199907@outlook.com>
2023-10-24 15:40:42 +08:00
Yan Song 9632d18e0b api: fix the log message print in macro
Regardless of whether debug compilation is enabled, we should
always print error messages. Otherwise, some error logs may be
lost, making it difficult to debug the code.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-20 10:46:42 +08:00
Yan Song 0cad49a6bd storage: fix compatibility on fetching token for registry backend
The registry backend received an unauthorized error from Harbor registry
when fetching the registry token by HTTP GET method; the bug was introduced
from https://github.com/dragonflyoss/image-service/pull/1425/files#diff-f7ce8f265a570c66eae48c85e0f5b6f29fdaec9cf2ee2eded95810fe320d80e1L263.

We should insert the basic auth header to ensure the compatibility of
fetching token by HTTP GET method.

This refers to containerd implementation: dc7dba9c20/remotes/docker/auth/fetch.go (L187)

The change has been tested for Harbor v2.9.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-20 10:46:42 +08:00
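A sketch of the idea, using reqwest's blocking client for brevity (the real backend uses its own HTTP client): the GET token request carries the basic auth header so registries like Harbor accept it.

```rust
use reqwest::blocking::Client;

/// Fetch a registry token via HTTP GET, carrying basic auth credentials.
fn fetch_token(realm_url: &str, user: &str, pass: &str) -> reqwest::Result<String> {
    Client::new()
        .get(realm_url)
        .basic_auth(user, Some(pass))
        .send()?
        .text()
}
```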
Qinqi Qu 5c63ba924e deps: bump golang.org/x/net to v0.17.0
Fix the following 2 issues:
1. HTTP/2 rapid reset can cause excessive work in net/http
2. Improper rendering of text nodes in golang.org/x/net/html

Please refer to:
https://github.com/dragonflyoss/image-service/security/dependabot/95
https://github.com/dragonflyoss/image-service/security/dependabot/96
https://github.com/dragonflyoss/image-service/security/dependabot/97
https://github.com/dragonflyoss/image-service/security/dependabot/98
https://github.com/dragonflyoss/image-service/security/dependabot/99
https://github.com/dragonflyoss/image-service/security/dependabot/100

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-10-13 03:59:27 -05:00
zyfjeff 9ab1ec1297 Add --blob-cache-dir arg used to generate raw blob cache and meta
Generate blob cache and blob meta through the --blob-cache-dir parameter,
so that nydusd can be started directly from these two files without
going to the backend to download. This can improve the performance
of data loading in localfs mode.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-10-10 05:19:53 -05:00
Yan Song 6ea22ccd8a docs: update containerd integration tutorial
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-08 20:50:30 -05:00
Yan Song a9678d2c97 misc: remove outdated example doc
These docs and configs are poorly maintained, and they can be
replaced by the doc https://github.com/dragonflyoss/image-service/blob/master/docs/containerd-env-setup.md.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-10-08 20:50:30 -05:00
lihuahua123 d7b1851f42 storage: fix auth compatibility for registry backend
Signed-off-by: lihuahua123 <771725652@qq.com>
2023-09-27 10:49:32 +08:00
YuQiang aa9c95ab42 feat: notify inconsistent fs version problem with exit code
If acceld converts with a different fs version cache, it leads to an inconsistent fs version problem when merging into the bootstrap layer. So we need to notify acceld that an inconsistent version occurred and handle this error.

Signed-off-by: YuQiang <y_q_email@163.com>
2023-09-25 21:08:40 +08:00
zyfjeff b777564f45 Always use the blob id as the name of the filecache when using separate blobs
Before, we only had one blob, called a data blob, so when generating a filecache
we always used the id of this blob as the name of the filecache.
Later, after supporting separate blobs, we have two blobs, one a data blob and
the other a meta blob; in order to maintain compatibility,
we should always use the data blob id as the filecache name, not the meta blob id.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-09-25 11:45:46 +08:00
Junduo Dong 0cdf4725ac nydus-image: Fix blobs unpack bug
Signed-off-by: Junduo Dong <dongjunduo.djd@antgroup.com>
2023-09-25 10:40:41 +08:00
dependabot[bot] 35cd712d96 build(deps): bump github.com/cyphar/filepath-securejoin
Bumps [github.com/cyphar/filepath-securejoin](https://github.com/cyphar/filepath-securejoin) from 0.2.3 to 0.2.4.
- [Release notes](https://github.com/cyphar/filepath-securejoin/releases)
- [Commits](https://github.com/cyphar/filepath-securejoin/compare/v0.2.3...v0.2.4)

---
updated-dependencies:
- dependency-name: github.com/cyphar/filepath-securejoin
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-25 10:39:29 +08:00
dependabot[bot] 8b598f0060 build(deps): bump github.com/cyphar/filepath-securejoin
Bumps [github.com/cyphar/filepath-securejoin](https://github.com/cyphar/filepath-securejoin) from 0.2.3 to 0.2.4.
- [Release notes](https://github.com/cyphar/filepath-securejoin/releases)
- [Commits](https://github.com/cyphar/filepath-securejoin/compare/v0.2.3...v0.2.4)

---
updated-dependencies:
- dependency-name: github.com/cyphar/filepath-securejoin
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-25 10:39:19 +08:00
Junduo Dong 148cf96782 Fix no export subcmd panic on mac
Signed-off-by: Junduo Dong <dongjunduo.djd@antgroup.com>
2023-09-25 10:37:31 +08:00
Lin Wang 278915b4eb nydus-image: Optimize Chunkdict Save
Refactor the Deduplicate implementation to only
initialize config when inserting chunk data.
Simplify code for better maintainability.

Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2023-09-22 16:33:53 +08:00
Yan Song d2fcfcd56d action: update test branch for integration
We are focusing on v2.2 maintenance, so let's change the test branch
from `stable/v2.1` to `stable/v2.2`.

It also fixes the broken integration test:
https://github.com/dragonflyoss/image-service/actions/runs/6153232407

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-09-13 17:53:55 +08:00
Yadong Ding a35e634202 misc: rename vault from library to hashicorp
Upcoming in Vault 1.14, HashiCorp will stop publishing official Dockerhub images and publish only Verified Publisher images.
Users of Docker images should pull from hashicorp/vault instead of vault.
Verified Publisher images can be found at https://hub.docker.com/r/hashicorp/vault.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-09-08 09:45:59 +08:00
Jiang Liu 919e8ac534 nydus-overlayfs: filter option "io.katacontainers.volume"
Filter mount option "io.katacontainers.volume", which is a superset
of "extraoptions".

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-09-06 14:25:13 +08:00
Yan Song 1d93f129c9 storage: fix chunk map compatibility
The blob cache files of nydusd v2.2 and <=v2.1 are in different
formats, which are not compatible. We should use different chunk map
files for them, in order to upgrade or downgrade smoothly.

For the nydusd <=v2.1, the files in blob cache directory:

```
$blob_id
$blob_id.chunk_map
```

For the nydusd =v2.2, the files in blob cache directory:

```
$blob_id.blob.data
$blob_id.chunk_map
```

NOTE: nydusd (v2.2) may use the chunk map file of nydusd (<=v2.1),
which will cause corrupted blob cache data to be read.

For the nydusd of current patch, the files in blob cache directory:

```
$blob_id.blob.data
$blob_id.blob.data.chunk_map
```

NOTE: this will discard the old blob cache data and chunk map files.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-09-05 13:19:42 +08:00
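The naming rule above boils down to deriving the chunk map name from the data file name, so the two layouts can never share a chunk map. A trivial illustrative sketch (function and path handling are assumptions):

```rust
/// Derive the cache file names for the current layout: the chunk map is keyed
/// on the data file name, so it cannot collide with the <=v2.1 layout.
fn cache_paths(cache_dir: &str, blob_id: &str) -> (String, String) {
    let data = format!("{cache_dir}/{blob_id}.blob.data");
    let chunk_map = format!("{data}.chunk_map");
    (data, chunk_map)
}
```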
zyfjeff 631db29759 Add seekable method for TarReader, used to determine whether the current reader supports seek
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 15:38:40 +08:00
zyfjeff 55d8ac12f1 Add seekable method for TarReader, used to determine whether the current reader supports seek
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 15:38:40 +08:00
zyfjeff d54c43f59a add --original-blob-ids args for merge
By default, the merge command gets the name of the original
blob from the bootstrap name; add a cli arg to specify it.

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 14:07:17 +08:00
zyfjeff 0e2d72c59b bugfix: do not fill 0 buffer, and skip validate features
1. Resetting the buffer to 0 will cause a race during concurrency.

2. Previously, the second validate_header did not actually take effect. Now
it is repaired, and it was found that the features of blob info do not
set the --inline-bootstrap position to true, so the check of features is
temporarily skipped. This essentially needs to be fixed in upstream
nydus-image.

Signed-off-by: zhaoshang <zhaoshangsjtu@linux.alibaba.com>
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 12:02:07 +08:00
zyfjeff 49cc3f9c73 Support using /dev/stdin as the SOURCE path for image build
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-29 11:56:17 +08:00
zyfjeff 1abf0aeb84 Change /contrib/**/.vscode to **/.vscode
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-25 16:43:16 +08:00
zyfjeff 7455cdd233 Update cargo.lock to latest
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-25 16:43:16 +08:00
zyfjeff e5798eb228 Add vscode to gitignore for all contrib subdir
Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
2023-08-25 16:43:16 +08:00
Yan Song d9f8fa9c99 docs: add nydusify copy usage
Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2023-08-25 14:07:28 +08:00
Yan Song e4339ee2a2 nydusify: introduce copy subcommand
`nydusify copy` copies an image from source registry to target
registry; it also supports specifying a source backend storage.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-25 14:07:28 +08:00
David Baird 156ba6a8a3 Fix image-create with ACLs. Fixes #1394.
Signed-off-by: David Baird <dhbaird@gmail.com>
2023-08-17 10:28:22 +08:00
Qinqi Qu f3cdd071b0 deps: change tar-rs to upstream version
Since upstream tar-rs merged our fix for reading large uids/gids from
the PAX extension, change tar-rs back to the upstream version.

Update tar-rs dependency xattr to 1.0.1 as well.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-08 23:05:18 +08:00
Qinqi Qu 32143077d6 cargo: update rafs/storage/api/utils in cargo.lock
This change will be automatically generated during make.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-08 23:05:18 +08:00
Zhao Yuan 8b59e192bb nydusify chunkdict generate --sources
Add the 'nydus-image chunkdict save' command
with the "--sources" option followed by the nydus images in the registry
(e.g., 'registry.com/busybox:nydus-v1,registry.com/busybox:nydus-v2').

Signed-off-by: Zhao Yuan <1627990440@qq.com>
2023-08-08 15:34:07 +08:00
Lin Wang 8a9302402d nydus-image: Store chunk and blob metadata
Add functionality to store chunk and blob metadata
from nydus source images.
Use the 'nydus-image chunkdict save' command
with the '--bootstrap' option followed by the
path to the nydus bootstrap file (e.g., '~/output/nydus_bootstrap')

Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2023-08-08 15:32:02 +08:00
Lin Wang 4d0c0c08ff cargo: Add rusqlite package to dependencies
Update 'Cargo.toml' and 'Cargo.lock' files to include the 'rusqlite' package,
enabling interaction with SQLite databases in the project.

Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2023-08-08 15:32:02 +08:00
Peng Tao 34ee8255de cargo: bump rafs/storage/api/utils crate version
To publish them on crates.io.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-08-07 22:11:16 +08:00
Wei Zhang f41934993f service: print more error message
Some error messages were swallowed, which confuses users. For
example, for RAFSv6 we need to set the blobcache config in `localfs.json`
(following the docs tutorial); before this modification, the error message
indicated nothing:

```
ERROR [src/bin/nydusd/main.rs:525] Failed in starting daemon: Invalid
argument (os error 22)
```

After this modification, we get a clearer error message:

```
ERROR [/src/fusedev.rs:595] service mount error: RAFS failed to handle
request, Configure("Rafs v6 must have local blobcache configured")
```

Signed-off-by: Wei Zhang <weizhang555.zw@gmail.com>
2023-08-04 16:31:37 +08:00
Xuewei Niu 43c737d816 deps: Bump dependent crate versions
This pull request is mainly for updating vm-memory and vmm-sys-util.

The affected crates include:

- vm-memory: from 0.9.0 to 0.10.0;
- vmm-sys-util: from 0.10.0 to 0.11.0;
- vhost: from 0.5.0 to 0.6.0;
- virtio-queue: from 0.6.0 to 0.7.0
- fuse-backend-rs: from 0.10.4 to 0.10.5
- vhost-user-backend: from 0.7.0 to 0.8.0

Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
2023-08-04 14:06:50 +08:00
Qinqi Qu a295c5429b deps: update tar-rs to handle very large uid/gid in image unpack
Update tar-rs to support reading large uid/gid values from PAX extensions, to
fix very large UIDs/GIDs (>=2097151, the limit of USTAR tar) being lost in
PAX-style tars during unpack.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-08-03 16:39:33 +08:00
Yan Song 48762896e5 nydusify: support --with-referrer option
With this option, we can track all nydus images associated with
an OCI image. For example, in Harbor we can cascade to show nydus
images linked to an OCI image, deleting the OCI image can also delete
the corresponding nydus images. At runtime, nydus snapshotter can also
automatically upgrade an OCI image run to nydus image.

Prior to this PR, we had enabled this feature by default. However,
it is now known that Docker Hub does not yet support Referrer.

Therefore, add this option and disable the feature by default,
to ensure broad compatibility with various image registries.

Fix https://github.com/dragonflyoss/image-service/issues/1363.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-01 13:26:18 +08:00
Yan Song 69e6874d2c dep: upgrade nydus-snapshotter & acceleration-service package
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-08-01 13:26:18 +08:00
Jiang Liu ddb4627b7a builder: optimize tarfs building speed by skipping file content
The tarfs crate provides a seekable reader to iterate entries in a tar
file, so optimize the tarfs building speed by skipping file content.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-31 10:58:08 +09:00
Bin Tang 82ebd11ab8 parse image pull auth from env (#1382)
* nydusd: parse image pull auth from env

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>

* docs: introduce IMAGE_PULL_AUTH env

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>

* fs: add test for filling auth

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>

---------

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-07-27 11:39:28 +08:00
Zhang Tianci 0c01dacf2e builder: add a trace log for building v5 image
Signed-off-by: Zhang Tianci <zhangtianci.1997@bytedance.com>
2023-07-25 14:42:05 +08:00
Zhang Tianci 471a7370cc nydusctl: fixup umount argument usage
Signed-off-by: Zhang Tianci <zhangtianci.1997@bytedance.com>
2023-07-25 14:42:05 +08:00
Yan Song 58842cbfd1 storage: adjust token refresh interval automatically
- Make registry mirror log pretty;
- Adjust token refresh interval automatically;

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-25 14:40:42 +08:00
Yan Song 01c58e00b3 storage: remove auth_through option for registry mirror
The auth_through option adds user burden to configure the mirror
and understand its meaning, and since we have optimized handling
of concurrent token requests, this option can now be removed.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-25 14:40:42 +08:00
Yan Song 4eda4266dc storage: implement simpler first token request
Nydusd uses a registry backend which generates a surge of blob requests without
auth tokens on initial startup. This caused mirror backends (e.g. dragonfly)
to process very slowly; this commit fixes the problem.

It implements waiting for the first blob request to complete before making other
blob requests, which ensures the first request caches a valid registry auth token,
and subsequent concurrent blob requests can reuse the cached token.

This change is worthwhile to reduce concurrent token requests, and it also makes the
behavior consistent with containerd, which first requests the image manifest and
caches the token before concurrently requesting blobs.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-25 14:40:42 +08:00
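The gist is a "first request wins" gate: one request fetches and caches the token while the others wait, then reuse it. A minimal sketch with std's `OnceLock`; the real registry backend is more involved.

```rust
use std::sync::OnceLock;

static TOKEN: OnceLock<String> = OnceLock::new();

/// The first caller runs `fetch` and caches its result; concurrent callers
/// block until the token is available, then reuse it.
fn cached_token(fetch: impl FnOnce() -> String) -> &'static str {
    TOKEN.get_or_init(fetch)
}
```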
Jiang Liu be52ebd28b storage: support manually add blob object to localdisk backend driver
Enhance the localdisk storage backend, so we can manually add blob
objects on the disk, in addition to discovering blob objects by
scanning the GPT partition table.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-18 10:07:20 +08:00
Jiang Liu d5cdc78d8e storage: use File instead of RawFd to avoid possible race conditions
Use File instead of RawFd in struct LocalDiskBlob to avoid possible
race conditions.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-18 10:07:20 +08:00
Jiang Liu d834fba87a storage: introduce feature `backend-localdisk-gpt`
Introduce feature `backend-localdisk-gpt` for localdisk storage backend,
so it can be optionally disabled.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-18 10:07:20 +08:00
Yiqun Leng 4e3c954702 use a new nydus image for ci test
The network is not stable when pulling the old image, which may result in
ci test failure, so use a new image instead.

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-07-13 10:19:57 +08:00
Peng Tao 3de2025495 Makefile: allow to build debug version
We still build release version by default, but make sure that `make build`
only generates a debug version nydusd.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-07-12 15:26:10 +08:00
ccx1024cc c8a39c876a fix: amplify io is too large to hold in fuse buffer (#1311)
* fix: amplify io is too large to hold in fuse buffer

The fuse request buffer size is fixed at `FUSE_KERN_BUF_SIZE * pagesize() + FUSE_HEADER_SIZE`. When amplify io is larger than that, FuseDevWriter suffers from a smaller buffer. As a result, an invalid data error is returned.

Reproduction:
    run nydusd with 3MB amplify_io
    error from random io:
        reply error header OutHeader { len: 16, error: -5, unique: 108 }, error Custom { kind: InvalidData, error: "data out of range, available 1052656 requested 1250066" }

Details:
    size of fuse buffer = 1052656 + 16 (size of inner header) = 256(page number) * 4096(page size) + 4096(fuse header)
    let amplify_io = min(user_specified, fuseWriter.available_bytes())

Resolution:
    This PR is not the best implementation, but it is independent of modifications to [fuse-backend-rs]("https://github.com/cloud-hypervisor/fuse-backend-rs").
    In future, evaluation of amplify_io will be replaced with [ZeroCopyWriter.available_bytes()]("https://github.com/cloud-hypervisor/fuse-backend-rs/pull/135").

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>

* feat: e2e for amplify io larger than fuse buffer

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>

---------

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
Co-authored-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-12 08:59:50 +08:00
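The fix effectively clamps the configured amplification to what the fuse writer can hold, per the `min(...)` in the details above; a trivial sketch with hypothetical names:

```rust
/// Clamp the user-configured amplify_io to the writer's available capacity so
/// a reply never exceeds the fixed fuse request buffer.
fn effective_amplify_io(user_specified: usize, writer_available: usize) -> usize {
    user_specified.min(writer_available)
}
```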
泰友 31f2170bb9 fix: large files broke prefetch
Files larger than 4G lead to a prefetch panic, because the max blob io
range is smaller than 4G. This PR changes the blob io max size from u32 to
u64.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 19:09:41 +08:00
泰友 9bb51517be fix: deprecated docker field leads to failure of nydusify check
`NydusImage.Config.Config.ArgsEscaped` is present only for legacy compatibility
with Docker and should not be used by new image builders. Nydusify (1.6 and
above) ignores it, which is an expected behavior.

This PR ignores comparison of this field in the nydusify check, which otherwise leads to failure.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-11 09:49:19 +08:00
xwb1136021767 0c225cae10 nydus-image: add unit test for setting default compression algorithm
Signed-off-by: xwb1136021767 <weibinxue@foxmail.com>
2023-07-10 22:24:56 +08:00
Yiqun Leng 1ae9800512 fix incidental bugs in ci test
1. sleep for a while after restart containerd
2. only show detailed logs when test failed

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-07-10 16:52:51 +08:00
kangkexi ba3c8fae62 Update docs
Signed-off-by: kangkexi <kangkexi@megvii.com>
2023-07-07 14:49:02 +08:00
kangkexi ad92996726 update docs about using runtime-level snapshotter
Signed-off-by: kangkexi <kangkexi@megvii.com>
2023-07-07 14:49:02 +08:00
kangkexi b2e507350d docs: add containerd runtime-level snapshotter usage for nydus
Signed-off-by: kangkexi <kangkexi@megvii.com>
2023-07-07 14:49:02 +08:00
taohong 98834dd4ef tests: add encrypt integration test
Add image encryption test integration case to Smoke test.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-07-06 13:52:50 +08:00
taohong 94c6378ed1 feat: nydus support encrypted images
Extend native nydus v6 to support handling encrypted
containers images:
* An encrypted nydus image is composed of encrypted
bootstrap and chunk-level encrypted data blobs. The
bootstrap is encrypted by the Ocicrypt and the data
blobs are encrypted by aes-128-xts with randomly
generated key and iv at chunk-level.
* For every data blob, all the chunk data, the compression
context table and the compression context table header
are encrypted.
* The chunk encryption key and iv are stored in the blob
info reusing some items of the structure to save reserved
space.
* Encrypted chunk data will be decrypted and then be
decompressed while be fetched by the storage backend.
* Encrypted or unencrypted blobs can be merged together.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-07-06 13:52:50 +08:00
Qinqi Qu 62643677d0 action: reduce the number of times the codecov tool sends comments
This patch alleviates the problem of codecov frequently sending
emails to users when a PR is updated.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-07-06 12:19:00 +08:00
泰友 5db3f0ac33 fix: merge io from same blob panic
When merging io from the same blob with different ids, an assertion breaks.
Images without blob deduplication suffer from it.

This PR removes the assertion that requires merging within the same blob index.
By design, this makes sense, because different blob layers may share the same
blob file. A continuous read from the same blob for different layers is
helpful for performance.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-07-06 12:12:28 +08:00
Qinqi Qu 7d5cb1adfd docs: update the OpenAnolis kernel installation guide in fscache doc.
OpenAnolis has supported fscache mode since kernel version
4.19.91-27 or 5.10.134-12.

Fix: #1342

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-07-04 20:53:53 +08:00
Yan Song c1247fdce1 nydusify: bump github.com/goharbor/acceleration-service v0.2.5
To bring some internal changes and features:

https://github.com/goharbor/acceleration-service/releases/tag/v0.2.5

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-07-03 18:37:25 +08:00
Jiang Liu 662117a065 rafs: add special handling of invalid zero blob index
The rafs v6 format reserves blob index 0 for meta blobs, so ensure
invalid zero blob index doesn't cause abnormal behavior.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-27 17:24:17 +08:00
lihuahua123 65cf530f64 nydusify: update the doc of nydusify about the subcommand mount
Signed-off-by: lihuahua123 <771725652@qq.com>
2023-06-25 09:38:39 +08:00
Jiang Liu ee433ab1d7 dep: upgrade base64 to v0.21
Upgrade base64 to v0.21, to avoid multiple versions of the base64
crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 11:56:32 +08:00
Jiang Liu 0f628fb804 storage: introduce feature flag `prefetch-rate-limit`
Introduce feature flag `prefetch-rate-limit` to reduce dependencies
of the nydus-service crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 11:56:32 +08:00
Jiang Liu 7ec8fd75b1 api: introduce feature `error-backtrace`
Introduce feature `error-backtrace` to reduce dependencies of the
nydus-service crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 11:56:32 +08:00
Jiang Liu ea32ee4408 dep: upgrade openssl to 0.10.55 to fix cve warnings
error[vulnerability]: `openssl` `X509VerifyParamRef::set_host` buffer over-read
    ┌─ /github/workspace/Cargo.lock:122:1
    │
122 │ openssl 0.10.48 registry+https://github.com/rust-lang/crates.io-index
    │ --------------------------------------------------------------------- security vulnerability detected
    │
    = ID: RUSTSEC-2023-0044
    = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0044
    = When this function was passed an empty string, `openssl` would attempt to call `strlen` on it, reading arbitrary memory until it reached a NUL byte.
    = Announcement: https://github.com/sfackler/rust-openssl/issues/1965
    = Solution: Upgrade to >=0.10.55

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 10:07:00 +08:00
Jiang Liu f8b561aacc rafs: enhance rafs to support inspecting rafs v6 raw block image
The rafs core assumes metadata is 4k-aligned, so it fails to inspect
raw block images generated from tarfs images, which are 512-byte
aligned.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 10:07:00 +08:00
Jiang Liu 2b6d6ea2db service: refine block device implementation
Refine block device implementation by:
1) limit number of blocks to u32::MAX
2) rename BlockDevice::new() to new_with_cache_manager()
3) introduce another implementation of BlockDevice::new()

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-21 10:07:00 +08:00
Yadong Ding 1339af4996 gha: add some descriptions for convert ci
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-06-20 16:39:36 +08:00
lihuahua123 66761f2ddd Nydusify: fix some bugs about the subcommand mount of nydusify
- The `nydusify mount` subcommand doesn't require the `--backend-type` and `--backend-config` options when the backend is a registry.
    - The way to resolve this is to get the `--backend-type` and `--backend-config` options from the docker configuration.
    - Also, we have refactored the code of the checker module in order to reuse the code.

Signed-off-by: lihuahua123 <771725652@qq.com>
2023-06-16 15:27:34 +08:00
killagu 3b71868e08 ci(release): fix macos nydusd rust target
Cannot use `declare -A` in the macos shell.

Signed-off-by: killagu <killa123@126.com>
2023-06-16 15:23:05 +08:00
Jiang Liu 9d89b8d193 service: prepare for publishing v0.2.1
Prepare for publishing v0.2.1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu 9a1524b6be rafs: publish nydus-rafs v0.3.1
Publish nydus-rafs v0.3.1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu aa2305beb1 storage: publish nydus-storage v0.6.3
Publish nydus-storage v0.6.3.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu fdd99e3962 utils: publish nydus-utils v0.4.2
Publish nydus-utils v0.4.2.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu caa7d055c4 api: publish v0.3.0
Publish nydus-api v0.3.0.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 15:05:09 +08:00
Jiang Liu cce78d4663 builder: split out builder into a dedicated crate
Split out the builder into a dedicated nydus-builder crate, to reduce
dependencies of the nydus-rafs crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-06-16 11:47:45 +08:00
Wenhao Ren 3ab3a759b1 smoke: add integration test for batch chunk mode
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
Wenhao Ren 2ec92e1513 storage: add runtime prefetch support for batch chunk
Add prefetch range calculation for batch chunk.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
Wenhao Ren 827d953b84 storage: add runtime support for batch chunk
Add region calculation and batch chunk decompression capability for nydusd.
Prefetch is not yet supported for batch chunks.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
Wenhao Ren c089873d61 storage: add basic runtime support for batch chunk
1. Add utility functions for batch chunks to help retrieve batch information.
2. Implement `Default` for `BlobCompressionContext` for simplification.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
Wenhao Ren 830bfacdff rafs: Terminate the build of a buffered batch chunk if a large chunk is encountered
Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-06-15 16:25:44 +08:00
killagu b4c76cf2dd ci(release): add macos arm64 artifact
Signed-off-by: killagu <killa123@126.com>
2023-06-12 15:28:40 +08:00
Liu Bo 5b15922ea0 Rafs: Add missing hex prefix
Without the hex prefix, it is a little confusing when debugging flags problems.

Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
2023-06-02 13:57:42 +08:00
Huang Jianan a88a2e88aa builder: set the default compression algorithm for meta ci to lz4
We set the compression algorithm of meta ci to zstd by default, but there
is no option for nydus-image to configure it.

This could cause compatibility problems on nydus versions that do
not support zstd. Let's reset it to lz4 by default.

Signed-off-by: Huang Jianan <jnhuang95@gmail.com>
2023-06-02 10:12:17 +08:00
Jiang Liu 08c3d0fa83 builder: fix a compilation failure on macos
Fix a compilation failure on macos.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-28 19:58:01 +08:00
Jiang Liu 92e6340a6f builder: correctly generate nid for v6 inodes
The `nid` is not actually used yet, but we should still generate it
with correct value.

Fixes: https://github.com/dragonflyoss/image-service/issues/1301

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-28 19:58:01 +08:00
Jiang Liu 23c61104a9 smoke: use v2.1.6 instead of v2.1.4
Update smoke tests to use the latest v2.1.6 instead of v2.1.4.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-25 18:29:39 +08:00
Jiang Liu 8517120a47 error: merge crate nydus-error into nydus-utils and nydus-api
Merge crate nydus-error into nydus-utils and nydus-api, to reduce
number of crates.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-24 21:35:43 -07:00
Jiang Liu 43e651802d builder: delay free data structure to reduce image build time
According to the perf flame graph, it takes a long time to free objects
used by the image builder. In the most common case, the builder only
runs once and exits, so it's unnecessary to free those used objects.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-25 08:50:33 +08:00
Jiang Liu 8a413345ac rafs: enhance blobfs to support read() operation
Enhance blobfs to support read() operation, in addition to DAX.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-24 17:34:58 -07:00
Jiang Liu 07437542c3 rafs: cache blobfs inode information
Cache blobfs inode information to avoid opening file on every dax
window operations.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-24 17:34:58 -07:00
Jiang Liu 17573f8610 rafs: use rwlock instead of mutex for blobfs
Use rwlock instead of mutex for blobfs, to avoid serialization.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-24 17:34:58 -07:00
Jiang Liu 809f8d9727 rafs: optimize the way to build RAFS filesystem
The current way to build RAFS filesystem is:
- build the lower tree from parent bootstrap
- convert the lower tree into an array
- build the upper tree from source
- merge the upper tree into the lower tree
- convert the merged tree into another array
- dump nodes from the array

Now we optimize it as:
- build the lower tree from parent bootstrap
- build the upper tree from source
- merge the upper tree into the lower tree
- dump the merged tree

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-23 10:47:07 +08:00
Jiang Liu 80bd7dca34 rafs: rename set_4k_aligned() to set_aligned()
Rename set_4k_aligned() to set_aligned(), for tarfs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-23 10:47:07 +08:00
Qinqi Qu 17fd41c9a0 action: fix failing test test_large_file for v5 image temporarily
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-22 20:45:01 +08:00
Qinqi Qu 5be4be9ab5 action: fix pytest failing to install in integration tests.
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-22 20:45:01 +08:00
Jiang Liu 5879c91864 blobfs: merge crate blobfs into crate rafs
Merge crate blobfs into crate rafs, to reduce number of crates.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-22 11:13:21 +08:00
Qinqi Qu 197795d02a cargo: fix fuse-backend-rs dependency in cargo.toml
The previous change #1283 only upgraded the versions in cargo.lock;
we should also upgrade cargo.toml.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-18 22:30:45 -07:00
Qinqi Qu 8ebccf2e69 docs: add pull request and issue templates
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-19 11:26:35 +08:00
Jiang Liu d751679923 api: merge crate nydus-app into crate nydus
Merge crate nydus-app into crate nydus, to reduce number of crates.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-18 01:46:37 -07:00
Yadong Ding faa41163b7 action: benchmark add description
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-17 22:52:08 +08:00
Peng Tao 29260393ed cargo: update fuse-backend-rs dependency
To fetch several critical fixes.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-05-17 17:27:33 +08:00
Yadong Ding 29b5d5dfc4 action: clean cache after branch closes
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-17 17:24:01 +08:00
Yadong Ding 5f76e8bd9c action: convert ci show metrics during conversion
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-17 11:21:11 +08:00
Yadong Ding 49c2a9f100 fix: nydusify save metric to specify the file path
We should not save metrics in the work directory, otherwise they will be cleared.
We need the user to specify the file path, so just change output-json to a string option.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-17 11:21:11 +08:00
Peng Tao 0a88fda5a2 smoke: no need to run vet and lint
There is no need to run vet and lint on the test code.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-05-16 17:27:01 +08:00
Jiang Liu 293b032d7f rafs: add root inode into inode map when building RAFS
Add root inode into inode map when building RAFS filesystem,
so RAFS v5 gets correct inode number counts.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-16 16:03:37 +08:00
Jiang Liu 251730990e rafs: avoid a debug_assert related to v5 amplify io
In function RafsSuper::amplify_io(), if the next inode `ni` is
zero-sized, the debug assertion in function calculate_bio_chunk_index()
(rafs/src/metadata/layout/v5.rs) will get triggered. So zero-sized
files should be skipped by amplify_io().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-05-16 16:03:37 +08:00
Yadong Ding c7c9fad14a nydusify: add new option output-json
During conversion, we can collect metrics such as image size and conversion time.
Nydusify can dump the metrics to a local file if the user needs them.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-15 22:52:13 +08:00
Yadong Ding 3c31e133ac nydusify: update acceleration-service version
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-15 22:52:13 +08:00
Yadong Ding db0cc412bb action: benchmark more images on schedule
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-15 15:00:18 +08:00
Yadong Ding 6139399f5d misc: benchmark delete tmp file
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-15 15:00:18 +08:00
dependabot[bot] 7eaea415f7 build(deps): bump github.com/docker/distribution in /contrib/nydusify
Bumps [github.com/docker/distribution](https://github.com/docker/distribution) from 2.8.1+incompatible to 2.8.2+incompatible.
- [Release notes](https://github.com/docker/distribution/releases)
- [Commits](https://github.com/docker/distribution/compare/v2.8.1...v2.8.2)

---
updated-dependencies:
- dependency-name: github.com/docker/distribution
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-12 10:52:59 +08:00
Qinqi Qu f09b579bfe misc: reorganize the configuration file of nydusd
1. Move configuration files from docs/samples to misc/configs
2. Fix incomplete configuration in docs/nydusd.md
3. Update outdated nydusd-config.json from nydus-snapshotter repo

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-05-12 09:11:36 +08:00
Yadong Ding 9d87631171 misc: benchmark metrics support more images
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 17:19:30 +08:00
Yadong Ding ad8f870344 misc: benchmark support more images
add support for golang, java(amazoncorretto), ruby, and python.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 17:19:30 +08:00
Yadong Ding 44f3b16c22 action: use benchmark runtime image arg
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 17:19:30 +08:00
Yadong Ding c2c79a21ec misc: move benchmark image from config to runtime
to support running more images in the benchmark.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 17:19:30 +08:00
Yadong Ding 2f743c8a53 action: update actions version to use Node.js 16
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 09:57:59 +08:00
Yadong Ding 5a179bedf2 action: remove target-dir input in rust cache
Since we updated the rust cache version from v1 to v2.2.0, target-dir
is useless, and ./target is cached by default.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-08 09:57:59 +08:00
Yadong Ding c2637eead0 action: benchmark will be triggered by pr
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-06 13:44:21 +08:00
Yadong Ding 348ac74554 misc: fix panic when benchmark on schedule runs without cache
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-06 13:44:21 +08:00
Yadong Ding 6d1d56e3d6 action: use the same version rust-cache@v2.2.0
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-06 11:24:34 +08:00
Yadong Ding 0e4135cba7 action: reuse rust cache by shared-key
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-06 11:24:34 +08:00
Yadong Ding 7a48992bce misc: support benchmark on schedule
Support the benchmark on schedule via a new mode in benchmark_summary.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-05 14:17:52 +08:00
Yadong Ding 5bfb155780 action: add benchmark on schedule
We will run the benchmark twice per week, and compare the result with the last one via cache.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2023-05-05 14:17:52 +08:00
Qinqi Qu 35aa3a2b08 nydusify: add some unit tests for pkg/utils and cmd/nydusify
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-28 17:50:32 +08:00
Qinqi Qu c5fdfda77c nydusify: add unit test coverage output
1. Introduce `make coverage` to print coverage in console.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-28 17:50:32 +08:00
Huang Jianan 8743e81b3b contrib: support nydus-overlayfs and ctr-remote on different platforms
Otherwise, the binary we compiled cannot run on other platforms such as
arm.

Signed-off-by: Huang Jianan <jnhuang@linux.alibaba.com>
2023-04-28 17:49:45 +08:00
Jiang Liu f11391d476 storage: refine the way to define compression algorithms
Reserve 4 bits to store toc compression algorithms, and use enumeration
instead of bitmask for algorithms.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-27 10:15:44 +08:00
Desiki-high 944cc69a3e misc: make benchmark summary more clear
All the data is rounded to two decimal places. When the gap between the
current pr and master is over five percent of master, we will add ↑ or ↓.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-27 10:12:36 +08:00
Qinqi Qu 44b70fa07b action: replace cargo test with cargo nextest in CI
1. Improve test speed and present test results concisely.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-25 17:55:25 +08:00
Desiki-high 7edea8a0e3 misc: add image-size for benchmark
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-25 15:48:08 +08:00
Desiki-high 730c9bfe85 action: add benchmark-compare for PR
Compare the benchmark result between the PR and master when the smoke test is triggered by a pull request.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-25 15:48:08 +08:00
Desiki-high 849e4f3abd refactor: use python refactor benchmark_summary.sh
1. Refactor the script in Python.
2. Add the `mode` argument to support the two benchmark summary modes.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-25 15:48:08 +08:00
Desiki-high a78db8ba4a feat: support the batch-size argument for merging small chunks in nydusify
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-24 10:26:50 +08:00
Qinqi Qu ca8dab805a ctr-remote: update containerd to v1.7.0 and fix lint error
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-23 13:54:17 +08:00
Desiki-high 05fff6e939 misc: add read-amount and read-count for benchmark
We should add read-amount and read-count to the nydus benchmark to
compare nydus with zran and different batch sizes.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-23 10:39:59 +08:00
Jiang Liu 27fa97393d nydus-image: optimize the way to generate tarfs
Optimize the way to generate tarfs from tar file, to reduce memory
and time consumption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-23 10:05:20 +08:00
Jiang Liu d9fe6d8c19 dep: update dependency to fix a CVE warning
error[vulnerability]: Resource exhaustion vulnerability in h2 may lead to Denial of Service (DoS)
   ┌─ /github/workspace/Cargo.lock:68:1
   │
68 │ h2 0.3.13 registry+https://github.com/rust-lang/crates.io-index
   │ --------------------------------------------------------------- security vulnerability detected
   │
   = ID: RUSTSEC-2023-0034
   = Advisory: https://rustsec.org/advisories/RUSTSEC-2023-0034
   = If an attacker is able to flood the network with pairs of `HEADERS`/`RST_STREAM` frames, such that the `h2` application is not able to accept them faster than the bytes are received, the pending accept queue can grow in memory usage. Being able to do this consistently can result in excessive memory use, and eventually trigger Out Of Memory.

     This flaw is corrected in [hyperium/h2#668](https://github.com/hyperium/h2/pull/668), which restricts remote reset stream count by default.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-23 10:05:20 +08:00
Desiki-high 1088f47394 action: add zran-no-prefetch in benchmark
1. add the zran-without-prefetch benchmark.
2. move the common steps to prepare_env.sh.
3. move the benchmark summary script to benchmark_summary.sh.
4. change the benchmark-result order and enable it on push and schedule.
5. pin the wordpress tag to the stable 6.1.1.
6. delete the artifacts after benchmark-result downloads all artifacts.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-22 16:21:32 +08:00
Desiki-high 6cd8781459 misc: create shell for benchmark
1. prepare_env.sh to prepare the container environment.
2. benchmark_summary.sh for the benchmark-result job to summarize results.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-22 16:21:32 +08:00
Desiki-high faa10b7e8c misc: delete unused benchmark code
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-22 16:21:32 +08:00
Desiki-high b712c6e528 docs: add codecov
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-22 12:04:44 +08:00
Yan Song 0d2958e6a8 docs: update the perf graph
Keep it simple and clean.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-21 20:22:41 +08:00
YanSong b937989f56 action: fix checkout on pull_request_target
The `pull_request_target` trigger checks out the master branch
code by default, but we need to use the new PR code for the smoke test.

See: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-21 13:07:03 +08:00
Jiang Liu ca9f7a8087 rafs: minor optimization for tarfs builder
Minor optimization for tarfs builder.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-20 18:33:50 +08:00
Yan Song 79f4a685c9 action: fix smoke test for branch pattern
To match `master` and `stable/*` branches at least.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-20 16:21:35 +08:00
Yan Song 39eed8cd19 action: allow running smoke test for stable/* branch
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-20 15:27:18 +08:00
Desiki-high 36d8a5b4eb change the comment content
Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-20 09:58:19 +08:00
Desiki-high 9b699fa6d9 add the benchmark test for nydus image
1. add the benchmark scripts in misc/benchmark
2. add five benchmark jobs in the smoke test and a benchmark-result job to show the benchmark result in the PR comment

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-20 09:58:19 +08:00
泰友 1a934b6f77 feat: add more types of file to smoke
Including:
    * regular file with chinese name
    * regular file with long name
    * symbolic link of deleted file
    * large regular file of 13MB
    * regular file with hole at both head and tail
    * empty regular file

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-04-20 09:30:35 +08:00
Qinqi Qu 34710b5837 action: add unit test coverage check workflow
1. Introduce `make coverage` to print coverage in console.
2. Github CI use `make coverage-codecov` to get coverage info.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-19 22:26:48 +08:00
Eryu Guan 8671b0aa11
misc: update toolchain to 1.68.2 and fix clippy warnings (#1227)
Signed-off-by: Eryu Guan <eguan@linux.alibaba.com>
2023-04-19 17:15:56 +08:00
Wenhao Ren e8ba11ae40 rafs: enhance builder to support batch chunk
Add `--batch-size` subcommand on nydus-image.
Add build time support of batch chunk.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-04-18 19:25:32 +08:00
Wenhao Ren 180f6d2c9a storage: introduce BatchInflateContext to support batch chunk
Enhance chunk info to support batch chunk.
Introduce BatchInflateContext and generator.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-04-18 19:25:32 +08:00
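The two batch-chunk entries above describe the feature only at a high level. As a rough sketch under assumed names (not the actual nydus-rafs/nydus-storage code), small chunks can be appended to a shared batch buffer and flushed as one unit once the configured `--batch-size` is reached:

```rust
// Illustrative only: accumulate small chunks and hand back a full batch for
// compression and dumping; the real builder tracks offsets and metadata as well.
struct BatchBuffer {
    batch_size: usize,
    data: Vec<u8>,
}

impl BatchBuffer {
    fn new(batch_size: usize) -> Self {
        Self { batch_size, data: Vec::new() }
    }

    /// Append one small chunk; return the filled batch when it is ready.
    fn push(&mut self, chunk: &[u8]) -> Option<Vec<u8>> {
        self.data.extend_from_slice(chunk);
        if self.data.len() >= self.batch_size {
            Some(std::mem::take(&mut self.data))
        } else {
            None
        }
    }
}

fn main() {
    let mut batch = BatchBuffer::new(8);
    assert!(batch.push(b"abcd").is_none());
    assert!(batch.push(b"efgh").is_some()); // 8 bytes reached, flush the batch
}
```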
Wenhao Ren d0ae0d574e rafs: reuse chunk data compress and write procedure
Refactor `Node::dump_file_chunk()` to reuse data compress and write procedure.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-04-18 19:25:32 +08:00
Wenhao Ren fb9560b5d0 storage: check `zran` flag before set `zran` values
Check `zran` flag before set `zran` values.
Refine comments.

Signed-off-by: Wenhao Ren <wenhaoren@mail.dlut.edu.cn>
2023-04-18 19:25:32 +08:00
Jiang Liu 6b78bd1be0 rafs: optimize Node::name() to reduce image build time
According to perf flamegraph, Node::name() costs too much time
when generating nydus images from tar files. So optimize it.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-18 14:52:47 +08:00
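The commit above does not spell out how Node::name() was optimized; one plausible approach, caching the file name once at construction time instead of recomputing it from the path, is sketched here purely for illustration (struct and field names are assumptions):

```rust
// Hypothetical sketch: cache the name so Node::name() becomes a cheap accessor.
use std::ffi::OsString;
use std::path::PathBuf;

struct Node {
    path: PathBuf,
    name: OsString, // computed once instead of on every call
}

impl Node {
    fn new(path: PathBuf) -> Self {
        let name = path
            .file_name()
            .map(|n| n.to_os_string())
            .unwrap_or_else(|| OsString::from("/"));
        Node { path, name }
    }

    fn name(&self) -> &OsString {
        &self.name
    }
}

fn main() {
    let node = Node::new(PathBuf::from("/a/b/c.txt"));
    assert_eq!(node.name().to_string_lossy(), "c.txt");
    let _ = &node.path;
}
```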
Desiki-high 52d563999d docs: add the tip for nydus-zran
We should tell users that the nydus zran image must be in the same namespace as the OCI image.

Signed-off-by: Desiki-high <ding_yadong@foxmail.com>
2023-04-18 14:51:56 +08:00
imeoer 0dc95f8fda
Merge pull request #1192 from jiangliu/encrypt
Enhance file cache to encrypt data written to the cache file
2023-04-17 14:29:56 +08:00
Jiang Liu 2a23e99589 storage: encrypt data in local cache file
Encrypt data before writing data to local cache file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu 37273bfbcf api: add encryption configuration to file cache
Add encryption configuration to file cache, so we can encrypt data
written to the local cache file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu f82cf6d144 storage: introduce struct CipherContext
Introduce struct CipherContext for data encryption/decryption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu 5f1fc40ac4 storage: add fields for chunk encryption
Add data fields to BlobInfo and CacheFile for chunk encryption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu d31c3b31c9 storage: add flag to indicate encrypted data chunk
Add method and flag to indicate that a data chunk is encrypted or not.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-15 16:33:14 +08:00
Jiang Liu a3eb243d66
Merge pull request #1218 from taoohong/mushu/fuse_backend
service: add a function to help create fuse vfs backend
2023-04-15 16:03:47 +08:00
taohong f31e930f88 service: add a function to help create fuse vfs backend
Add a function to help create fuse vfs backend,
reduce explicit references to crate fuse_backend_rs.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-04-15 14:34:06 +08:00
Jiang Liu 80ede7528e
Merge pull request #1213 from jiangliu/dir-entry-name
rafs: fix a regression caused by commit 2616fb2c05
2023-04-14 17:43:36 +08:00
Jiang Liu 56c48bcccb rafs: fix a regression caused by commit 2616fb2c05
Fix a regression caused by commit 2616fb2c05.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-14 17:03:20 +08:00
Jiang Liu a57a97b1f2
Merge pull request #1208 from jiangliu/v6-dir-size
rafs: fix a possible bug in v6_dirent_size()
2023-04-14 15:44:54 +08:00
Jiang Liu b29e4aa7f6
Merge pull request #1212 from dragonflyoss/dependabot/cargo/contrib/nydus-backend-proxy/h2-0.3.17
build(deps): bump h2 from 0.3.13 to 0.3.17 in /contrib/nydus-backend-proxy
2023-04-14 14:03:26 +08:00
dependabot[bot] 99a75addc7
build(deps): bump h2 in /contrib/nydus-backend-proxy
Bumps [h2](https://github.com/hyperium/h2) from 0.3.13 to 0.3.17.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.13...v0.3.17)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-14 03:46:59 +00:00
Jiang Liu 5a83128561 rafs: fix a possible bug in v6_dirent_size()
Function Node::v6_dirent_size() may return a wrong result when "." and
".." are not the first and second entries in the sorted dirent array.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-04-13 17:41:25 +08:00
Bin Liu a5603f2ede
Merge pull request #1207 from imeoer/add-coreweave-adopter
add CoreWeave to the adopter list
2023-04-12 14:29:17 +08:00
Yan Song a9e5852d79 add CoreWeave to the adopter list
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-12 06:14:53 +00:00
Jiang Liu a8c6a2328d
Merge pull request #1205 from adamqqqplay/add-helm-docs
docs: add helm quickstart link to deploy Dragonfly+Nydus
2023-04-10 22:53:32 +08:00
Qinqi Qu 7dff3c39b9 docs: add helm quickstart link to deploy Dragonfly+Nydus
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-04-10 16:33:13 +08:00
dependabot[bot] 38e388bf53
Merge pull request #1197 from dragonflyoss/dependabot/go_modules/contrib/nydusify/github.com/docker/docker-23.0.3incompatible 2023-04-10 07:07:55 +00:00
Jiang Liu f767b66ce3
Merge pull request #1200 from taoohong/mushu/cc-feature
service: add coco feature in Cargo.toml
2023-04-10 14:27:44 +08:00
dependabot[bot] 28634faa35
build(deps): bump github.com/docker/docker in /contrib/nydusify
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 23.0.1+incompatible to 23.0.3+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v23.0.1...v23.0.3)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-10 06:13:41 +00:00
taohong 2b808caa30 service: add coco feature in Cargo.toml
Add feature coco to Cargo.toml, so that confidential containers
can apply this feature to use nydus to download images.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-04-10 11:06:32 +08:00
Jiang Liu 5a6551328b
Merge pull request #1203 from imeoer/upgrade-golangci-lint
action: upgrade golangci-lint to v1.51.2
2023-04-10 10:57:43 +08:00
Yan Song 1282914e77 action: upgrade golangci-lint to v1.51.2
To resolve the panic when run golangci-lint:

```
panic: load embedded ruleguard rules: rules/rules.go:13: can't load fmt
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-04-10 02:38:12 +00:00
Jiang Liu 902fd71819
Merge pull request #1193 from dragonflyoss/dependabot/cargo/contrib/nydus-backend-proxy/spin-0.9.8
build(deps): bump spin from 0.9.3 to 0.9.8 in /contrib/nydus-backend-proxy
2023-04-04 15:11:33 +08:00
dependabot[bot] 60b02c0335
build(deps): bump spin in /contrib/nydus-backend-proxy
Bumps [spin](https://github.com/mvdnes/spin-rs) from 0.9.3 to 0.9.8.
- [Release notes](https://github.com/mvdnes/spin-rs/releases)
- [Changelog](https://github.com/mvdnes/spin-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/mvdnes/spin-rs/commits)

---
updated-dependencies:
- dependency-name: spin
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-04 06:09:56 +00:00
imeoer 32cc7df139
Merge pull request #1153 from changweige/update-docs
doc: update descriptions about nydus-snapshotter
2023-04-03 10:05:43 +08:00
Changwei Ge 8cc04f15c2 doc: update descriptions about nydus-snapshotter
To match the latest nydus-snapshotter UI

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-04-03 09:29:52 +08:00
Jiang Liu 06d2292d9d
Merge pull request #1189 from jiangliu/macos
macos: fix a build failure
2023-03-31 17:17:15 +08:00
Jiang Liu ff21a87531 macos: fix a build failure
Fix a build failure for macos caused by block device related code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-31 16:40:23 +08:00
imeoer 86d36f704a
Merge pull request #1188 from adamqqqplay/upgrade-contrib-dependency
contrib: upgrade runc to v1.1.5
2023-03-31 15:38:46 +08:00
Qinqi Qu 2ecd25ea1d contrib: upgrade runc to v1.1.5
Runc v1.1.5 fixes three CVEs, we should upgrade it.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-31 14:23:11 +08:00
Jiang Liu b79e90bc27
Merge pull request #1176 from jiangliu/export-block-verity
Add verity digests for exported block device
2023-03-31 14:11:06 +08:00
imeoer a5297847c7
Merge pull request #1183 from adamqqqplay/refine-readme
docs: polish and simplify README.md
2023-03-31 11:48:42 +08:00
Qinqi Qu ba4d2f9c98 docs: polish and simplify README.md
1. Add FAQ, Website and Quickstart link.
2. Reorganize document structure.
3. Remove some redundant descriptions.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-31 11:41:04 +08:00
imeoer 140a0d7c9d
Merge pull request #1184 from jiangliu/v6-mapped-blkaddr
rafs: fix an incorrect mapped_blkaddr for multi-layer images
2023-03-31 10:58:59 +08:00
Jiang Liu cd3d2444c6
Merge pull request #1185 from dragonflyoss/dependabot/go_modules/contrib/ctr-remote/github.com/opencontainers/runc-1.1.5
build(deps): bump github.com/opencontainers/runc from 1.1.4 to 1.1.5 in /contrib/ctr-remote
2023-03-31 10:38:51 +08:00
Jiang Liu fb8db88944
Merge pull request #1181 from jiangliu/nydus-image-doc
nydus-image: update documentation docs/nydus-image.md
2023-03-30 23:39:28 +08:00
Jiang Liu 646f320665 nydus-image: update documentation docs/nydus-image.md
Update documentation docs/nydus-image.md.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-30 23:31:24 +08:00
Jiang Liu 6594edb719
Merge pull request #1186 from imeoer/fix-https-fallback
storage: fix http fallback handle
2023-03-30 23:24:58 +08:00
Yan Song 74677615d2 storage: fix http fallback handle
If we attempt to establish a TLS connection with the HTTP registry server,
we are likely to encounter these types of error:

- Error `wrong version number` from openssl library;
- Error `connection refused` from standard library;

Before this, only the first type of error was handled. This commit handles
the second type of error, which was reproduced by running a local insecure
harbor registry.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-30 08:41:05 +00:00
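A hypothetical helper (not the actual nydus-storage code) shows the two error classes mentioned above that should trigger a retry over plain HTTP:

```rust
// Sketch: decide whether an HTTPS request error warrants an HTTP fallback.
fn should_fallback_to_http(err: &dyn std::error::Error) -> bool {
    let msg = err.to_string().to_lowercase();
    // "wrong version number": openssl error when the registry speaks plain HTTP.
    // "connection refused": standard library error when the TLS port is not open.
    msg.contains("wrong version number") || msg.contains("connection refused")
}

fn main() {
    let err = std::io::Error::new(std::io::ErrorKind::ConnectionRefused, "connection refused");
    assert!(should_fallback_to_http(&err));
}
```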
dependabot[bot] a6d7a1ee89
build(deps): bump github.com/opencontainers/runc in /contrib/ctr-remote
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.4 to 1.1.5.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.5/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.4...v1.1.5)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-30 06:52:43 +00:00
Jiang Liu 4dd44255ff rafs: change alignment for v6 mapped_blkaddr from 2M to 512K
Change the alignment for the v6 mapped_blkaddr from 2M to 512K; 512K is enough
to support dm-verity.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-30 14:12:28 +08:00
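For illustration, the alignment change boils down to rounding each mapped block address up to a 512 KiB boundary (constant and helper names below are assumptions, not the rafs code):

```rust
// Sketch of 512K alignment for v6 mapped_blkaddr values.
const MAPPED_BLKADDR_ALIGNMENT: u64 = 512 * 1024;

fn align_up(value: u64, align: u64) -> u64 {
    (value + align - 1) / align * align
}

fn main() {
    assert_eq!(align_up(1, MAPPED_BLKADDR_ALIGNMENT), 512 * 1024);
    assert_eq!(align_up(512 * 1024, MAPPED_BLKADDR_ALIGNMENT), 512 * 1024);
}
```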
Jiang Liu b7f8af04f6 rafs: fix an incorrect mapped_blkaddr for multi-layer images
When generating a RAFS filesystem with multiple data blobs, the
mapped_blkaddr for the second and subsequent blobs is incorrect.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-30 13:51:52 +08:00
Jiang Liu 01e59a6149 nydus-image: generate dm-verity data for block device
Add `--verity` option to `nydus-image export --block` to generate
dm-verity data for block devices.

```
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# tar -cvf src.tar src
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# sha256sum src.tar
0e2dbe8b6e0f55f42c75034ed9dfc582ad0a94098cfc248c968522e7ef02e00a  src.tar
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# cp src.tar images/0e2dbe8b6e0f55f42c75034ed9dfc582ad0a94098cfc248c968522e7ef02e00a
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# target/debug/nydus-image create -t tar-tarfs -D images/ images/0e2dbe8b6e0f55f42c75034ed9dfc582ad0a94098cfc248c968522e7ef02e00a
[2023-03-27 16:32:00.068730 +08:00] INFO successfully built RAFS filesystem:
meta blob path: images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47
data blob size: 0x3c000
data blobs: ["0e2dbe8b6e0f55f42c75034ed9dfc582ad0a94098cfc248c968522e7ef02e00a"]
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# target/debug/nydus-image export --block --verity -D images/ -B images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47
[2023-03-27 23:49:14.450762 +08:00] INFO RAFS features: COMPRESSION_NONE | HASH_SHA256 | EXPLICIT_UID_GID | TARTFS_MODE
dm-verity options: --no-superblock --format=1 -s "" --hash=sha256 --data-block-size=4096 --hash-block-size=4096 --data-blocks 572 --hash-offset 2342912 ab7b417fc284c3b58a72044a996ec55e2c68a8b9dcf10bc469f4e640e5d98e6a
losetup -r /dev/loop1 images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47.disk
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# veritysetup open -v --no-superblock --format=1 -s "" --hash=sha256 --data-block-size=4096 --hash-block-size=4096 --data-blocks 572 --hash-offset 2342912 /dev/loop1 verity /dev/loop1 ab7b417fc284c3b58a72044a996ec55e2c68a8b9dcf10bc469f4e640e5d98e6a
[root@iZ0jl3vazmhc81dur3xnm3Z image-service]# veritysetup status verity
/dev/mapper/verity is active.
  type:        VERITY
  status:      verified
  hash type:   1
  data block:  4096
  hash block:  4096
  hash name:   sha256
  salt:        -
  data device: /dev/loop1
  data loop:   /root/image-service/images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47.disk
  size:        4576 sectors
  mode:        readonly
  hash device: /dev/loop1
  hash loop:   /root/image-service/images/90f0e6e7e0ff822d4acddf30c36ac77fe06f549fe58f89a818fa824b19f70d47.disk
  hash offset: 4576 sectors
  root hash:   ab7b417fc284c3b58a72044a996ec55e2c68a8b9dcf10bc469f4e640e5d98e6a
```

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 17:43:01 +08:00
Jiang Liu c6d2065c0c utils: introduce mechanism to generate Merkle tree for verity
Introduce mechanism to generate Merkle tree for verity.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 15:29:29 +08:00
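A minimal sketch of the Merkle tree idea follows, assuming the `sha2` crate and 4096-byte blocks; the real dm-verity layout also salts and zero-pads hash blocks, which this illustration omits:

```rust
use sha2::{Digest, Sha256};

const BLOCK_SIZE: usize = 4096;

/// Hash every 4 KiB data block, then repeatedly hash groups of child digests
/// until a single root hash remains.
fn merkle_root(data: &[u8]) -> Vec<u8> {
    let mut level: Vec<Vec<u8>> = data
        .chunks(BLOCK_SIZE)
        .map(|blk| Sha256::digest(blk).to_vec())
        .collect();
    while level.len() > 1 {
        level = level
            .chunks(BLOCK_SIZE / 32) // number of 32-byte digests per hash block
            .map(|group| {
                let mut h = Sha256::new();
                for d in group {
                    h.update(d);
                }
                h.finalize().to_vec()
            })
            .collect();
    }
    level.pop().unwrap_or_default()
}

fn main() {
    let data = vec![0u8; 3 * BLOCK_SIZE + 100];
    println!("root hash: {:02x?}", merkle_root(&data));
}
```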
imeoer 819ccafda5
Merge pull request #1159 from jiangliu/tarfs
Add `export` subcommand to `nydus-image`
2023-03-29 15:16:15 +08:00
imeoer d71957392f
Merge pull request #1177 from jiangliu/is-present
nydus: fix a possible panic caused by SubCmdArgs::is_present()
2023-03-29 10:03:43 +08:00
imeoer 35416b0697
Merge pull request #1178 from jiangliu/mapped-blkaddr
nydus-image: print mapped block address when inspecting blob info
2023-03-29 09:58:58 +08:00
Jiang Liu fc3979e46a nydus-image: enable multi-threading when exporting block images
Enable multi-threading when exporting block images, to reduce exporting
time.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 09:53:02 +08:00
Jiang Liu d5ef141219 nydus-image: introduce new subcommand export
Introduce new subcommand `export` to nydus-image, which will be used
to export RAFS filesystems as raw block device images or tar files.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 09:53:01 +08:00
Jiang Liu 0917afb411 nydus-image: syntax changes for commandline option preparation
Syntax only changes for commandline option preparation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-29 09:53:01 +08:00
Jiang Liu 183625a513 nydus-image: print mapped block address when inspecting blob info
Print mapped block address when inspecting blob info.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-28 23:52:10 +08:00
Jiang Liu fc814a2991 nydus: fix a possible panic caused by SubCmdArgs::is_present()
Fix a possible panic caused by SubCmdArgs::is_present().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-28 13:50:29 +08:00
Jiang Liu 667189b7d8
Merge pull request #1175 from jiangliu/deny
deny: fix cargo deny warnings related to openssl
2023-03-27 16:48:49 +08:00
Jiang Liu 7e3baeeb1e
Merge pull request #1121 from taoohong/master
service: Add a README.md to nydus-service
2023-03-26 23:31:41 +08:00
Jiang Liu f2dd8e63a7 deny: fix cargo deny warnings related to openssl
Fix cargo deny warnings related to openssl.

https://github.com/dragonflyoss/image-service/actions/runs/4522515576/jobs/7965040490

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-26 23:23:41 +08:00
Tao Hong 14f45afc5d
Merge branch 'dragonflyoss:master' into master 2023-03-24 10:14:17 +08:00
imeoer 14c709d080
Merge pull request #1169 from jiangliu/service-macos-clippy
service: clean clippy warnings for macos
2023-03-23 18:23:44 +08:00
Jiang Liu cd4cb44f39
Merge pull request #1173 from ccx1024cc/morgan/fix_ci
fix: master branch not run ci
2023-03-22 23:59:02 +08:00
泰友 6ecef3fe37 fix: master branch not run ci
Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-22 19:09:50 +08:00
Jiang Liu b9b4f23816
Merge pull request #1172 from ccx1024cc/morgan/trigger_ci
fix: stable/XXX branch not run ci
2023-03-22 16:58:32 +08:00
泰友 7eda36afe2 fix: ci: actions are not triggered for stable/v2.2
Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-22 15:01:35 +08:00
Jiang Liu 527ce73a78
Merge pull request #1170 from dragonflyoss/dependabot/go_modules/contrib/nydusify/google.golang.org/protobuf-1.29.1
build(deps): bump google.golang.org/protobuf from 1.29.0 to 1.29.1 in /contrib/nydusify
2023-03-22 13:40:14 +08:00
Jiang Liu c6e5bd8e75
Merge pull request #1168 from jiangliu/tarfs-merge
rafs: fix incorrect blob id in merged TARFS
2023-03-22 12:30:56 +08:00
Jiang Liu 9904f6d1b2 service: clean clippy warnings for macos
Clean clippy warnings for macos.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-22 12:26:40 +08:00
dependabot[bot] 2dfee1cc5f
build(deps): bump google.golang.org/protobuf in /contrib/nydusify
Bumps [google.golang.org/protobuf](https://github.com/protocolbuffers/protobuf-go) from 1.29.0 to 1.29.1.
- [Release notes](https://github.com/protocolbuffers/protobuf-go/releases)
- [Changelog](https://github.com/protocolbuffers/protobuf-go/blob/master/release.bash)
- [Commits](https://github.com/protocolbuffers/protobuf-go/compare/v1.29.0...v1.29.1)

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-22 02:22:26 +00:00
Jiang Liu e6c7871aca rafs: fix incorrect blob id in merged TARFS
When merging multiple RAFS filesystems in TARFS mode into one, the
generated data blob id is incorrect: it is actually the meta blob id
instead of the data blob id.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-21 21:45:37 +08:00
imeoer c202e918d4
Merge pull request #1167 from jiangliu/service-macos
service: fix compilation failures on macos
2023-03-21 17:49:05 +08:00
Jiang Liu b94307c86c service: fix compilation failures on macos
Fix compilation failures on macos caused by the nydus-service crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-21 16:56:44 +08:00
imeoer 0cfd7f6023
Merge pull request #1166 from jiangliu/inode-wrapper-unimplemented
rafs: get rid of several unimplemented()
2023-03-21 16:34:38 +08:00
Jiang Liu 818fe47243 rafs: get rid of several unimplemented()
The nydus-image check for v5 uses some unimplemented methods of
InodeWrapper, which causes panics at runtime.

Fixes: https://github.com/dragonflyoss/image-service/issues/1160

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-21 15:45:22 +08:00
imeoer 0c1fee409a
Merge pull request #1165 from jiangliu/fix-prefetch
rafs: fix an assertion failure in prefetch list generation
2023-03-21 15:04:33 +08:00
Jiang Liu 49fc71e1e1 rafs: fix an assertion failure in prefetch list generation
Fix an assertion failure in prefetch list generation.

Fixes: https://github.com/dragonflyoss/image-service/issues/1154

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-21 14:50:29 +08:00
Jiang Liu 5160def413
Merge pull request #1164 from imeoer/nydusify-fix-workdir
nydusify: clean up work directory when conversion finishes
2023-03-21 12:24:40 +08:00
Jiang Liu 82f3ee97b6
Merge pull request #1163 from imeoer/nydusify-fix-oci-handle
nydusify: fix oci media type handle
2023-03-21 12:23:35 +08:00
Yan Song 5708cb2e56 nydusify: clean up work directory when conversion finishes
Remove the work directory to clean up the temporary image
blob data after the conversion is finished.

We should only clean up when the work directory did not exist
before; otherwise we may delete user data by mistake.

Fix: https://github.com/dragonflyoss/image-service/issues/1162

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-21 03:58:39 +00:00
Yan Song dac61cc9f6 nydusify: fix oci media type handle
Bump nydus snapshotter v0.7.3 and bring some fixups:

1. If the original image is already an OCI type, we should forcibly set the bootstrap layer to the OCI type.
2. We need to append history item for bootstrap layer, to ensure the history consistency, see: e5d5810851/manifest/schema1/config_builder.go (L136)

Related PR: https://github.com/containerd/nydus-snapshotter/pull/427, https://github.com/goharbor/acceleration-service/pull/119

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-21 03:45:44 +00:00
Jiang Liu 6ab15d85bf
Merge pull request #1161 from yqleng1987/fix-compile-snapshotter
ci test: fix bug of compiling nydus-snapshotter
2023-03-20 23:35:08 +08:00
Yiqun Leng cd9f1278b9 ci test: fix bug of compiling nydus-snapshotter
Since developers changed "make clear" to "make clean" in the nydus-snapshotter
Makefile, it also needs to be updated in the CI test.
Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-03-20 23:04:48 +08:00
Jiang Liu bca47e3dd7
Merge pull request #1158 from jiangliu/fuse-tarfs
Enhance FUSE implementation to support RAFS in TARFS mode
2023-03-20 21:27:53 +08:00
Jiang Liu 54723319d5
Merge pull request #1155 from imeoer/disable-validation-by-default
rafs: only enable digest validate based on configuration
2023-03-20 13:53:58 +08:00
Jiang Liu 81592f60df rafs: enhance RAFS FUSE implementation to support TARFS
Enhance the RAFS FUSE implementation to support RAFS filesystems in
TARFS mode.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-20 13:50:50 +08:00
Jiang Liu d8d67a841d rafs: rename TarfsChunkInfo to PlainChunkInfoV6
Rename TarfsChunkInfo to PlainChunkInfoV6, so it can be used for
EROFS plain inode later.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-20 13:50:40 +08:00
Yan Song ac2d786dde rafs: only enable digest validate based on configuration
We found that when using the "nydus-image check --bootstrap /path/to/bootstrap"
command, it takes about 15s to check a 35MB bootstrap file (rafs v5) due to the
default digest validation. This is very slow, and disabling it reduces the time to 3s.

Digest validation should now only be enabled when requested by the runtime configuration.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-20 03:06:18 +00:00
imeoer 7ea753dcba
Merge pull request #1151 from jiangliu/tarfs-merge
Enhance `nydus-image merge` to support tarfs
2023-03-20 11:04:26 +08:00
imeoer 65127afe75
Merge pull request #1156 from jiangliu/rafs-v6-inode
rafs: define dedicated RafsV6Inode to reduce memory consumption
2023-03-20 11:00:31 +08:00
Jiang Liu d2fa7d52df rafs: define dedicated RafsV6Inode to reduce memory consumption
There are several unused fields in RafsV5Inode when used for v6,
so define dedicated RafsV6Inode to reduce memory consumption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-18 09:56:01 +08:00
Jiang Liu 93bf61bc96 rafs: minor improvement to builder/merge
Minor improvement to builder/merge to avoid building unnecessary
chunk dictionary.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-17 15:00:26 +08:00
Jiang Liu 893ab021c9 rafs: avoid unnecessary memory copy by using VecDeque
Vec::insert(0, node) will cause unnecessary memory copy, so use
VecDeque instead.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-17 14:46:36 +08:00
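A minimal illustration of the change: Vec::insert(0, x) shifts every existing element, while VecDeque::push_front is constant time:

```rust
use std::collections::VecDeque;

fn main() {
    let mut nodes: VecDeque<&str> = VecDeque::new();
    nodes.push_back("child");
    // Previously the builder effectively did `vec.insert(0, "parent")`,
    // copying all following elements on every insertion.
    nodes.push_front("parent");
    assert_eq!(nodes.front(), Some(&"parent"));
}
```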
Jiang Liu 578fe72549 rafs: enhance builder/merger to support RAFS in TARFS mode
Enhance builder/merger to support RAFS in TARFS mode, so we can merge
multiple RAFS filesystems in TARFS mode into one.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-17 14:28:35 +08:00
Jiang Liu 2a55d3ef88 rafs: move image merger into rafs/builder
Move image merger into rafs/builder.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-16 17:31:38 +08:00
imeoer c9d9b435ef
Merge pull request #1147 from jiangliu/tarfs
Introduce new tarfs mode to Nydus
2023-03-16 17:26:35 +08:00
Jiang Liu 3891a51465 service: enhance block device to support RAFS filesystem in TARFS mode
Enhance block device to support block size of 512, in addition to 4096,
so we can expose RAFS filesystems in TARFS mode as block device.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-16 15:17:13 +08:00
Jiang Liu 6522750a67 storage: enhance filecache to support RAFS filesystem in TARFS mode
A RAFS filesystem in TARFS mode directly uses tar files/streams as data
blobs. A RAFS filesystem in TARFS mode contains a RAFS meta blob and
one or more tar files. There's no blob meta, such as the compression info
array, chunk digests, or TOC, in the tar files, so there's no support
for lazy loading, chunk dedup, chunk validation, etc.

So we assume that the snapshotter will prepare the meta blob and tar files
before mounting the RAFS filesystem. Enhance the filecache module to
support tar files without lazy loading, chunk dedup, or chunk
validation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-16 15:17:12 +08:00
Jiang Liu 45d0f8e6cf rafs: enhance builder to support TARFS mode
When using containerd overlayfs snapshotter to handle OCIv1 images,
it works as below:
- download compressed blobs from registry
- uncompress compressed blobs into tar streams/files
- unpack tar streams to directories on local filesystem
- mount multiple directories into a container rootfs by overlayfs

Here we introduce a new work mode to nydus, called TARFS, which
works as below:
- download compressed blobs from registry
- uncompress compressed blobs into tar streams/files
- build RAFS/EROFS meta blob from tar streams/files
- optionally merge multiple RAFS/EROFS meta blobs into one
- mount the generated RAFS filesystem by mount.erofs

Introducing TARFS mode to RAFS helps avoid generating a bunch
of small files via `untar`, which speeds up image preparation and garbage
collection. It may also help reduce overlayfs levels by merging
multiple image layers into one final RAFS filesystem.

The TARFS mode of the RAFS filesystem has several special behaviors, compared
to the current RAFS, as below:
1) Instead of generating a RAFS data blob, it directly uses tar files as
   RAFS data blobs.
2) Tar files are uncompressed, so data blobs for TARFS mode are
   uncompressed.
3) Tar files will also be directly used as local cache files.
4) There's no chunk compression info, chunk digest, TOC etc. generated
   for TARFS mode.
5) Block size is 512 bytes instead of 4K, because tar files are
   512-byte aligned.

Now we have three ways to make use of OCIv1 images:
Mode            		TAR-TARFS		TARGZ-REF		TARGZ-RAFS
Generate meta blob?		Y			Y			Y
Generate chunk data?		N			N			Y
Generate blob.meta?		N			Y			Y
Generate data blobs?		N			Y(for blob.meta)	Y
Data in data blobs?		Not generated		blob.meta		chunk data & blob.meta
Chunk alignment?		512			4096			4096
Chunk dedup?			N			Y			Y
Lazy loading?			N			Y			Y

Note, RAFS in TARFS mode is designed to be used locally only. In other
words, it's a way to implement a snapshotter, instead of an image format
for sharing.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-16 15:17:09 +08:00
Jiang Liu d203985ba9 utils: add option to enable/disable hash calc for BufReaderInfo
Add method to enable/disable hash value computation for BufReaderInfo.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-15 17:51:36 +08:00
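A hedged sketch of the idea, assuming the `sha2` crate (names are illustrative, not the nydus-utils API): a reader wrapper only pays the digest cost when hashing is enabled, for example it can be switched off for TARFS builds where chunk digests are not needed:

```rust
use sha2::{Digest, Sha256};
use std::io::{Read, Result};

struct HashingReader<R: Read> {
    inner: R,
    hasher: Sha256,
    hash_enabled: bool,
}

impl<R: Read> Read for HashingReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
        let n = self.inner.read(buf)?;
        if self.hash_enabled {
            // Feed the bytes we just passed through into the digest.
            self.hasher.update(&buf[..n]);
        }
        Ok(n)
    }
}

fn main() {
    let mut reader = HashingReader {
        inner: &b"tar stream bytes"[..],
        hasher: Sha256::new(),
        hash_enabled: true,
    };
    let mut buf = Vec::new();
    reader.read_to_end(&mut buf).unwrap();
    println!("digest: {:02x?}", reader.hasher.finalize());
}
```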
Jiang Liu bf5c24617d rafs: introduce fake TarfsChunkInfo to provide ChunkInfo for TARFS
Introduce fake TarfsChunkInfo to provide ChunkInfo for TARFS.
The TarfsChunkInfo acts as follow:
1) all TARFS chunks are uncompressed, because the tar file is in
   plaintext.
2) chunk digests of TarfsChunkInfo are all zero, so they are fake.

Also add constants and helpers to support 512-bytes block.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-15 17:51:35 +08:00
Jiang Liu 6638eee247
Merge pull request #1148 from jiangliu/utils-crypt
utils: introduce methods and structures for encryption and decryption
2023-03-15 17:26:06 +08:00
Jiang Liu eb042ca2b1
Merge pull request #1146 from imeoer/nydusify-fix-pull
nydusify: fix pulling all platforms of source image
2023-03-15 14:52:12 +08:00
Jiang Liu 5c19dfb8b1
Merge pull request #1150 from ccx1024cc/morgan/upmaster
rafs: fix amplify can not be skipped.
2023-03-15 11:46:54 +08:00
Yan Song 8458bcc7d2 nydusify: forcibly enable `--oci` option when `--oci-ref` is enabled
We need to forcibly enable the `--oci` option to allow appending the
related annotation for a zran image, otherwise an error is thrown:

```
merge nydus layers: invalid label containerd.io/snapshot/nydus-ref=: invalid checksum digest format
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-15 03:46:51 +00:00
Yan Song 6e4ceee291 nydusify: fix unnecessary golang-lint error
```
golangci-lint run
Error: pkg/converter/provider/ported.go:47:64: SA1019: rCtx.ConvertSchema1 is deprecated: use Schema 2 or OCI images. (staticcheck)
	if desc.MediaType == images.MediaTypeDockerSchema1Manifest && rCtx.ConvertSchema1 {
	                                                              ^
Error: pkg/converter/provider/ported.go:20:2: SA1019: "github.com/containerd/containerd/remotes/docker/schema1" is deprecated: use images formatted in Docker Image Manifest v2, Schema 2, or OCI Image Spec v1. (staticcheck)
	"github.com/containerd/containerd/remotes/docker/schema1"
	^
```

Disable the check; it's unnecessary to check the ported code.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-15 02:55:16 +00:00
Yan Song 851fc6de29 nydusify: fix `--oci` option for convert subcommand
The `--oci` option was not working; its meaning was reversed before.
This patch fixes it and keeps compatibility with the old option
`--docker-v2-format`.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-15 02:15:28 +00:00
Yan Song 0288c9d44f nydusify: fix pulling all platforms of source image
We should only handle the specific platform for pulling via
`platforms.MatchComparer`, otherwise nydusify will pull
the layer data of all platforms for a source image.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-15 02:13:10 +00:00
imeoer a61debc97a
Merge pull request #1149 from jongwu/fuse-back
upgrade fuse-backend-rs to 0.10.2
2023-03-15 10:11:35 +08:00
泰友 0fefbb4898 rafs: fix amplify can not be skipped
``` json
{
    "device":{
        "backend":{
            "type":"registry",
            "config":{
                "readahead":false,
                "host":"dockerhub.kubekey.local",
                "repo":"dfns/alpine",
                "auth":"YWRtaw46SGFyYm9VMTIZNDU=",
                "scheme":"https",
                "skip_verify":true,
                "proxy":{
                    "fallback":false
                }
            }
        },
        "cache":{
            "type":"",
            "config":{
                "work_dir":"/var/lib/containerd-nydus/cache",
                "disable_indexed_map":false
            }
        }
    },
    "mode":"direct",
    "digest_validate":false,
    "jostats_files":true,
    "enable_xattr":true,
    "access_pattern":true,
    "latest_read_files":true,
    "batch_size":0,
    "amplify_io":0,
    "fs_prefetch":{
        "enable":false,
        "prefetch_all":false,
        "threads_count":10,
        "merging_size":131072,
        "bandwidth_rate":1048576,
        "batch_size":0,
        "amplify_io":0
    }
}
```
`{.fs_prefetch.merging_size}` is used, instead of `{.amplify_io}`

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-03-15 10:00:45 +08:00
Jianyong Wu c4a97f16fc upgrade fuse-backend-rs to 0.10.2
There is a bug in fuse-backend-rs 0.10.1 which causes nydusd to quit with a segmentation fault.
Luckily, it has been fixed in 0.10.2. See [1].

[1] 2f2b242ed2

Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-03-15 09:35:19 +08:00
Jiang Liu a6cecb980a utils: introduce methods and structures for encryption and decryption
Introduce methods and structures for encryption and decryption, and
implement `aes128xts` and `aes256xts`.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-13 22:19:10 +08:00
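As a hedged sketch of the algorithm selection (names are illustrative, not the actual nydus-utils API): AES-XTS uses two keys, so the total key length doubles compared to the plain key size:

```rust
#[derive(Clone, Copy, Debug)]
enum CipherAlgo {
    None,
    Aes128Xts,
    Aes256Xts,
}

impl CipherAlgo {
    fn key_len(&self) -> usize {
        match self {
            CipherAlgo::None => 0,
            CipherAlgo::Aes128Xts => 32, // 2 x 16-byte keys
            CipherAlgo::Aes256Xts => 64, // 2 x 32-byte keys
        }
    }
}

fn main() {
    let algo = CipherAlgo::Aes256Xts;
    println!("{:?} needs a {}-byte key", algo, algo.key_len());
}
```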
imeoer 3213eb718b
Merge pull request #1144 from jiangliu/prepare-tarfs
Refine builder and rafs to prepare for tarfs
2023-03-13 11:04:50 +08:00
Jiang Liu bd837d0086 rafs: only invoke v5 related code for v5 builds
Only invoke v5 related code for v5 builds, and also enforce strict
validation when creating missing directories for tar based builds.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-13 10:53:00 +08:00
Jiang Liu c00c5784cc rafs: replace with_context() by context() when possible
Replace with_context() with context() when possible to avoid an
unnecessary closure call.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-12 21:00:15 +08:00
Jiang Liu 3f336139a5
Merge pull request #1137 from imeoer/converter-parent-bootstrap
builder: support `--parent-bootstrap` for merge
2023-03-09 17:10:40 +08:00
Yan Song a99a41fcdb rafs: do not fix blob id for old bootstrap
In fact, there is no way to tell whether a separate old bootstrap file
was inlined into the blob. For example, for an old merged bootstrap,
we can't set the blob id it references as the filename, otherwise
it will break the blob table when loading rafs.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-09 07:01:22 +00:00
Yan Song bee62d6a9f smoke: add `--parent-bootstrap` for merge test
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-09 07:01:22 +00:00
Yan Song 2423f4366c builder: support `--parent-bootstrap` for merge
This option allows merging multiple upper-layer bootstraps with
the bootstrap of a parent image, so that we can implement the container
commit operation for nydus images.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-09 07:01:20 +00:00
Jiang Liu d0b07e1c13 rafs: rename EROFS_BLOCK_SIZE to EROFS_BLOCK_SIZE_4096
Rename EROFS_BLOCK_SIZE to EROFS_BLOCK_SIZE_4096, we are going to
support EROFS_BLOCK_SIZE_512.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-09 13:54:11 +08:00
Jiang Liu c2e08c46ba
Merge pull request #1143 from jiangliu/api-fix
api: fix a build error
2023-03-09 00:33:30 +08:00
Jiang Liu f82803d23e api: fix a build error
Fix a build error caused by the missing `warn!` macro.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-08 18:37:58 +08:00
Jiang Liu 1344b9c108
Merge pull request #1141 from jiangliu/builder
Move RAFS filesystem builder into nydus-rafs crate
2023-03-08 17:55:47 +08:00
imeoer 3ef84892a6
Merge pull request #1139 from jiangliu/block-nbd
Export Nydus images as block devices by using NBD
2023-03-07 11:21:14 +08:00
Jiang Liu 2b3fcc0244 rafs: refine prefetch and chunk dictionary in builder
Refine prefetch and chunk dictionary in builder for maintenance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 21:29:30 +08:00
Jiang Liu 7a226ce9f9 rafs: refine builder Bootstrap implementation
Refine builder Bootstrap implementation for maintenance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 20:20:49 +08:00
Jiang Liu 4372f96cfb rafs: refine RAFS v6 builder implementation
Refine RAFS v6 builder implementation by:
- introduce helper Node::v6_dump_inode() to reduce duplicated code
- introduce helper BuildContext::v6_block_addr()

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:47:22 +08:00
Jiang Liu ab344d69c1 rafs: refine builder/Node related code
Refine builder/Node related code for maintenance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:47:20 +08:00
Jiang Liu 009625d19f rafs: move RAFSv6 builder related code into a dedicated file
Move RAFSv6 builder related code into a dedicated file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:47:10 +08:00
Jiang Liu fa574fb0c3 rafs: move overlay related code into builder/core/overlay.rs
Move overlay related code into builder/core/overlay.rs, for better
maintenance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:38:40 +08:00
Jiang Liu 4b90c87c58 rafs: refine Node structure to reduce memory consumption and copy
Organize immutable fields of Node into a new struct NodeInfo, to
reduce memory consumption and copy operations.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:38:39 +08:00
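A minimal sketch of the split described above (field names are assumptions): the immutable metadata goes behind an Arc so cloning a Node no longer copies it:

```rust
use std::sync::Arc;

#[derive(Clone)]
struct NodeInfo {
    source_path: String,
    target_path: String,
}

#[derive(Clone)]
struct Node {
    info: Arc<NodeInfo>, // shared and cheap to clone
    inode_index: u64,    // per-node mutable state stays inline
}

fn main() {
    let info = Arc::new(NodeInfo {
        source_path: "/build/rootfs/etc/passwd".into(),
        target_path: "/etc/passwd".into(),
    });
    let a = Node { info: info.clone(), inode_index: 1 };
    let b = Node { info, inode_index: 2 }; // clones the Arc, not the paths
    assert_eq!(a.info.target_path, b.info.target_path);
    let _ = (&a.info.source_path, a.inode_index, b.inode_index);
}
```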
Jiang Liu 3fc59da93c rafs: move builder from nydus-image into rafs
Move builder from nydus-image into rafs, so it can be reused.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:38:37 +08:00
Jiang Liu b971551a14 rafs: optimize InodeWrapper to reduce memory consumption
Optimize InodeWrapper to reduce memory consumption by only
instantiating the inode object when needed.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:36:40 +08:00
Jiang Liu dc54beea4d rafs: optimize ChunkWrapper to reduce memory consumption
Optimize ChunkWrapper to reduce memory consumption by only
instantiating the chunk info object when needed.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-06 15:36:39 +08:00
imeoer 6b1998f927
Merge pull request #1135 from jiangliu/nydus-image-simplify
Minor improvements to nydus-image
2023-03-06 14:33:09 +08:00
imeoer 5946738cfe
Merge pull request #1140 from changweige/add-optimizer-doc
readme: add a very brief section to introduce image optimizer
2023-03-06 14:28:46 +08:00
Changwei Ge 0218ff172d readme: add a very brief section to introduce image optimizer
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-03-06 11:47:34 +08:00
Jiang Liu f9b051ed40 api: add method to load BlobCacheConfigV2 from file
Add method to load BlobCacheConfigV2 from configuration file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 01:39:25 +08:00
Jiang Liu eceeefd74c nydusd: add subcommand nbd to export nydus images as block devices
Add subcommand nbd to export nydus images as block devices through
NBD.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:10 +08:00
Jiang Liu 1e9b2f3995 service: add nbd service to export RAFSv6 images as block devices
Implement NbdService which cooperates with the Linux nbd driver to
expose RAFSv6 images as block devices. To simplify the implementation,
the NbdService will directly talk with the nbd driver, instead of
following a typical nbd-server and nbd-client architecture.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:09 +08:00
Jiang Liu 10a2fef0cb service: compose a block device from a RAFSv6 image
Compose a block device from a RAFSv6 image, so all metadata/data
content can be accessed by block address. The EROFS fs driver can be
used to directly mount the block device.

It depends on the blob_cache subsystem and can be used to implement
nbd/ublk/virtio-blk/vhost-user-blk servers.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:05 +08:00
Jiang Liu e4dc7f8764 service: add common code to compose a block device from a RAFSv6 image
Add common code to compose a block device from a RAFS image,
which can then be exposed through nbd/ublk/virtio-blk/vhost-user-blk
etc.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:04 +08:00
Jiang Liu c8b13ebef5 rafs: load mapped-blkaddr for each data blob
Load the mapped_blkaddr field for each data blob; later it will
be used to compose a RAFS v6 image into a block device.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:04 +08:00
Jiang Liu 748c12e578 rafs: refine v6 related code
Refine v6 related code and add two fields to meta info.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-05 00:35:02 +08:00
Jiang Liu b217101701 nydus-image: minor improvement to nydus-image
Minor improvement to nydus-image:
- better handling of `chunk-size` argument
- avoid assert at runtime by returning error code

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-04 10:12:24 +08:00
Jiang Liu dd68b191b6 nydus-image: simplify ArtifactWriter::new() to remove the `fifo` arg
Simplify ArtifactWriter::new() to remove the argument `fifo`. We can
detect whether a file is a FIFO or not, so there is no need to pass a flag for it.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-04 10:01:58 +08:00
imeoer 04e4349cc2
Merge pull request #1125 from dragonflyoss/dev/v2.3
Prepare for exposing nydus images as block devices
2023-03-03 10:19:56 +08:00
imeoer bca1b8a072
Merge pull request #1130 from jiangliu/fix-get-compressed-size
nydus-image: fix an underflow issue in get_compressed_size()
2023-03-03 10:08:32 +08:00
Jiang Liu 8a4bc8ba26 nydus-image: fix an underflow issue in get_compressed_size()
Fix an underflow issue in get_compressed_size() by skipping the generation
of useless Tar/TOC headers.

Fixes: https://github.com/dragonflyoss/image-service/issues/1129

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-03 09:57:11 +08:00
imeoer 88e3fe0aad
Merge pull request #1127 from jiangliu/nydus-exclude
nydus: exclude some components when publishing crate
2023-03-03 09:47:58 +08:00
Jiang Liu 2e3acd1aa0 nydus: exclude some components when publishing crate
Exclude some components when publishing the crate, otherwise the package
gets too big and can't be published to crates.io due to the maximum
size (10MB) limitation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-02 14:11:21 +08:00
Jiang Liu 1a1f1ca801
Merge pull request #1123 from adamqqqplay/update-tempfile
deps: bump tempfile version to 3.4.0 to fix some security vulnerabilities
2023-03-01 23:39:26 +08:00
imeoer 449f37816d
Merge pull request #1126 from jiangliu/api-v0.2.2
api: prepare for publishing v0.2.2
2023-03-01 21:56:23 +08:00
Jiang Liu 0c1e5724b7 api: prepare for publishing v0.2.2
Prepare for publishing nydus-api v0.2.2.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-03-01 21:00:45 +08:00
Jiang Liu a38f6b8d62
Merge pull request #1122 from imeoer/prepare-v2.2-release
dep: prepare for v2.2 release
2023-03-01 16:46:28 +08:00
Yan Song 6bea4511d3 dep: prepare for v2.2 release
Bump nydus-snapshotter v0.6.1 and acceleration-service v0.2.0.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-03-01 08:32:46 +00:00
taohong 3a09d0773f service: add README for nydus-service
Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-03-01 16:09:32 +08:00
Qinqi Qu 766dbd43af deps: bump tempfile version to 3.4.0
Update tempfile related crates to fix https://github.com/advisories/GHSA-mc8h-8q98-g5hr

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-01 16:05:39 +08:00
imeoer 1392eebf66
Merge pull request #1119 from adamqqqplay/moby-integration
docs: Add Docker(moby) integration
2023-03-01 11:48:40 +08:00
Jiang Liu f7a10af027
Merge pull request #1120 from dragonflyoss/docs_update
docs: add zran-indexed demo and ADOPTERS.md
2023-03-01 11:37:59 +08:00
Gao Xiang 4d42ff123d docs: add ADOPTERS.md
Mostly from https://nydus.dev.

Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-03-01 11:30:06 +08:00
Gao Xiang a5b483b62c docs: add ZRAN-indexed OCI image recording
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-03-01 11:06:05 +08:00
Qinqi Qu 480679c19d docs: Add Docker(moby) integration
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-03-01 09:58:50 +08:00
imeoer 377ff28687
Merge pull request #1118 from ccx1024cc/morgan/smoke
nydusd: remove deprecate params
2023-02-28 17:37:50 +08:00
泰友 f8b5d0c92e nydusd: remove `api_notifier`, a deprecated param
It's designed for graceful exit before `ApiServiceController`. Now it's
deprecated, so remove it.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-02-28 16:45:51 +08:00
Jiang Liu a2aeade421
Merge pull request #1116 from imeoer/nydusify-allow-pigz
nydusify: enable pigz by default
2023-02-28 13:49:54 +08:00
Yan Song 5fbc3a14dd nydusify: enable pigz by default
We should use pigz to support parallel gzip decompression, so as to
improve the conversion speed when unpacking gzip layers of the source image.

We still allow users to specify the env `CONTAINERD_DISABLE_PIGZ=1` to
disable the feature when encountering any decompression error.

See 33c0eafb17/archive/compression/compression.go (L261)

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-28 03:57:49 +00:00
Changwei Ge 780b12f4d1
Merge pull request #1115 from imeoer/daily-image-arm
action: convert arm images in top image test
2023-02-28 10:34:32 +08:00
Yan Song f2535eb136 action: convert arm images in top image test
So that users can run nydus arm image for testing.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-27 14:53:48 +00:00
Jiang Liu 1d37aaadea
Merge pull request #1114 from dragonflyoss/dependabot/go_modules/contrib/nydus-overlayfs/golang.org/x/sys-0.1.0
build(deps): bump golang.org/x/sys from 0.0.0-20211007075335-d3039528d8ac to 0.1.0 in /contrib/nydus-overlayfs
2023-02-27 20:30:33 +08:00
dependabot[bot] 57da328e89
build(deps): bump golang.org/x/sys in /contrib/nydus-overlayfs
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.0.0-20211007075335-d3039528d8ac to 0.1.0.
- [Release notes](https://github.com/golang/sys/releases)
- [Commits](https://github.com/golang/sys/commits/v0.1.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-27 12:30:26 +00:00
Jiang Liu 2e2def9248
Merge pull request #1113 from dragonflyoss/dependabot/go_modules/contrib/ctr-remote/golang.org/x/net-0.7.0
build(deps): bump golang.org/x/net from 0.6.0 to 0.7.0 in /contrib/ctr-remote
2023-02-27 20:29:50 +08:00
Jiang Liu d7967b72cd
Merge pull request #1112 from dragonflyoss/dependabot/go_modules/contrib/nydusify/golang.org/x/net-0.7.0
build(deps): bump golang.org/x/net from 0.5.0 to 0.7.0 in /contrib/nydusify
2023-02-27 20:29:20 +08:00
dependabot[bot] b225852816
build(deps): bump golang.org/x/net in /contrib/ctr-remote
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.6.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/compare/v0.6.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-27 08:26:05 +00:00
dependabot[bot] ef74cf1303
build(deps): bump golang.org/x/net in /contrib/nydusify
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.5.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/compare/v0.5.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-27 08:25:42 +00:00
Jiang Liu 02d1df36e7
Merge pull request #1108 from jiangliu/mapped-blkaddr
Correctly generate mapped-blkaddr for RAFS devslot array
2023-02-24 22:40:13 +08:00
Jiang Liu 0df89a33eb
Merge pull request #1109 from yqleng1987/add-case-compile_linux_in_container_with_rafs
add a new test case: run_container_with_rafs_and_compile_linux.bats
2023-02-24 22:39:52 +08:00
Yiqun Leng d072deff25 add a new test case: run_container_with_rafs_and_compile_linux.bats
Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-02-24 20:22:17 +08:00
Jiang Liu 0291d6e486 api: define helpers to detect cache type
Define helper functions to detect cache types.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 17:22:40 +08:00
Jiang Liu 753890bb04 nydus-image: correctly set mapped-blkaddr for devslot
Correctly set mapped-blkaddr for RAFS v6 device slots.
It will be used to represent a Nydus image as a block device.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 17:22:40 +08:00
Jiang Liu e2fe47d2ad nydus-image: refine dump_v6_bootstrap()
Refine dump_v6_bootstrap() to prepare for fixing a bug.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 17:22:39 +08:00
Jiang Liu 73b57c9f25 nydus-image: only support maximum 255 layers for RAFS v6
Only support a maximum of 255 layers for RAFS v6, because it can only
encode 255 blob indices.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 17:22:38 +08:00
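An illustrative check for that limit (constant and function names are assumptions): builds with more data blobs than the format can index must be rejected up front:

```rust
const RAFS_V6_MAX_BLOBS: usize = 255;

fn validate_blob_count(count: usize) -> Result<(), String> {
    if count > RAFS_V6_MAX_BLOBS {
        Err(format!(
            "too many data blobs ({count}), RAFS v6 supports at most {RAFS_V6_MAX_BLOBS}"
        ))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(validate_blob_count(255).is_ok());
    assert!(validate_blob_count(256).is_err());
}
```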
imeoer 01556c0d03
Merge pull request #1107 from jiangliu/service-macos-unused.patch
service: fix an unused variable warning on macos
2023-02-24 16:41:13 +08:00
Jiang Liu 7d699179a8 service: fix an unused variable warning on macos
Fix an unused variable warning on macos.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 16:29:31 +08:00
imeoer b26242e376
Merge pull request #1105 from jiangliu/v2.2-rc3-crates
publish: update crate version for nydus 2.2 rc3
2023-02-24 14:22:46 +08:00
imeoer 6187964126
Merge pull request #1104 from jiangliu/v6-validation
rafs: enforce stricter and safer validation of RAFS v6 images
2023-02-24 14:06:03 +08:00
Jiang Liu a8e952954b publish: update crate version for nydus 2.2 rc3
Update crate version for nydus 2.2 rc3.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 13:31:36 +08:00
Jiang Liu 2616fb2c05 rafs: fix a bug in calculate offset for dirent name
There's a bug in calculating the offset and size for a RAFS v6 dirent name:
the size is treated as 0 instead of 4096 when the last block is exactly 4096 bytes.

Fixes: https://github.com/dragonflyoss/image-service/issues/1098

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 12:25:58 +08:00
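The boundary condition can be illustrated with a hypothetical helper (not the actual rafs code): the size of the last 4096-byte block must be reported as 4096, not 0, when the total size is an exact multiple of the block size:

```rust
const EROFS_BLOCK_SIZE_4096: u64 = 4096;

fn last_block_size(total: u64) -> u64 {
    let rem = total % EROFS_BLOCK_SIZE_4096;
    if rem == 0 && total > 0 {
        EROFS_BLOCK_SIZE_4096 // the buggy version effectively returned 0 here
    } else {
        rem
    }
}

fn main() {
    assert_eq!(last_block_size(8192), 4096);
    assert_eq!(last_block_size(5000), 904);
}
```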
imeoer 2039eeea5d
Merge pull request #1103 from mofishzz/hjn/fix-nydusify
nydusify: pack: add missing cli parameters
2023-02-24 09:37:43 +08:00
Jiang Liu 61898719df rafs: enforce stricter and safer validation of RAFS v6 images
Enforce stricter and safer validations of RAFS v6 images,
- return error instead of panic
- ensure Dirent content doesn't cross EROFS_BLOCK_SIZE boundary
- ensure symlink size is smaller than EROFS_BLOCK_SIZE
- ensure xattr name and value are within range
- validate block index for flat plain and flat inline inodes
- add helpers to reduce duplicated code

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-24 00:07:43 +08:00
Huang Jianan e22368b0f8 nydusify: pack: add missing cli parameters
Otherwise nydus-image will not work as expected.

Signed-off-by: Huang Jianan <jnhuang@linux.alibaba.com>
2023-02-23 20:09:25 +08:00
Jiang Liu 06d59008e0
Merge pull request #1101 from taoohong/master
service: improve the reusability of create_daemon
2023-02-23 10:40:32 +08:00
Tao Hong 8a642d0eb0
Merge branch 'dragonflyoss:master' into master 2023-02-23 09:23:25 +08:00
Jiang Liu c84e5d95fb
Merge pull request #1089 from imeoer/smoke-fix-compatibility
smoke: fix compatibility test
2023-02-22 22:32:23 +08:00
Yan Song 85bef056ac builder: fix toc entry struct alignment
Otherwise the serialized data on disk will be filled with
unexpected non-zero data.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-22 12:43:07 +00:00
Yan Song a0c877cb92 smoke: fix compatibility test
Nydusd 2.1.3 does not work with nydus-image 2.2+ for RAFS v6,
so upgrade to 2.1.4 to pass the test.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-22 11:38:14 +00:00
taohong 48a49587e5 fix: function create_daemon has too many arguments
Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-02-22 14:25:14 +08:00
taohong d67f2dabd6 service: improve the reusability of create_daemon
In order to improve the reusability of the create_daemon function,
the parameter processing logic is moved from the create_daemon
and initialize_fscache_service functions to the upper-layer
function process_singleton_arguments.

Signed-off-by: taohong <taoohong@linux.alibaba.com>
2023-02-22 13:52:51 +08:00
imeoer 34f1658d33
Merge pull request #1100 from jiangliu/revert-chunk-map
Revert "storage: return Ok(None) for check_range_ready_and_mark_pendi…
2023-02-22 11:13:30 +08:00
Jiang Liu 86537fa333 Revert "storage: return Ok(None) for check_range_ready_and_mark_pending () when needed"
This reverts commit 03326fd921.

The change may cause silent data corruption because it disables
bitmap.wait_for_range_ready() in function
FileCacheEntry::do_fetch_chunks().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-22 09:55:27 +08:00
Jiang Liu 40cdf1183b
Merge pull request #1096 from imeoer/builder-fix-blob-toc
builder: fix invalid compressed offset in image.blob toc entry
2023-02-21 14:23:30 +08:00
Yan Song 747edc9576 builder: fix invalid compressed offset in image.blob toc entry
The `blob_ctx.compressed_offset` has been set to the end of the compressed
blob in the tar2rafs blob dump, so we shouldn't use it as the initial
compressed_offset; otherwise it generates an invalid TOC entry.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-21 03:58:06 +00:00
imeoer aea56eb414
Merge pull request #1082 from loheagn/unpack-stream
nydus-image: add `backend-config` support to `nydus-image unpack`
2023-02-20 11:55:50 +08:00
Jiang Liu da82dc9738
Merge pull request #1091 from dragonflyoss/dependabot/go_modules/contrib/ctr-remote/github.com/containerd/containerd-1.6.18
build(deps): bump github.com/containerd/containerd from 1.6.17 to 1.6.18 in /contrib/ctr-remote
2023-02-19 22:52:36 +08:00
Jiang Liu bdfeff9049
Merge pull request #1092 from dragonflyoss/dependabot/go_modules/smoke/github.com/containerd/containerd-1.6.18
build(deps): bump github.com/containerd/containerd from 1.6.17 to 1.6.18 in /smoke
2023-02-19 22:52:06 +08:00
Jiang Liu f7cc717f98
Merge pull request #1093 from dragonflyoss/dependabot/go_modules/contrib/nydusify/github.com/containerd/containerd-1.6.18
build(deps): bump github.com/containerd/containerd from 1.6.17 to 1.6.18 in /contrib/nydusify
2023-02-19 22:51:41 +08:00
dependabot[bot] ae90f2567d
build(deps): bump github.com/containerd/containerd in /contrib/nydusify
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.6.17 to 1.6.18.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.6.17...v1.6.18)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-17 09:18:21 +00:00
dependabot[bot] 95fc09753f
build(deps): bump github.com/containerd/containerd
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.6.17 to 1.6.18.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.6.17...v1.6.18)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-17 09:18:13 +00:00
dependabot[bot] e2f57c744a
build(deps): bump github.com/containerd/containerd in /smoke
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.6.17 to 1.6.18.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.6.17...v1.6.18)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-17 09:18:13 +00:00
Jiang Liu 35a13c2db7
Merge pull request #1090 from dragonflyoss/dependabot/go_modules/contrib/nydusify/github.com/astaxie/beego-1.12.2
build(deps): bump github.com/astaxie/beego from 1.12.1 to 1.12.2 in /contrib/nydusify
2023-02-17 11:53:49 +08:00
dependabot[bot] 76de963f57
build(deps): bump github.com/astaxie/beego in /contrib/nydusify
Bumps [github.com/astaxie/beego](https://github.com/astaxie/beego) from 1.12.1 to 1.12.2.
- [Release notes](https://github.com/astaxie/beego/releases)
- [Commits](https://github.com/astaxie/beego/compare/v1.12.1...v1.12.2)

---
updated-dependencies:
- dependency-name: github.com/astaxie/beego
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-17 03:43:17 +00:00
Jiang Liu 949b553b7c
Merge pull request #1088 from jiangliu/daemon-exit
service: fix a bug to correctly shut down the service manager
2023-02-16 16:20:43 +08:00
Jiang Liu 4765973239
Merge pull request #1087 from imeoer/fix-smoke-image-conflicts
smoke: fix possible image name conflicts
2023-02-16 16:15:55 +08:00
Jiang Liu 187446d3de service: fix a bug to correctly shut down the service manager
Fix a bug to correctly shut down the service manager; the condition to exit
the service loop was wrong.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-16 16:13:58 +08:00
Yan Song 4c04ca88fc smoke: fix possible image name conflicts
To avoid concurrency conflicts between different nydus images
created by `nydusify convert` or `nydusify check`.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-16 07:51:06 +00:00
imeoer 486a2a04a4
Merge pull request #1084 from imeoer/smoke-log
smoke: make test log clearer
2023-02-16 14:17:14 +08:00
Yan Song a5dbdecd9c smoke: make test log clearer
Redirect the nydusify stdout to /dev/null, and keep the stderr
output for debugging.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-16 06:03:47 +00:00
Jiang Liu 7e4cf8de78
Merge pull request #1085 from imeoer/builder-fix-xattr-flag
builder: fix inode xattr flag for tar2rafs build
2023-02-16 12:56:26 +08:00
Nan Li 081c8f968b Add `backend-config` support to `nydus-image unpack`
Until now, `nydus-image unpack` only supported reading nydus blobs from blob files on the local machine (via the `blob` argument). This patch adds a new `backend-config` argument to the command, allowing `nydus-image unpack` to read blob data from all kinds of backends.

Signed-off-by: Nan Li <loheagn@icloud.com>
2023-02-16 11:40:59 +08:00
imeoer ca197484cb
Merge pull request #1086 from loheagn/fix-serde-http-proxy
Bugfix for `BackendConfigV2` deserialization
2023-02-16 11:39:17 +08:00
Nan Li c350998f9b Bugfix for `BackendConfigV2` deserialization
This patch adds a serde rename configuration to `http_proxy` of `BackendConfigV2` to fix the `BackendConfigV2` deserialization.

Signed-off-by: Nan Li <loheagn@icloud.com>
2023-02-16 11:23:24 +08:00
Yan Song 31ba48df75 builder: fix inode xattr flag in tar2rafs
We forgot to set the inode xattr flag in the tar2rafs build workflow,
which breaks the chunk info offset calculation in the
`_get_chunk_info` method:

```
let mut offset = self.offset + inode.size();
if inode.has_xattr() {
    let xattrs = state.file_map.get_ref::<RafsV5XAttrsTable>(offset)?;
    offset += size_of::<RafsV5XAttrsTable>() + xattrs.aligned_size();
}
offset += size_of::<RafsV5ChunkInfo>() * idx as usize;
```

This then leads to invalid chunk info:

```
invalid inode digest X, expected Y, ino: 1 name: "file-1"
```

This patch fixes it by setting the xattr flag in the tar2rafs workflow.
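
As a rough sketch of why the flag matters (the struct and function below are made up for illustration and are not the nydus API), reader-side offset math only skips the on-disk xattr table when the inode's flag says it has one:

```rust
/// Illustrative only: a stripped-down inode used to show why the flag matters.
struct InodeSketch {
    inode_size: usize,       // on-disk size of the inode record
    has_xattr: bool,         // the flag the tar2rafs path forgot to set
    xattr_table_size: usize, // size of the serialized xattr table, if any
}

/// Mirrors the offset computation quoted above: when xattrs were written to
/// disk but `has_xattr` is false, the computed offset lands inside the xattr
/// table and the chunk infos read from it are garbage.
fn chunk_info_offset(base: usize, inode: &InodeSketch, idx: usize, chunk_info_size: usize) -> usize {
    let mut offset = base + inode.inode_size;
    if inode.has_xattr {
        offset += inode.xattr_table_size;
    }
    offset + chunk_info_size * idx
}

fn main() {
    let with_flag = InodeSketch { inode_size: 128, has_xattr: true, xattr_table_size: 64 };
    let missing_flag = InodeSketch { inode_size: 128, has_xattr: false, xattr_table_size: 64 };
    // With the flag missing, the offset is short by the xattr table size.
    let delta = chunk_info_offset(0, &with_flag, 0, 80) - chunk_info_offset(0, &missing_flag, 0, 80);
    assert_eq!(delta, 64);
}
```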

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-16 02:47:09 +00:00
Jiang Liu cd1df37441
Merge pull request #1083 from imeoer/nydusify-fix-prefetch
nydusify: upgrade deps to fix lost prefetch table
2023-02-15 16:28:34 +08:00
Yan Song d2e2d5a6b2 nydusify: upgrade deps to fix lost prefetch table
The `PrefetchPatterns` option should be passed to the builder (nydus-image)
merge subcommand, otherwise the prefetch table will not work in
the final bootstrap (used in the nydus image).

See related commit: 01fba5d171

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-15 07:51:22 +00:00
Jiang Liu f6681a49e2
Merge pull request #1073 from imeoer/nydusify-speedup
nydusify: speed up conversion with tar2rafs
2023-02-15 15:39:38 +08:00
Yan Song c34e945eef nydusify: speed up conversion with tar2rafs
Default to enabling the `--type tar-rafs` option for nydus-image, which
converts the OCI tar blob stream into a nydus blob directly, eliminating
the need to decompress it to a local directory first, thus greatly
accelerating the pack process.

For compatibility, we still support the env variable `NYDUS_DISABLE_TAR2RAFS=true`
to disable the optimization.

It's an internal feature, so we only bump the acceleration-service
and nydus-snapshotter packages to the latest version.

Related PR: https://github.com/containerd/nydus-snapshotter/pull/352

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-15 04:01:10 +00:00
imeoer e057895d0a
Merge pull request #1079 from jiangliu/rafs-compat
rafs: reserve bits in RafsSuperFlags for future compatible changes
2023-02-14 14:07:26 +08:00
Jiang Liu 2605421a6e rafs: reserve bits in RafsSuperFlags for future compatible changes
Reserve several bits in RafsSuperFlags for future compatible changes.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-14 13:18:32 +08:00
Jiang Liu 8f28cf8cd8
Merge pull request #1078 from jiangliu/rafs-v0.2.1
Prepare for publishing rafs v0.2.1
2023-02-14 13:00:23 +08:00
Jiang Liu e4383b59a9 rafs: prepare for publishing v0.2.1
Prepare for publishing v0.2.1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-14 10:20:07 +08:00
Jiang Liu 64a93e8c99 storage: prepare for publishing v0.6.1
Prepare for publishing v0.6.1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-14 10:16:03 +08:00
Jiang Liu 27f5b3cf47 util: prepare for publishing v0.4.1
Prepare for publishing v0.4.1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-14 10:07:36 +08:00
Jiang Liu 94f6947b2b
Merge pull request #1077 from jiangliu/app-v0.4
Prepare for releasing nydus-app v0.4.0
2023-02-13 22:16:59 +08:00
Jiang Liu f6daab4f8a app: prepare for releasing v0.4.0
Prepare for releasing nydus-app v0.4.0.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-13 18:57:24 +08:00
Jiang Liu 1584c54485 api: prepare for releasing v0.2.1
Prepare for releasing nydus-api v0.2.1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-13 18:54:55 +08:00
Jiang Liu 31c8988de5
Merge pull request #1076 from adamqqqplay/update-ctr-remote
ctr-remote: update dependencies to latest
2023-02-13 18:16:53 +08:00
Qinqi Qu 1b4687fae3 ctr-remote: update dependencies to latest
Fix: https://github.com/advisories/GHSA-mc8v-mgrf-8f4m

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-02-13 17:18:55 +08:00
imeoer d512e1e536
Merge pull request #1074 from jiangliu/daemon-controller
service: move DaemonController from nydus into nydus-service
2023-02-13 14:54:04 +08:00
Jiang Liu 8efd15e66e
Merge pull request #1075 from imeoer/builder-chunk-dict-fix
builder: fix configuration error on chunk dict load
2023-02-13 14:46:54 +08:00
Jiang Liu edae434a09 service: move DaemonController from nydus into nydus-service
Move DaemonController from nydus into nydus-service so it can
be reused by other projects.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-13 14:37:40 +08:00
Yan Song b77c6ab697 builder: fix configuration error on chunk dict load
This patch fixes the error when enabling the blob-toc feature with `--chunk-dict`:

```
$ nydus-image create --chunk-dict bootstrap=/path/to/chunk-dict-bootstrap --features blob-toc ...

Error: failed to open bootstrap file "/path/to/chunk-dict-bootstrap"  module=builder
```

The separate chunk dict bootstrap doesn't support blob access.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-13 06:24:28 +00:00
Jiang Liu 8e8445cb90
Merge pull request #1072 from jiangliu/service
nydus-service: split service framework from nydus into nydus-service
2023-02-12 15:13:16 +08:00
Jiang Liu 7f25983540 nydus-service: split service framework from nydus into nydus-service
Split the service framework from nydus into the nydus-service crate, to reduce
dependencies when it is used by other projects like image-rs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-11 18:08:52 +08:00
imeoer 6581ec65a9
Merge pull request #1071 from jiangliu/zran-overflow
storage: fix an out-of-range bug related to zran
2023-02-11 13:48:54 +08:00
Jiang Liu ff615b51c8
Merge pull request #1070 from jiangliu/dep-ansi-term.patch
dep: remove dependency on ansi-term
2023-02-11 12:01:25 +08:00
Jiang Liu be1df698b0 storage: fix an out-of-range bug related to zran
Fix an out-of-range bug related to zran.

Ref: https://github.com/dragonflyoss/image-service/actions/runs/4148746667/jobs/7177127033

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-11 11:19:46 +08:00
Jiang Liu 4bc4adf7b0 dep: remove dependency on ansi-term
The ansi-term crate is deprecated, so upgrade flexi-logger to get
rid of the dependency on the ansi-term crate.

Fixes: https://github.com/dragonflyoss/image-service/issues/1064

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-11 11:10:11 +08:00
imeoer 6f97f0c2b1
Merge pull request #1069 from jiangliu/zran-prefetch
storage: fix bugs in prefetch data for ZRan images
2023-02-10 16:15:47 +08:00
Jiang Liu e79f3c5e3d storage: fix bugs in prefetch data for ZRan images
ZRan images may have holes between compressed chunk data, which breaks
the prefetch algorithm. So fix the bug by special handling of prefetch
for ZRan images.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-10 16:04:44 +08:00
Jiang Liu 4721545d45
Merge pull request #1067 from imeoer/nydusify-fix-overlay-options
nydusify: fix overlayfs mount options for check
2023-02-10 16:04:18 +08:00
imeoer 374d7c5285
Merge pull request #1068 from adamqqqplay/fix-nydusify-messages
nydusify: Fix nydusify help messages
2023-02-10 15:31:11 +08:00
Qinqi Qu c4125cec50 nydusify: Fix nydusify help messages
1. The "--merge-platform" flag in nydusify convert no longer need to push OCIv1 image first.
2. Fix wrong indentation in Version prompt.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-02-10 15:10:33 +08:00
Yan Song c07cf12906 nydusify: cleanup temporary directories for check
Remove temporarily generated directories after the `nydusify check`
command to save disk space, and prevent file conflicts
between multiple `nydusify check` operations.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-09 10:51:34 +00:00
Yan Song b2c0f42283 nydusify: fix overlayfs mount options for check
To fix the error when mounting an OCI v1 image in the check subcommand:

```
error: mount options is too long
```

The mount options have a 4K buffer size limitation, so we
encounter the issue with huge images that have many layers.

We need to shorten the lowerdir paths in the overlayfs
option, changing `sha256:xxx` to `layer-N`, to alleviate the issue.
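
A minimal sketch of the idea (nydusify itself is written in Go; the helper names below are purely illustrative): map each long `sha256:xxx` layer directory to a short `layer-N` alias before composing the `lowerdir=` option, so the option stays well under the 4K mount buffer.

```rust
/// Illustrative only: shorten layer directory names before building the
/// overlayfs `lowerdir=` option so it fits the ~4K mount-option buffer.
fn shorten_layers(work_dir: &str, layer_digests: &[&str]) -> Vec<(String, String)> {
    // Returns (short_path, original_digest) pairs; the caller would create
    // the `layer-N` directories (or symlinks) pointing at the digest dirs.
    layer_digests
        .iter()
        .enumerate()
        .map(|(i, digest)| (format!("{}/layer-{}", work_dir, i), digest.to_string()))
        .collect()
}

fn lowerdir_option(shortened: &[(String, String)]) -> String {
    let dirs: Vec<&str> = shortened.iter().map(|(p, _)| p.as_str()).collect();
    format!("lowerdir={}", dirs.join(":"))
}

fn main() {
    let layers = ["sha256:aaaa", "sha256:bbbb"];
    let short = shorten_layers("/tmp/check", &layers);
    // Prints: lowerdir=/tmp/check/layer-0:/tmp/check/layer-1
    println!("{}", lowerdir_option(&short));
}
```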

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-09 10:51:32 +00:00
imeoer 45cc125bc6
Merge pull request #1062 from jiangliu/zran-assert
zran: fix an assert when generating ZRAN images
2023-02-09 15:41:44 +08:00
Jiang Liu 8ca0fc2fa4 zran: fix an assert when generating ZRAN images
Fix an assert when generating ZRAN images.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-09 11:22:43 +08:00
imeoer ab07dec227
Merge pull request #972 from adamqqqplay/nydusd-localdisk-backend
storage: introduce new backend localdisk in nydusd
2023-02-09 00:03:11 +08:00
imeoer 2b37bcfd4d
Merge pull request #1066 from dragonflyoss/dependabot/go_modules/contrib/ctr-remote/github.com/containerd/containerd-1.6.12
build(deps): bump github.com/containerd/containerd from 1.6.6 to 1.6.12 in /contrib/ctr-remote
2023-02-09 00:02:24 +08:00
Jiang Liu 274033bfd9
Merge pull request #1065 from dragonflyoss/dependabot/cargo/contrib/nydus-backend-proxy/tokio-1.25.0
build(deps): bump tokio from 1.19.2 to 1.25.0 in /contrib/nydus-backend-proxy
2023-02-08 21:45:06 +08:00
dependabot[bot] b56d2fab49 build(deps): bump tokio in /contrib/nydus-backend-proxy
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.19.2 to 1.25.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.19.2...tokio-1.25.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-02-08 19:25:54 +08:00
dependabot[bot] 37d6860009
build(deps): bump github.com/containerd/containerd
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.6.6 to 1.6.12.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.6.6...v1.6.12)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-08 11:00:07 +00:00
Qinqi Qu 44ced9959d storage: introduce new backend `localdisk`
This patch adds a new storage backend, localdisk, which supports reading
blobs from a local disk. In this scenario, each blob layer is stored in
its own partition, and the partitions are addressed on the local raw
disk via the GUID Partition Table (GPT), which means that the disk
stores the entire image.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-02-08 17:03:24 +08:00
Jiang Liu 931ad19aa8
Merge pull request #1063 from adamqqqplay/fix-cargo-deny-errors
cargo: Update dependencies to resolve security vulnerabilities
2023-02-08 17:00:08 +08:00
Qinqi Qu 7e050b2288 cargo: Update dependencies to resolve security vulnerabilities
cargo deny has detected many security vulnerabilities, especially in
openssl-src; this patch upgrades the related crates to solve the problem.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-02-08 16:27:13 +08:00
imeoer 2ad8675a57
Merge pull request #1059 from ccx1024cc/morgan/fix_nydus_test_debug_log
nydus-test: fix debug panic
2023-02-07 14:05:31 +08:00
泰友 40e31df62d nydus-test: fix debug panic
Reproduction: add `NYDUS_TEST_VERBOSE=YES` to the booting params of pytest.
For example:

```shell
    sudo NYDUS_TEST_VERBOSE=YES pytest functional-test/test_layered_image.py::test_basic_read
```

Output:
```
    ERROR    root:workload_gen.py:459 Stress read failure, not all arguments converted during string formatting
    Traceback (most recent call last):
      File "/home/morgan/workspace/rust/image-service/contrib/nydus-test/framework/workload_gen.py", line 456, in io_read
        cnt, size, duration = self.read_collected_files(io_duration)
      File "/home/morgan/workspace/rust/image-service/contrib/nydus-test/framework/workload_gen.py", line 404, in read_collected_files
        logging.debug(
      File "/usr/lib/python3.10/logging/__init__.py", line 2148, in debug
        root.debug(msg, *args, **kwargs)
      File "/usr/lib/python3.10/logging/__init__.py", line 1465, in debug
        self._log(DEBUG, msg, args, **kwargs)
      File "/usr/lib/python3.10/logging/__init__.py", line 1624, in _log
        self.handle(record)
      File "/usr/lib/python3.10/logging/__init__.py", line 1634, in handle
        self.callHandlers(record)
      File "/usr/lib/python3.10/logging/__init__.py", line 1696, in callHandlers
        hdlr.handle(record)
      File "/usr/lib/python3.10/logging/__init__.py", line 968, in handle
        self.emit(record)
      File "/usr/local/lib/python3.10/dist-packages/_pytest/logging.py", line 343, in emit
        super().emit(record)
      File "/usr/lib/python3.10/logging/__init__.py", line 1108, in emit
        self.handleError(record)
      File "/usr/lib/python3.10/logging/__init__.py", line 1100, in emit
        msg = self.format(record)
      File "/usr/lib/python3.10/logging/__init__.py", line 943, in format
        return fmt.format(record)
      File "/usr/local/lib/python3.10/dist-packages/_pytest/logging.py", line 114, in format
        return super().format(record)
      File "/usr/lib/python3.10/logging/__init__.py", line 678, in format
        record.message = record.getMessage()
      File "/usr/lib/python3.10/logging/__init__.py", line 368, in getMessage
        msg = msg % self.args
    TypeError: not all arguments converted during string formatting
```

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-02-07 10:35:13 +08:00
imeoer 9d268f70ba
Merge pull request #974 from loheagn/cs-proxy
storage: add new backend `HttpProxy`
2023-02-06 14:17:12 +08:00
Nan Li 7aac4129d8 storage: add new backend `HttpProxy`
This patch adds a new storage backend `HttpProxy` which can access blobs through an HTTP proxy server.

The HTTP proxy server can be local (using a unix socket) or remote (using `https://` or `http://`).

`HttpProxy` uses two API endpoints to access the blobs:
- `HEAD /path/to/blobs` to get the blob size
- `GET /path/to/blobs` to read the blob

The HTTP proxy server should respect [the `Range` header](https://www.rfc-editor.org/rfc/rfc9110.html#name-range) to compute the offset and length of the blob.

The example config files for this new backend may be:

```jsonc
// for remote usage
{
  "backend": {
    "type": "http-proxy",
    "config": {
      "addr": "http://127.0.0.1:9977",
      "path": "/namespace/<repo>/blobs"
    }
  }
}
```

or

```jsonc
// for local unix socket
{
  "backend": {
    "type": "http-proxy",
    "config": {
      "addr": "/path/to/unix/socket/file"
    }
  }
}
```

There is also a test in `http_proxy.rs` to make sure `HttpProxy` works well, which sets up a simple HTTP server and creates a `HttpProxy` backend to fetch contents from the server.
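
As a rough sketch of the protocol described above (not the actual `http_proxy.rs` code; `reqwest` is used here only for illustration), a client would first ask for the blob size with `HEAD` and then read a byte range with `GET`:

```rust
use reqwest::blocking::Client;

/// Illustrative only: exercise the two endpoints an `http-proxy` style server
/// is expected to serve, per the description above.
fn read_blob_range(base: &str, blob_path: &str, offset: u64, len: u64) -> reqwest::Result<Vec<u8>> {
    let client = Client::new();
    let url = format!("{}{}", base, blob_path);

    // HEAD /path/to/blobs -> blob size via Content-Length.
    let head = client.head(&url).send()?;
    let size: u64 = head
        .headers()
        .get(reqwest::header::CONTENT_LENGTH)
        .and_then(|v| v.to_str().ok())
        .and_then(|v| v.parse().ok())
        .unwrap_or(0);
    assert!(len > 0 && offset + len <= size, "requested range out of bounds");

    // GET /path/to/blobs with a Range header -> the requested slice.
    let range = format!("bytes={}-{}", offset, offset + len - 1);
    let resp = client.get(&url).header(reqwest::header::RANGE, range).send()?;
    Ok(resp.bytes()?.to_vec())
}

fn main() -> reqwest::Result<()> {
    // Hypothetical addresses matching the remote config example above.
    let data = read_blob_range("http://127.0.0.1:9977", "/namespace/repo/blobs", 0, 4096)?;
    println!("read {} bytes", data.len());
    Ok(())
}
```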

Signed-off-by: Nan Li <loheagn@icloud.com>
2023-02-05 19:21:13 +08:00
Jiang Liu fadc715a4c
Merge pull request #1053 from ccx1024cc/morgan/test_framework
smoke: support async test cases
2023-02-04 22:52:29 +08:00
泰友 d6c3f8e1b1 smoke: support async test cases
It provides both a dynamic and a static/common way to define test cases. The dynamic
way generates test cases with customized generators at runtime. The static way executes
cases that are defined at compile time.

It also provides both synchronous and asynchronous ways to run test cases. The
asynchronous/synchronous control is at suite level.

Compared with github.com/onsi/ginkgo, this framework provides a simpler way to organize
cases into suites, which requires learning fewer terms and fewer nested definitions.
Moreover, the asynchronous run is more Go-native and requires no other binary.

Compared with github.com/stretchr/testify, this framework provides an asynchronous mode
and a dynamic way to generate cases.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-02-03 17:47:07 +08:00
imeoer 005c3b7166
Merge pull request #1052 from changweige/fix-anolis-test
Fix anolis test
2023-02-03 16:18:21 +08:00
Changwei Ge 5e7b3daea5 docs: update docs about how to start nydus-snapshotter
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-02-03 15:19:20 +08:00
Changwei Ge 0d11f7cb6d test: let anolis smoke test adapt to nydus-snapshotter's CLI
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-02-03 14:59:45 +08:00
Peng Tao b59b64c47e
Merge pull request #1050 from yqleng1987/master
add e2e testcases for ci
2023-02-03 11:59:01 +08:00
yqleng1987 f112e7606e add e2e testcases for ci
These test cases are developed with the bats test framework, covering
compiling related components and running containers with rafs and zran images.

Signed-off-by: Yiqun Leng <yqleng@linux.alibaba.com>
2023-02-02 20:47:54 +08:00
Jiang Liu eb5ec51ae4
Merge pull request #1047 from ccx1024cc/morgan/migrate_api_test
smoke: migrate api tests from contrib/nydus-test
2023-02-01 21:37:15 +08:00
Jiang Liu bb65da8b38
Merge pull request #1049 from imeoer/action-fix-fsck
action: download fsck.erofs before using
2023-02-01 20:11:35 +08:00
Yan Song e7944449f3 action: download fsck.erofs before using
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-01 09:49:37 +00:00
Jiang Liu c09b8efe86
Merge pull request #1048 from jiangliu/zran-fsck
action: enable fsck.erofs for ZRan images
2023-02-01 17:30:25 +08:00
Jiang Liu 08c6bf8297 action: enable fsck.erofs for ZRan images
Check converted ZRan images with fsck.erofs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-01 17:14:46 +08:00
泰友 4e0cefc6ac smoke: migrate tests from contrib/nydus-test/api.py
This PR rewrites the tests from contrib/nydus-test/api.py in Go for
smoke. It tests HTTP APIs, including get_daemon_status,
get_global_metrics, get_latest_visited_files_metrics, get_files_metrics,
get_backend_metrics and get_blobcache_metrics.

The main idea of this PR is to migrate the tests from Python to Go.
Because the Python tests are stale, the contents of this PR are far from
complete.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-02-01 17:08:46 +08:00
Jiang Liu 0821c866e4
Merge pull request #1046 from imeoer/action-convert-1
action: fix referenced oci image for zran
2023-02-01 16:56:22 +08:00
Yan Song e8b9370532 action: fix referenced oci image for zran
Also push the referenced OCI image to the target registry for zran images,
otherwise the nydusd runtime can't find the OCI layers.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-01 08:37:56 +00:00
Jiang Liu cba85dd5e2
Merge pull request #1043 from imeoer/action-convert
action: convert and check top images for zran
2023-02-01 15:13:16 +08:00
Yan Song 634041d6aa action: tidy workflow names
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-01 06:53:39 +00:00
Yan Song 70f4a0e48a action: convert and check top images for zran
To ensure zran images work fine on conversion and run.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-02-01 06:53:38 +00:00
imeoer 744f8f560a
Merge pull request #1045 from jiangliu/zran
Fixes two bugs related to ZRan and fscache
2023-02-01 14:09:26 +08:00
Jiang Liu 4269bf2c7b storage: align end address when searching chunks for ZRAN
Fscache aligns the request size to 4K, so we also need to align the chunk
end address to 4K when searching ZRAN chunks.
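
For reference, the usual round-up-to-4K arithmetic looks like this (a generic sketch, not the nydus code):

```rust
/// Round `end` up to the next 4K boundary, matching fscache-sized requests.
fn align_up_4k(end: u64) -> u64 {
    const ALIGN: u64 = 4096;
    (end + ALIGN - 1) & !(ALIGN - 1)
}

fn main() {
    assert_eq!(align_up_4k(4096), 4096);
    assert_eq!(align_up_4k(4097), 8192);
    println!("ok");
}
```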

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-01 13:53:39 +08:00
Jiang Liu 68641a8273 storage: do not create BlobDevice when loading ZRAN metadata blob
When loading the ZRAN metadata blob for fscache, the fscache file handle
is not ready yet. So do not create a BlobDevice, otherwise it will fail
to load.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-01 13:53:20 +08:00
imeoer 462b08312a
Merge pull request #1044 from jiangliu/cmd-option-fix
nydusd: fix a bug in parsing `fscache-tag` commandline option
2023-02-01 10:17:32 +08:00
Jiang Liu 16d538afc9 nydusd: fix a bug in parsing `fscache-tag` commandline option
When `--fscache` is present but `--fscache-tag` is missing, nydusd
will panic.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-02-01 00:07:53 +08:00
Jiang Liu 33d24990f9
Merge pull request #1040 from jiangliu/macos-fix
nydus: fix a build failure for macos
2023-01-31 15:30:34 +08:00
Jiang Liu 1f0dce105d nydus: fix a build failure for macos
Fixes a build failure for the macOS platform:
https://github.com/dragonflyoss/image-service/actions/runs/4048615864/jobs/6964078882

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-31 14:03:56 +08:00
Jiang Liu c9980d3b3f
Merge pull request #1039 from imeoer/update-zran-doc
docs: update to use nydusify in zran doc
2023-01-31 13:43:29 +08:00
Yan Song fdc8957acb docs: update to use nydusify in zran doc
To simplify zran image conversion workflow.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-31 03:46:25 +00:00
Jiang Liu 02b621aed4
Merge pull request #1036 from imeoer/fix-lost-smoke-case
smoke: fix lost compatibility test case
2023-01-30 23:19:47 +08:00
Yan Song c9eeb95cb8 smoke: fix lost compatibility test case
We skipped compatibility test cases because of an incorrect env
configuration; this patch fixes it.

See previous action: https://github.com/dragonflyoss/image-service/actions/runs/4042014250/jobs/6949286202#step:7:70

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-30 11:02:39 +00:00
Jiang Liu 5bef7fc1e5
Merge pull request #1027 from jiangliu/nydus-refactor
Move generic code from nydusd into nydus crate
2023-01-30 18:06:32 +08:00
Jiang Liu baf9dad0c8
Merge pull request #1022 from jiangliu/file-leakage
storage: avoid file leakage in error handling path
2023-01-30 16:38:17 +08:00
imeoer 15f248310e
Merge pull request #1035 from imeoer/fix-chunk-size-merge
builder: fix inconsistent chunk size for merge
2023-01-30 16:09:02 +08:00
Yan Song 87d2d6d5cc smoke: enable chunk size 0x200000 test case
We have fixed the chunk size related bug in `nydus-image merge`,
so just enable the test case in smoke.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-30 07:58:47 +00:00
Yan Song 53da56fe5e builder: fix inconsistent chunk size for merge
The chunk size should be set in the RAFS superblock when chunk dict is
enabled in the `nydus-image merge` subcommand.

Otherwise merge will set the default chunk size (0x100000) in the final
bootstrap, which causes an inconsistent chunk size between the blob
entry and the superblock, and makes nydusd panic:

```
ERROR [/src/metadata/layout/v6.rs:1439] RafsV6Blob: idx 0 invalid chunk_size 0x200000, expect 0x100000
ERROR [/src/error.rs:22] Error:
	"invalid Rafs v6 blob entry"
	at rafs/src/metadata/layout/v6.rs:1648
	note: enable `RUST_BACKTRACE=1` env to display a backtrace
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-30 07:57:22 +00:00
Jiang Liu 33a595f71b nydus: use thiserror to simplify implementation of Error
Use thiserror to simplify the implementation of Error.
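
For context, this is the kind of boilerplate `thiserror` removes (a generic example, not the actual nydus error type):

```rust
use thiserror::Error;

/// A generic thiserror-derived error enum: the Display, Error and From
/// implementations come from the derive instead of being hand-written.
#[derive(Error, Debug)]
enum DaemonError {
    #[error("invalid arguments: {0}")]
    InvalidArguments(String),
    #[error("I/O failure")]
    Io(#[from] std::io::Error),
}

fn main() {
    let e = DaemonError::InvalidArguments("missing --config".into());
    println!("{}", e);
}
```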

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-30 14:10:10 +08:00
Jiang Liu e754c4b872 nydus: move common code from nydusd into nydus crate
Move common code from nydusd into nydus crate, so it can be reused.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-30 14:09:20 +08:00
Jiang Liu ca1bb5352e nydus: move Error and Result from nydusd into nydus crate
Move Error and Result from nydusd into nydus crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-30 14:09:10 +08:00
Liu Bo 6e9772b4f6
Merge pull request #1033 from jiangliu/v2.2-crates
publish: prepare for publishing crates for v2.2
2023-01-29 18:13:35 -08:00
Jiang Liu d22fb628e1 publish: prepare for publishing crates for v2.2
Prepare for publishing crates for v2.2.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-29 14:59:47 +08:00
Changwei Ge ffc4cc7d7c
Merge pull request #1032 from jiangliu/libz-sys
build: clean up libz-sys stale cache for static build
2023-01-29 14:17:49 +08:00
Jiang Liu c89476fd76 build: clean up libz-sys stale cache for static build
The docker and host build env use different root paths, which causes
libz-sys to report an inconsistent cache state.

Fixes: https://github.com/dragonflyoss/image-service/issues/1028

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-29 13:58:46 +08:00
Jiang Liu 05ce370dc3
Merge pull request #1031 from changweige/fix-nydus-test
nydus-test: don't fetch iostats per file metrics
2023-01-29 13:56:32 +08:00
Changwei Ge 8ff84c0cfd nydus-test: don't fetch iostats per file metrics
The per-file metrics have a large size, ending up as an
incomplete JSON string. The root cause of the incomplete
buffer is still unclear. Moreover, we are going
to remove per-file metrics in the future.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-01-29 12:17:48 +08:00
Jiang Liu 833be89c63
Merge pull request #1029 from changweige/revert-default-install-prefix
makefile: revert a change on default `install` prefix dir
2023-01-29 10:34:24 +08:00
Changwei Ge 806b747150 makefile: revert a change on default `install` prefix dir
For user-self-compiled binaries, we'd better install them
into `/usr/local/bin` rather than `/usr/bin`, where software
package managers usually put binaries. If the default $PATH
does not include `/usr/local/bin`, we should append it to the $PATH
or change the install prefix to `/usr/bin` when
performing `make install`.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-01-29 09:21:59 +08:00
Changwei Ge 99ba5000a7
Merge pull request #1026 from jiangliu/nydusd-options
nydusd: refine commandline help messages
2023-01-28 11:06:06 +08:00
Jiang Liu 72aa8f3182
Merge pull request #1025 from adamqqqplay/update-readme
docs: fix badges and update community description
2023-01-26 16:36:12 +08:00
Jiang Liu 2cfb0a137e nydusd: refine commandline help messages
Refine commandline help messages, and restrict the way to provide
command line options for subcommands.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-25 13:58:00 +08:00
Qinqi Qu 04dcee3111 docs: fix badges and update community description
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-01-20 22:58:04 +08:00
imeoer 6b90aa5e57
Merge pull request #1023 from jiangliu/trim-blob-id
storage: trim `\0` at the tail of blob id
2023-01-20 11:32:46 +08:00
imeoer 953b965552
Merge pull request #1024 from jiangliu/dep2
dep: remove unused dependencies
2023-01-20 11:31:56 +08:00
Jiang Liu 8450262ae3 dep: remove unused dependencies
Remove unused dependencies, and upgrade fuse-backend-rs to fix a bug
in `lookup()`.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-19 17:04:34 +08:00
Jiang Liu 4aa74c50d3 storage: trim `\0` at the tail of blob id
With a user-specified blob id, there may be padding `\0` at the end,
which must be trimmed when building the blob file path from the blob id.
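
The trimming itself is a standard-library one-liner; a minimal sketch (the cache path below is illustrative):

```rust
fn main() {
    // A user-supplied blob id may arrive as a fixed-size, NUL-padded buffer.
    let raw = "abcd1234\0\0\0\0";
    let blob_id = raw.trim_end_matches('\0');
    assert_eq!(blob_id, "abcd1234");
    println!("blob file path: /path/to/cache/{}", blob_id);
}
```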

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-19 16:07:37 +08:00
Jiang Liu 7820f83281 storage: avoid file leakage in error handling path
If an error happens while preparing the chunkmap and blob.meta files for
data blobs, some files may be left over. So change the code to avoid
file leakage.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-19 16:01:34 +08:00
Jiang Liu 045c438b6b
Merge pull request #1021 from ccx1024cc/morgan/refactor-smoke
smoke: DescartesIterator must return item when hasNext() returns true
2023-01-19 15:33:48 +08:00
泰友 937a61a625 fix: scroll back golang-lint version from 1.49.0 to 1.47.3
Golang-lint 1.48+ (included) uses gofmt based on Go 1.19. The result of
gofmt based on Go 1.19 is different from gofmt based on
Go 1.18 ([different](https://github.com/golang/go/issues/54789)). As a
result, Golang-lint 1.48+ reports "File is not `gofmt`-ed with `-s` (gofmt)",
although gofmt (Go 1.18 based) says the code is ok.

Scroll back to golang-lint 1.47.3, which uses gofmt based on Go 1.18. In
the future, upgrade golang-lint when the Go version is higher than or equal to
1.19.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-01-19 15:27:59 +08:00
imeoer 3de2d83626
Merge pull request #1020 from jiangliu/zran-assert
storage: fix an assert failure related to zran
2023-01-19 13:35:56 +08:00
泰友 dcf79ef1b0 smoke: DescartesIterator must return item when hasNext() returns true
In order to return an Item from `Next()` when `hasNext()` returns true, DescartesIterator stores the information of the next item,
which involves the user-customized `skip()` function. It updates this information only when `Next()` is called.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-01-19 11:56:30 +08:00
Jiang Liu 033de262e7 storage: fix an assert failure related to zran
Fix an assert failure related to zran.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-19 11:00:03 +08:00
Jiang Liu 8ac26518d0
Merge pull request #1016 from imeoer/fix-release-pipeline
release: fix pipeline hang when compile for amd64
2023-01-18 23:39:10 +08:00
Yan Song 57cc413b83 release: fix pipeline hang when compile for amd64
Add `DEBIAN_FRONTEND=noninteractive` to skip interaction when installing
cmake, otherwise the command will get stuck forever.

ref: https://stackoverflow.com/questions/67452096/docker-build-hangs-based-on-order-of-install

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-18 15:18:23 +00:00
imeoer 0af45e9747
Merge pull request #983 from imeoer/refactor-smoke
smoke: refactor smoke test
2023-01-18 19:02:06 +08:00
泰友 923a8dc3a5 smoke: refactor multiple nested loops into an iterator
Refactor the way test options are generated. Replace the nested loops with an
iterator, which works by compositing cursors over the option enumerations.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2023-01-18 10:56:47 +00:00
Yan Song c4f15b8a00 smoke: add github action pipeline
1. refactor the smoke test in smoke.yml for faster runs (named ci.yml before);
2. remove ci.yml, the macOS build will be run in release.yml;

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-18 10:56:47 +00:00
Yan Song 1d48615c64 nydusify: make backward compatibility for check subcommand
1. Do not use "fuse" subcommand for nydusd in checker for compatibility;
2. Do not check mtime between OCI v1 image and nydus image for compatibility;
3. Throw the error and exit in advance if have any fails in filesystem check;

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-18 10:56:47 +00:00
Yan Song b837a4ddee smoke: refactor smoke test
Refactor the smoke test with golang; it makes it easier to test the basic
functionality of nydus in different dimensions, from a single directory
to multi-layer images, and it is much easier to maintain than the rust version.

Currently the below test cases are implemented:

- TestCompatibility: test binary compatibility (nydus-image, nydusd,
  nydusify);
- TestImage: test native, zran image build & run for fusedev driver;
- TestNativeLayer: test various options for nydus native layer;
- TestZranLayer: test various options for zran layer;

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-18 10:56:47 +00:00
imeoer 651b7c73d0
Merge pull request #1019 from jiangliu/chunk-map
storage: return Ok(None) for check_range_ready_and_mark_pending () wh…
2023-01-18 18:54:12 +08:00
Jiang Liu af3bdfc3be
Merge pull request #1018 from jiangliu/v6-underflow
rafs: avoid possible underflow in find_target_block()
2023-01-18 18:20:28 +08:00
Jiang Liu 03326fd921 storage: return Ok(None) for check_range_ready_and_mark_pending () when needed
Return Ok(None) for check_range_ready_and_mark_pending() when the
result is empty, to follow the definition.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-18 18:12:48 +08:00
Jiang Liu 1a4bb6d9ca rafs: avoid possible underflow in find_target_block()
Avoid possible underflow in find_target_block().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-18 17:52:17 +08:00
Jiang Liu d8c3e0f33d
Merge pull request #1015 from imeoer/fix-oss-signature
storage: fix invalid hmac signature for oss backend
2023-01-18 17:27:13 +08:00
Yan Song 3bf8f7f3e2 storage: fix invalid hmac signature for oss backend
The crate `hmac-sha1-compact` does not work for the OSS signature calculation,
which causes the OSS server to throw the error:

```
SignatureDoesNotMatch: The request signature we calculated does
not match the signature you provided. Check your key and signing method.
```

Fix it by replacing with `hmac` and `sha1` crates.
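
For reference, HMAC-SHA1 with the `hmac` and `sha1` crates looks roughly like this (a generic sketch of the RustCrypto API, not the nydus OSS backend code; the string-to-sign is made up):

```rust
use hmac::{Hmac, Mac};
use sha1::Sha1;

type HmacSha1 = Hmac<Sha1>;

/// OSS-style request signing: HMAC-SHA1 over the canonicalized request,
/// normally base64-encoded afterwards and placed in the Authorization header.
fn sign(secret: &[u8], string_to_sign: &str) -> Vec<u8> {
    let mut mac = HmacSha1::new_from_slice(secret).expect("HMAC accepts keys of any length");
    mac.update(string_to_sign.as_bytes());
    mac.finalize().into_bytes().to_vec()
}

fn main() {
    let sig = sign(b"access-key-secret", "GET\n\n\nWed, 18 Jan 2023 06:25:38 GMT\n/bucket/blob");
    println!("signature length: {} bytes", sig.len()); // 20 bytes for SHA-1
}
```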

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-18 06:25:38 +00:00
imeoer c6a7ae0e19
Merge pull request #1006 from jiangliu/nydus-test-localfs
Enhance nydus-test to support localfs backend of v2.2
2023-01-17 18:47:27 +08:00
Jiang Liu 941b8e8f3b utils: enhance error messages with more context information
Enhance error messages with more context information. Also introduce
some utilities to the nydus-error crate to simplify code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-17 15:59:07 +08:00
Jiang Liu fa5b46e1fe nydus-test: enhance localfs backend to support nydus 2.2
Now nydus-image and nydus only support the following combinations:
1) nydus-image --blob-id xxx --blob xxx. The blob file name and the blob
   id stored in the bootstrap will both be `xxx`.
2) nydus-image --blob-dir working-dir. The blob file name and the blob
   id stored in the bootstrap will both be the digest of the blob.

The nydus-test works by `nydus-image --blob xxx` without `--blob-id xxx`.
So the blob file name is xxx while the blob id stored in the bootstrap is
the digest of the blob. This breaks the assumption of nydusd in v2.2.

So rename the blob file from `xxx` to the digest of the blob.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-17 15:59:02 +08:00
Jiang Liu a13d854a75 nydus-test: remove unused readahead configuration
Support of readahead has been removed from nydus 2.2, so remove related
configuration. Also rename `readahead_xxx` to `prefetch_xxx`.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-17 15:31:29 +08:00
Jiang Liu e611630869
Merge pull request #1014 from jiangliu/v6-fix
Fix a bug in rafs v6 introduced during code refactor
2023-01-17 15:28:37 +08:00
Jiang Liu 834d20c4dd rafs: refine error handling for v6
Improve the way to handle error cases for find_target_block().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-17 14:54:33 +08:00
Jiang Liu 38db84918a rafs: fix a bug in rafs v6 introduced during refactor
Fix a bug in rafs v6 introduced during refactoring, which got the wrong offset
for directory inodes with xattrs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-17 14:54:32 +08:00
Jiang Liu a9f71420f5
Merge pull request #1012 from changweige/enrich-nydusctl
nydusctl: print fs read errors and total read data
2023-01-17 14:53:09 +08:00
Jiang Liu 36359f8c7d
Merge pull request #1013 from bergwolf/1010
utils: let flate2 use libz instead of libz-ng for ppc64le
2023-01-17 14:05:08 +08:00
Jiang Liu a2d0d63223 utils: let flate2 use libz instead of libz-ng for ppc64le
Let flate2 use libz instead of libz-ng for ppc64le; libz-ng fails to
compile on ppc64le platforms.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-01-17 04:05:44 +00:00
Changwei Ge 9e06dc90fd nydusctl: print fs read errors and total read data
It helps to analyze system performance.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2023-01-17 11:20:30 +08:00
Peng Tao 94430ec1f1
Merge pull request #1009 from dragonflyoss/feature/CODEOWNERS
chore: add CODEOWNERS to .github
2023-01-16 17:06:16 +08:00
Peng Tao dd54cb1abf
Merge pull request #1008 from adamqqqplay/add-readme-badge
docs: add some badges in README.md
2023-01-16 15:49:26 +08:00
Gaius 7298ceeb0a
chore: add CODEOWNERS to .github
Signed-off-by: Gaius <gaius.qi@gmail.com>
2023-01-16 15:40:38 +08:00
Qinqi Qu c8006d378c docs: add some badges in README.md
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-01-16 14:39:55 +08:00
imeoer 506644930b
Merge pull request #1004 from jiangliu/v5-digest
storage: do not generate digest mismatch messages for old images
2023-01-16 09:58:51 +08:00
Liu Bo ec5b9d2ef4
Merge pull request #1005 from jiangliu/ppc64le
utils: tweak Cargo.toml for powerpc64le again
2023-01-14 23:47:13 -08:00
Jiang Liu 82be44ebe5 utils: tweak Cargo.toml for powerpc64le again
Tweak Cargo.toml for powerpc64le again.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-15 10:19:47 +08:00
Jiang Liu 6d571c7ad9 storage: do not generate digest mismatch messages for old images
For old images handled with DigestedChunkMap, the chunk ready state is
not persisted, so nydusd tries to read data from the cache file and validate
chunk digest values. So do not generate digest mismatch error messages
when trying to load chunks from cache files.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-14 21:43:23 +08:00
imeoer 07d2c7adaf
Merge pull request #1003 from jiangliu/prefetch-warn
storage: add BlobFeature::HAS_TAR_HEADER to calculate
2023-01-14 14:39:15 +08:00
Jiang Liu 7e0f9010ae storage: add BlobFeature::HAS_TAR_HEADER to calculate
Add BlobFeature::HAS_TAR_HEADER to calculate the compressed blob size for
merged bootstraps. When merging data blobs with inlined bootstrap,
the BlobFeature::INLINED_FS_META flag will get lost, so we need a
new flag to indicate there's a tar header in these data blobs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-14 12:00:05 +08:00
Jiang Liu 3805773832
Merge pull request #1002 from jiangliu/v5-size
storage: correctly calculate compressed blob size for RAFS v5
2023-01-14 11:58:42 +08:00
Jiang Liu dfa5d6c87f storage: correctly calculate compressed blob size for RAFS v5
Correctly calculate the compressed blob size for RAFS v5 blobs with
BlobFeatures::INLINED_FS_META.

And disable `--features blob-toc` for RAFS v5.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-14 00:55:33 +08:00
imeoer caea96f6ff
Merge pull request #1001 from jiangliu/fuse-backend-0.10.1
dep: update fuse-backend to fix a bug in lookup()
2023-01-13 15:56:54 +08:00
imeoer 8ba8c41619
Merge pull request #1000 from adamqqqplay/upgrade-rust-version-2.2
rustup: upgrade toolchain to 1.66.1 and fix cargo clippy warnings
2023-01-13 15:33:02 +08:00
Jiang Liu 0fedd00947 dep: update fuse-backend to fix a bug in lookup()
Update fuse-backend to fix a bug in lookup().

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-13 15:09:33 +08:00
Qinqi Qu 4e2199c63e rustup: upgrade toolchain to 1.66.1 and fix cargo clippy warnings
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-01-13 10:56:09 +08:00
imeoer 0fafe1409f
Merge pull request #982 from jiangliu/fscache
Refine nydusd/fscache implementation
2023-01-12 18:36:58 +08:00
imeoer f1ce7d55d7
Merge pull request #999 from jiangliu/ppc64
utils: use `powerpc64` instead of `ppc64le`
2023-01-12 10:09:05 +08:00
Jiang Liu 7722a1d571 utils: use `powerpc64` instead of `ppc64le`
Use `powerpc64` instead of `ppc64le`.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-11 22:42:48 +08:00
imeoer 344fa19f90
Merge pull request #997 from jiangliu/ppc64le
utils: fix a compilation error for libz-ng-sys on ppc64le
2023-01-09 22:59:36 +08:00
Jiang Liu 7014e9a5bc
Merge pull request #987 from Desiki-high/master
read prefetch list(runtime) from file
2023-01-09 22:50:23 +08:00
Jiang Liu 0b77ff098e utils: fix a compilation error for libz-ng-sys on ppc64le
libz-ng-sys fails to compile on ppc64le, so fall back to libz-ng.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-09 22:05:18 +08:00
Jiang Liu fdc648b891 nydus: move fs_service from nydusd into nydus crate
Move fs_service from nydusd into nydus crate, so it could be reused
by other clients.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-09 21:47:40 +08:00
Jiang Liu 3782207f79 nydusd: fix bugs in handle_open_bootstrap()
There are two bugs in handle_open_bootstrap():
1) It sleeps for 10ms to ensure the copen reply message has been sent to
the kernel. That is unreliable.
2) It calls std::thread::sleep(2s) to sleep for 2s, which will block
the async worker. So use tokio::time::sleep() instead.
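
The distinction in question, as a generic illustration (not the nydus code): `std::thread::sleep` parks the whole worker thread, while `tokio::time::sleep` only suspends the current task:

```rust
use std::time::Duration;

async fn wait_without_blocking_the_runtime() {
    // Suspends only this task; other async work on the same worker thread keeps running.
    tokio::time::sleep(Duration::from_secs(2)).await;
}

fn blocks_the_whole_worker() {
    // Parks the OS thread; every task scheduled on it stalls for the full 2 seconds.
    std::thread::sleep(Duration::from_secs(2));
}

#[tokio::main]
async fn main() {
    wait_without_blocking_the_runtime().await;
    blocks_the_whole_worker();
}
```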

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-09 21:47:39 +08:00
Jiang Liu c2e3849e3e nydusd: ensure fscache prefetch size is bigger than 256K
Ensure the fscache prefetch size is bigger than 256K, otherwise it may
consume too much memory to issue prefetch requests.
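
In other words, small prefetch requests are rounded up to a floor; a minimal sketch, assuming an illustrative constant and function name:

```rust
/// Illustrative only: clamp a prefetch request to at least 256 KiB so nydusd
/// does not end up issuing a flood of tiny requests.
fn prefetch_request_size(requested: u64) -> u64 {
    const MIN_PREFETCH_SIZE: u64 = 256 * 1024;
    requested.max(MIN_PREFETCH_SIZE)
}

fn main() {
    assert_eq!(prefetch_request_size(4096), 256 * 1024);
    assert_eq!(prefetch_request_size(1 << 20), 1 << 20);
}
```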

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-09 21:47:34 +08:00
Jiang Liu 41d3651374 nydusd: fix a race window in fscache fill_bootstrap_cache()
Two rawfds are passed to fill_bootstrap_cache(), but there's no
mechanism to ensure those rawfds are valid. So it's a race window.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-09 20:21:06 +08:00
Jiang Liu 7196fd613a nydusd: syntax only changes to fscache
Syntax only changes to fscache, no functional chnages.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-09 20:17:30 +08:00
Jiang Liu 3c43c2bcb5
Merge pull request #996 from jiangliu/chunk-size
nydus-image: refine image to enable page cache sharing for EROFS
2023-01-09 19:34:16 +08:00
imeoer d0b8620fac
Merge pull request #995 from jiangliu/bootstrap
Enhance the way to handle `--bootstrap` option for nydus-image
2023-01-09 10:37:58 +08:00
Jiang Liu 162abb2789 nydus-image: refine image to enable page cache sharing for EROFS
If all chunks of an inode are contiguous in the uncompressed cache file,
set the inode's chunk size to `inode.size().next_power_of_two()`, so EROFS
can optimize page cache sharing in such a case.
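
The rounding used here is the standard one; a small sketch:

```rust
fn main() {
    // If all chunks of an inode are contiguous in the uncompressed cache file,
    // publishing the chunk size as the next power of two >= the file size lets
    // EROFS treat the whole file as one extent for page cache sharing.
    let inode_size: u64 = 5 * 1024 * 1024 + 123;
    let chunk_size = inode_size.next_power_of_two();
    assert_eq!(chunk_size, 8 * 1024 * 1024);
    println!("chunk size: 0x{:x}", chunk_size);
}
```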

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-08 23:40:58 +08:00
Jiang Liu c64794180c nydus-image: refine option --bootstrap related logic
Refine option --bootstrap related logic.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-08 11:55:18 +08:00
Jiang Liu 5f24024166 nydus-image: disable validation when creating images
It's too hard to maintain the validation functionality when creating
images. On the other hand, users may execute `nydus-image check` to
validate images. So disable image validation when creating images.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-08 11:29:43 +08:00
Jiang Liu 9d45957213 nydus-image: minor improvements nydus-image commandline options
Minor improvements to the nydus-image commandline options; also enhance the
check for the digester.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-08 11:28:21 +08:00
Jiang Liu 68890a67d0
Merge pull request #994 from jiangliu/xattr
nydus-image: fix a bug in xattr handling for tarballs
2023-01-07 13:05:33 +08:00
Jiang Liu 3f93a788a8 nydus-image: fix a bug in xattr handling for tarballs
There's a bug in xattr handling for tarballs which causes xattrs to be
lost.

Fixes: https://github.com/dragonflyoss/image-service/issues/990

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-07 11:59:21 +08:00
Jiang Liu 8fe814665c
Merge pull request #993 from TrellixVulnTeam/master
CVE-2007-4559 Patch
2023-01-07 10:29:38 +08:00
dingyadong ab7a2e61fa
Merge branch 'dragonflyoss:master' into master 2023-01-07 10:02:02 +08:00
TrellixVulnTeam b042c967c4 Adding tarfile member sanitization to extractall() 2023-01-06 20:37:22 +00:00
imeoer 1620a7171e
Merge pull request #992 from jiangliu/separate
storage: introduce BlobFeatures::SEPARATE to support tar-ref
2023-01-06 23:59:03 +08:00
Jiang Liu 98b9fd0885 storage: introduce BlobFeatures::SEPARATE to support tar-ref
Introduce BlobFeatures::SEPARATE to support tar-ref, in addition to
BlobFeatures::ZRAN.
SEPARATE means that blob.data is separated from blob.meta.
ZRAN means that it contains ZRAN context information and should be
decoded by ZRAN.

Fixes: https://github.com/dragonflyoss/image-service/issues/990

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-06 22:11:54 +08:00
Jiang Liu 4702a7b75b
Merge pull request #991 from sctb512/avoid-too-much-info
avoid too much information log
2023-01-06 21:16:20 +08:00
Desiki-high 4040885e6f trim trailing whitespace and use writelines replace write
Signed-off-by: Desiki-high <2448906309@qq.com>
2023-01-06 20:50:13 +08:00
dingyadong e117b523c5
Merge branch 'dragonflyoss:master' into master 2023-01-06 19:48:58 +08:00
imeoer cdba2a1156
Merge pull request #989 from bergwolf/github/fix-release-build
cross: install cmake
2023-01-06 10:25:49 +08:00
Desiki-high b299dfa1a5 modify the description of prefetch-files
Signed-off-by: Desiki-high <2448906309@qq.com>
2023-01-05 18:56:00 +08:00
dingyadong 4b70c24521
Merge branch 'dragonflyoss:master' into master 2023-01-05 18:10:17 +08:00
Desiki-high 5ef2fcb724 update nydus-test for prefetch-files
Signed-off-by: Desiki-high <2448906309@qq.com>
2023-01-05 16:31:05 +00:00
Desiki-high 480ba09b90 modify the arg prefetch-files help msg
Signed-off-by: Desiki-high <2448906309@qq.com>
2023-01-05 14:27:27 +00:00
Desiki-high aca0cbc040 update prefetch_files
Signed-off-by: Desiki-high <2448906309@qq.com>
2023-01-05 14:25:04 +00:00
Peng Tao 69eb05d917 cross: install cmake
Building nydusd failed on linux-riscv64 with:

  thread 'main' panicked at '
  failed to execute command: No such file or directory (os error 2)
  is `cmake` not installed?

We need to install cmake to allow cross-rs to build.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2023-01-05 09:50:57 +00:00
Peng Tao 31f64bf712
Merge pull request #988 from adamqqqplay/update-nydusd-setup-doc
docs: update nydus path to /usr/bin for better compatibility
2023-01-05 17:22:25 +08:00
Qinqi Qu 53f515a3b6 docs: update nydus path to /usr/bin for better compatibility
Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2023-01-05 10:28:04 +08:00
Desiki-high 5aea151177 read prefetch list(runtime) from file
Signed-off-by: Desiki-high <2448906309@qq.com>
2023-01-04 17:01:30 +00:00
Bin Tang c9a0e9b19d avoid too much information log
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2023-01-03 20:17:49 +08:00
imeoer f3594cf637
Merge pull request #985 from jiangliu/prefetch-warn
Fix warning message related to data prefetch
2023-01-03 19:36:35 +08:00
imeoer 7680fffd6b
Merge pull request #986 from imeoer/disable-convert-ci-cache
ci: disable build cache for top images test
2023-01-03 18:08:42 +08:00
Yan Song bfb7638dc2 ci: disable build cache for top images test
Prepare for refactoring build cache for nydusify, and make
the test pass first:

https://github.com/dragonflyoss/image-service/actions/runs/3825905057/jobs/6509249708

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2023-01-03 09:26:45 +00:00
Jiang Liu 06e6c8c3cc storage: fix warnings during blob prefetch
There are different definitions of blob.compressed_size():
- the size of all data chunks
- the size of the data blob, including data chunks, digests, fs meta and toc

So introduce blob.compressed_data_size() to get the size of all data
chunks, and keep blob.compressed_size() for the blob size.
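
A sketch of the resulting split (struct and field names are illustrative, not the real storage API):

```rust
/// Illustrative only: the two sizes the commit distinguishes.
struct BlobInfoSketch {
    chunk_data_size: u64, // total size of all compressed data chunks
    meta_size: u64,       // digests, fs meta, ToC, tar headers, ...
}

impl BlobInfoSketch {
    /// Size of the data chunks only; what prefetch should budget against.
    fn compressed_data_size(&self) -> u64 {
        self.chunk_data_size
    }

    /// Size of the whole blob file, including metadata appended after the chunks.
    fn compressed_size(&self) -> u64 {
        self.chunk_data_size + self.meta_size
    }
}

fn main() {
    let b = BlobInfoSketch { chunk_data_size: 1 << 20, meta_size: 4096 };
    assert!(b.compressed_size() > b.compressed_data_size());
}
```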

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-03 16:59:55 +08:00
Jiang Liu b51f5a893c storage: introduce BlobFeature::HAS_TOC
Introduce BlobFeature::HAS_TOC to indicate that a blob has a table of contents.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-01-03 16:59:39 +08:00
imeoer d00c6a09c3
Merge pull request #980 from jiangliu/blob-meta
storage: update documentation for blob meta to reflect recent changes
2023-01-03 16:33:03 +08:00
imeoer f9f514f466
Merge pull request #984 from jiangliu/fscache-bug
fscache: fix a bug introduced by commit eac959f
2023-01-03 16:31:53 +08:00
Jiang Liu 6c08d19603 fscache: fix a bug introduced by commit eac959f
Fix a bug introduced by commit eac959f, which will cause data corruption
under race conditions.

Fixes: https://github.com/dragonflyoss/image-service/issues/969

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-30 23:09:52 +08:00
Jiang Liu 1602a74e72
Merge pull request #981 from imeoer/fix-zran-merge
builder: fix zran bootstrap merge
2022-12-29 20:40:10 +08:00
Yan Song db51efe915 builder: fix zran bootstrap merge
Fix the panic when merging bootstraps for zran images:

```
level=info msg="no chunk information object for blob 0 chunk 0" module=builder
level=info msg="at rafs/src/metadata/direct_v6.rs:1163" module=builder
level=error msg="fail to run nydus-image merge..."
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-29 11:27:44 +00:00
Jiang Liu fcaeb4d582 storage: update documentation for blob meta to reflect recent changes
Update documentation for blob meta to reflect recent changes.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-29 16:03:06 +08:00
imeoer ba3449873d
Merge pull request #975 from jiangliu/inlined-meta
Fully support RAFS blobs with inlined bootstrap
2022-12-29 14:04:30 +08:00
Jiang Liu 79bc029e9e storage: rename some blob feature names
Rename some blob feature names.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-29 11:14:35 +08:00
Jiang Liu 0b994601e1
Merge pull request #977 from imeoer/refactor-nydusify-3
nydusify: recover more options
2022-12-29 10:57:46 +08:00
Yan Song e0cfa20e64 nydus-image: ensure backward compatibility with previous converter
Ensure backward compatibility with previous converter by:

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-28 20:33:06 +08:00
Jiang Liu d18e09a343 storage: refine error messages
Refine error messages.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-28 17:56:30 +08:00
Yan Song da6d29f022 nydusify: recover more options
After nydusify was refactored, some option functionality was lost. Since
the acceld converter package is now supported, we add those options back:

- Recover `--backend-force-push` option;
- Recover `--oci` (alias `--docker-v2-format`) option;
- Recover `--platform` option, and support multiple platforms;
- Recover `--fs-align-chunk` option;
- Recover `--fs-chunk-size` (alias `--chunk-size`) option;
- Recover `--prefetch-patterns` option;

- Add `--oci-ref` option;
- Add `--all-platforms` option;

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-28 09:56:18 +00:00
Jiang Liu c4090c9fb8 nydus: generate blob id for v5 images with inlined meta
Generate blob id for v5 images with inlined meta, instead of using the
special blob id "xxxxx...".

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-27 10:11:54 +08:00
Jiang Liu ebd132d919 nydus: support images generated with `blob-inline-meta`
Support images generated with `blob-inline-meta` but without
`--features blob-toc`.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu bfd3b27434 nydus: support meta blob file name with multiple dots
Support meta blob file name with multiple dots.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu 4ef37db5f2 error: refine error message
Refine error message.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu 3a43c5cf74 nydus-image: fix bug in chunk dictionary related implementation
Fix bug in chunk dictionary related implementation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu d740995558 nydusd: directly mount data blobs with inlined meta
Enhance nydusd to directly mount data blobs with inlined meta.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu 774e932f65 nydus-image: enhance merge subcommand
Enhance merge subcommand of nydus-image.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu bba72d08d6 rafs: enhance load_from_metadata() to support digest stored in data blobs
Enhance load_from_metadata() to support digest stored in data blob.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu deb933ac18 nydus-image: fix bugs in handling of `blob-inline-meta` flag
Fix bugs in handling of the `blob-inline-meta` flag:
1) set the INLINED_META flag on blobs with inlined meta;
2) avoid file leakage when downloading ToC content;
3) provide a method to download the inlined meta.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu bf47e3d429 nydus-image: enhance inspect to support new features
Enhance inspect to support new features and better support of RAFS v6.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu 096a220f78 rafs: support chunk digests stored in RAFS data blobs
Support chunk digests stored in RAFS data blobs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu 6b406e0e1a storage: implement BlobDevice::get_chunk_info()
Implement BlobDevice::get_chunk_info(), and also implement
BlobV5ChunkInfo for BlobMetaChunk.
So the storage layer has full chunk info implementation for RAFS v6.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:31 +08:00
Jiang Liu 2677927c64 storage: support chunk digests inlined in data blobs
If the data blob has inlined chunk digests, make use of them.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:30 +08:00
Jiang Liu 1a808aba16 storage: add more methods to download toc header and content
Add more methods to download toc header and content.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:30 +08:00
Jiang Liu 63a1e25394 storage: provide method to extract bootstrap from data blob
Provide method to extract bootstrap embedded in data blobs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-26 23:17:30 +08:00
Jiang Liu f4dd9e1c34
Merge pull request #970 from imeoer/refactor-nydusify-2
nydusify: refactor with converter package
2022-12-26 11:13:17 +08:00
Yan Song dc83f582f5 nydusify: fix tests broken by code refactoring
And remove useless `registry` backend type in cli options.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-23 03:12:18 +00:00
Yan Song 7dbb42829c nydusify: remove containerd client dependency
Remove the containerd client and implement the content.Provider interface
to resolve, pull and push images on its own.

This patch allows nydusify to do the conversion without a containerd
daemon service.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-23 03:12:18 +00:00
Yan Song bd2660874e nydusify: support chunk dict (refactor)
Recover the chunk dict feature broken by the code refactoring.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-23 03:12:18 +00:00
Yan Song 48634b65eb nydusify: refactor with converter package
Refactor the converter workflow using the converter package in
the acceld project:

https://github.com/goharbor/acceleration-service/pull/84

This allows nydusify to reuse acceld/snapshotter code and
to convert layers concurrently, as well as to support
zran conversion later.

This patch makes nydusify rely on the containerd daemon
service; we will remove this restriction in the next commit.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-23 03:12:16 +00:00
imeoer bbb42ecf9c
Merge pull request #897 from TrellixVulnTeam/master
CVE-2007-4559 Patch
2022-12-20 10:22:41 +08:00
Jiang Liu 7e0d698c6d
Merge pull request #968 from imeoer/fix-convert-ci
ci: fix converting top images
2022-12-19 21:52:13 +08:00
Yan Song 83f60ff872 ci: fix converting top images
Fix the panic when converting images in [CI](https://github.com/dragonflyoss/image-service/actions/runs/3727276947/jobs/6321277363):

```
Caused by:
    inconsistent compressor with the lower layer, current Zstd, lower: Lz4Block.
```

The previous build cache image uses Lz4Block compression, but the new
nydus-image version uses Zstd by default, resulting in incompatibility.

Fix this by using the `--build-cache-version` option to invalidate the build cache image.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-19 10:14:33 +00:00
imeoer cce704ae33
Merge pull request #966 from imeoer/refactor-nydusify-1
nydusify: some minor fixups
2022-12-19 17:51:38 +08:00
Yan Song 559e277a9f nydusify: fix a http fallback case for build cache
When the `--source`/`--target` options specified by the user
target an https registry, but the `--build-cache` option
targets an http registry, nydusify can't fall back to
plain http for the build cache registry, causing a pull/push
failure for the build cache image.

This patch fixes the failure case.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-19 07:52:40 +00:00
Yan Song 5a50d66cb8 nydusify: fix panic if only --target be specified
nydusify panics when running `nydusify check --target localhost:5000/library/test:nydus`:

```
INFO[2022-12-19T07:24:02Z] Parsing image localhost:5000/library/test:nydus
INFO[2022-12-19T07:24:02Z] trying next host                              error="failed to do request: Head \"https://localhost:5000/v2/library/test/manifests/nydus\": http: server gave HTTP response to HTTPS client" host="localhost:5000"
INFO[2022-12-19T07:24:02Z] Parsing image localhost:5000/library/test:nydus
INFO[2022-12-19T07:24:02Z] Dumping OCI and Nydus manifests to ./output
INFO[2022-12-19T07:24:02Z] Pulling Nydus bootstrap to output/nydus_bootstrap
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xb363bd]

goroutine 1 [running]:
github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker.(*Checker).check(0xc000222500, {0x14544f8, 0xc0000a8000})
	/nydus-rs/contrib/nydusify/pkg/checker/checker.go:160 +0xedd
github.com/dragonflyoss/image-service/contrib/nydusify/pkg/checker.(*Checker).Check(0xc000222500, {0x14544f8, 0xc0000a8000})
	/nydus-rs/contrib/nydusify/pkg/checker/checker.go:88 +0xee
main.main.func2(0xc0000bbe40)
	/nydus-rs/contrib/nydusify/cmd/nydusify.go:608 +0x5b1
github.com/urfave/cli/v2.(*Command).Run(0xc000540fc0, 0xc0000bba80)
	/go/pkg/mod/github.com/urfave/cli/v2@v2.3.0/command.go:163 +0x8b8
github.com/urfave/cli/v2.(*App).RunContext(0xc0004e91e0, {0x14544f8, 0xc0000a8000}, {0xc0000ba040, 0x4, 0x4})
	/go/pkg/mod/github.com/urfave/cli/v2@v2.3.0/app.go:313 +0xb2a
github.com/urfave/cli/v2.(*App).Run(0xc0004e91e0, {0xc0000ba040, 0x4, 0x4})
	/go/pkg/mod/github.com/urfave/cli/v2@v2.3.0/app.go:224 +0x75
main.main()
	/nydus-rs/contrib/nydusify/cmd/nydusify.go:885 +0x8d5d
```

This patch checks whether the target is nil first.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-19 07:51:41 +00:00
Jiang Liu 8399d0a1ec
Merge pull request #960 from changweige/use-async-runtime-pool-download
storage: use async runtime to download blobs meta info
2022-12-15 19:05:48 +08:00
Jiang Liu e5f6de431c
Merge pull request #962 from jiangliu/gzip
nydus-image: do not expose gzip to commandline
2022-12-15 19:05:29 +08:00
Jiang Liu 41049746f9
Merge pull request #963 from imeoer/fix-fscache-zran
storage: minor fixups for zran in fscache
2022-12-15 18:53:12 +08:00
Yan Song c0317ed0b9 storage: minor fixups for zran in fscache
Fix the panic when running a zran image with the fscache driver:

```
thread '<unnamed>' panicked at 'index out of bounds: the len
   is 14 but the index is 14': storage/src/cache/cachedfile.rs:707
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-15 09:27:02 +00:00
Jiang Liu a5ac4f091f nydus-image: do not expose gzip to commandline
The gzip compressor is used for Zran and stargz images and is not
configurable by users, so hide it from the command-line options.

Fixes: https://github.com/dragonflyoss/image-service/issues/951

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-15 17:07:42 +08:00
Jiang Liu 196de063c9
Merge pull request #953 from jiangliu/misc
Misc code cleanup
2022-12-15 13:45:31 +08:00
Jiang Liu 6d457926c1 storage: only compile fscache related code for linux
Fscache service is only available on Linux, so only enable compilation
for Linux.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-15 11:49:54 +08:00
Jiang Liu fea4e09917 storage: use different suffix for compressed cache file
Currently the same file name is used for both compressed and
uncompressed cache files, which may cause abnormal behavior after
switching the cache file mode. So use different file names for compressed
and uncompressed cache files.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-15 11:49:45 +08:00
Jiang Liu 5cb667c60d nydus: move blob-cache from nydusd into nydus library
Move blob-cache from nydusd into the nydus library, so it can be reused later.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-15 11:49:44 +08:00
Jiang Liu 8f9d544fa7 nydus: refine doc and add unit tests
Refine doc and add unit tests for the Nydus library crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-15 11:49:43 +08:00
Jiang Liu de9b89a9cc nydus-image: alias `blob-toc` to `blob_toc`
Alias `blob-toc` to `blob_toc` to improve usability.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-15 11:49:40 +08:00
Jiang Liu 8640c45573 storage: move toc related code from rafs into storage
The ToC table is used to organize blob data, so it belongs to the
storage subsystem.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-15 11:48:41 +08:00
Jiang Liu d1c1f7ebad
Merge pull request #959 from mofishzz/fix_v6_lookup
rafs: fix overflow panic of rafs v6 lookup
2022-12-15 11:13:52 +08:00
Jiang Liu e101a431f4
Merge pull request #958 from jiangliu/multi-algo
rafs: support blobs with different compression/digest algorithms
2022-12-14 18:59:05 +08:00
Changwei Ge 38c31640fa storage: use async runtime to download blob meta info
For an image with 100 layers and 1000 nodes, 100 thousand requests are
poured to the registry and object storage system. The pressure is too
high. So use the existing tokio runtime with limited threads to download
those blob metas.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-14 18:27:37 +08:00
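A minimal sketch of the idea above, assuming the tokio crate: reuse one shared runtime with a bounded worker pool and cap in-flight downloads with a semaphore. The `download_blob_meta` function, the thread count and the concurrency limit are illustrative assumptions, not the nydus implementation.

```
use std::sync::Arc;
use tokio::sync::Semaphore;

// Hypothetical placeholder for the real backend request.
async fn download_blob_meta(blob_id: String) {
    println!("downloading blob meta for {blob_id}");
}

fn main() {
    // One shared runtime with a limited number of worker threads.
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4)
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        let permits = Arc::new(Semaphore::new(8)); // at most 8 downloads in flight
        let mut tasks = Vec::new();
        for i in 0..100 {
            let permit = permits.clone().acquire_owned().await.unwrap();
            tasks.push(tokio::spawn(async move {
                download_blob_meta(format!("blob-{i}")).await;
                drop(permit); // release the slot when the download finishes
            }));
        }
        for t in tasks {
            t.await.unwrap();
        }
    });
}
```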
Jiang Liu a378c9428c rafs: support blobs with different compression/digest algorithms
Currently there's a constraint that all data blobs referenced by a RAFS
filesystem must use the same compression and digest algorithms.
The constraint is unnecessary, so relax it.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-14 17:27:59 +08:00
Huang Jianan d56b060674 rafs: fix overflow panic of rafs v6 lookup
The directory entries for v6 are stored in the following way:
...name1name2
The first subdirectory we get here is ".".

When looking for a file whose ASCII value is less than ".", such as
"*", we need to move forward to block index -1. This caused the
usize index "pivot" to attempt to subtract with overflow and then
panic.

Fixes: 50ca1a1 ("rafs: optimize entry search in rafs v6")
Signed-off-by: Huang Jianan <jnhuang@linux.alibaba.com>
2022-12-14 16:09:27 +08:00
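For illustration only, a tiny Rust sketch of the underflow described above and how `checked_sub` avoids it; this is not the rafs v6 lookup code itself.

```
// `pivot - 1` panics with "attempt to subtract with overflow" in debug
// builds when pivot == 0; checked_sub returns None instead.
fn step_back(pivot: usize) -> Option<usize> {
    pivot.checked_sub(1)
}

fn main() {
    assert_eq!(step_back(3), Some(2));
    assert_eq!(step_back(0), None); // the "*" < "." case described above
}
```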
Jiang Liu 987b2b0e06
Merge pull request #957 from uran0sH/insecure-registry
nydusify & nydusd: give a briefer suggestion for insecure cert
2022-12-14 09:30:06 +08:00
Wenyu Huang 7d8b4839ff nydusify & nydusd: give a briefer suggestion for insecure cert
When the user encounters the error x509: certificate signed by
unknown authority:
* try to enable "skip_verify: true" option for nydusd;
* try to enable "--source-insecure" / "--target-insecure" option
for nydusify convert/check;

Signed-off-by: Wenyu Huang <huangwenyuu@outlook.com>
2022-12-13 12:27:16 -05:00
imeoer 7f26710c37
Merge pull request #955 from hsiangkao/path_cleanup
treewide: tidy up several paths
2022-12-13 21:33:42 +08:00
Gao Xiang 03b47c396c treewide: tidy up several paths
1. --config-path "/etc/nydus/nydusd-config.fusedev.json" or
                 "/etc/nydus/nydusd-config.fscache.json"
2. --root "/var/lib/containerd-nydus"
3. --address "/run/containerd-nydus/containerd-nydus-grpc.sock"

Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2022-12-13 13:13:39 +00:00
imeoer 758aeebe34
Merge pull request #928 from imeoer/zran-doc
docs: add zran guide
2022-12-13 14:57:34 +08:00
Yan Song b465ea001f docs: add zran guide
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-13 03:43:43 +00:00
Jiang Liu 62f18f0bd9
Merge pull request #954 from imeoer/fix-tarball-hardlink
builder: fix hardlink handle for tarball
2022-12-12 21:47:04 +08:00
imeoer 044bdbe8a4
Merge pull request #946 from imeoer/improve-env-setup-doc
docs: improve env setup doc
2022-12-12 18:29:50 +08:00
Yan Song 15365c2e74 builder: fix hardlink handle for tarball
The node's hardlink pair should be found using the index of `nodes`.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-12 09:05:29 +00:00
Yan Song c8d847a67c docs: improve env setup doc
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-12 03:42:09 +00:00
imeoer 9d6332d4a1
Merge pull request #950 from jiangliu/tar-ref
Enable generating ZRan image from tar files
2022-12-12 11:15:20 +08:00
Jiang Liu 1bf7c50d98 utils: move Read related wrappers into dedicate file
Move Read related wrappers into dedicate file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-12 09:01:55 +08:00
Peng Tao 91dd7cf76d
Merge pull request #942 from jiangliu/api-snapshottor
api: add unit test for configuration generated by snapshotter
2022-12-11 11:37:50 +08:00
Jiang Liu c0993ba7ba nydus-image: support conversion type `tar-ref`.
Support conversion type `tar-ref`, and auto-detect `targz` vs `tar`.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-10 17:00:20 +08:00
Jiang Liu c7f58a003f storage: fix bug related to rafs-blob-digest
Clarify usage of `blob_id` and `rafs_blob_digest`, and generate correct
values for them.

Also reject `--blob-id` commandline options for ZRan images.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-10 12:57:24 +08:00
imeoer 0a7a1abb91
Merge pull request #949 from bergwolf/github/restore-chunk-dict-test
nydus-test: restore chunk dict test
2022-12-10 00:33:19 +08:00
imeoer 16adf00936
Merge pull request #948 from bergwolf/github/oss-ci
nydus-test: enable test_upload_oss test
2022-12-10 00:33:12 +08:00
Peng Tao c8e6a8647b nydus-test: restore chunk dict test
This partly reverts #566 to get back the chunk dict test.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-12-09 15:56:39 +00:00
Peng Tao 5d6285ef05 nydus-test: enable test_upload_oss test
We don't have any blocker to testing it.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-12-09 15:55:14 +00:00
Jiang Liu 139a67c445
Merge pull request #947 from changweige/trim-nydus-image-param
nydus-test: drop a nydus-image flag backend-config
2022-12-09 20:52:17 +08:00
Changwei Ge 235014082e nydus-test: drop a nydus-image flag backend-config
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-09 19:43:58 +08:00
imeoer 2815bb1cf8
Merge pull request #945 from jiangliu/storage-gap
storage: remove incorrect assert
2022-12-09 18:24:35 +08:00
Jiang Liu 013b4e1efe
Merge pull request #944 from imeoer/fix-broken-link
docs: fix broken graphdriver link
2022-12-09 17:31:41 +08:00
Jiang Liu 6f7cf43439 storage: remove incorrect assert
Now the storage already supports fetching discrete chunks, so remove the
incorrect assert.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-09 17:27:42 +08:00
Yan Song 8852d8e08b docs: fix broken graphdriver link
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-09 08:13:07 +00:00
Jiang Liu c5e7ec6a88
Merge pull request #943 from changweige/disable-files-iostats-test
nydus-test: disable case of files iostats
2022-12-09 14:20:21 +08:00
Changwei Ge 34278ec4ed nydus-test: disable case of files iostats
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-09 13:52:57 +08:00
Jiang Liu afeeb9dcde
Merge pull request #941 from changweige/fix-graceful-exit
nydusd: register signal handler earlier
2022-12-09 10:50:51 +08:00
Jiang Liu dfe4f9bef1 api: add unit test for configuration generated by snapshotter
Add unit test for configuration generated by snapshotter.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-09 10:46:26 +08:00
Changwei Ge 8f97e9d446 nydusd: register signal handler earlier
Otherwise, it loses the window to gracefully exit
by unmounting FUSE.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-09 10:14:25 +08:00
Changwei Ge 74a3e7ddc4
Merge pull request #938 from jiangliu/prefetch
api: merge fs-prefetch and blob-prefetch configuration
2022-12-09 09:27:52 +08:00
Jiang Liu e04aaa4239 rafs: fix a bug in fs prefetch
There's a bug which skips data chunks at blob tail.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-08 22:04:59 +08:00
Jiang Liu 9849117af4 api: merge fs-prefetch and blob-prefetch configuration
The storage manager executes data prefetch according to blob prefetch
configuration, so merge fs-prefetch configuration into blob prefetch
configuration.
The blob configuration has higher priority.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-08 22:04:57 +08:00
Jiang Liu bc7c4bace2
Merge pull request #933 from jiangliu/zlib-ng
utils: replace libz-sys with libz-ng-sys
2022-12-08 21:01:34 +08:00
Jiang Liu 486b262e97
Merge pull request #927 from imeoer/migrate-docker-graphdriver
contrib: migrate out docker graph driver
2022-12-08 19:59:40 +08:00
Yan Song 5ce669dbde contrib: migrate out docker graph driver
Moved to https://github.com/nydusaccelerator/docker-nydus-graphdriver

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-08 09:57:18 +00:00
imeoer bdf4d871d4
Merge pull request #935 from changweige/backoff-retry
nydusctl: print each prefetch item to new line
2022-12-08 17:50:43 +08:00
Changwei Ge fa9cf3b9ab nydusctl: print each prefetch item to new line
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-08 16:48:16 +08:00
Jiang Liu 653d742757 utils: replace libz-sys with libz-ng-sys
zlib-ng may have better performance than zlib, and it's api compatible
with zlib.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-08 15:07:48 +08:00
Jiang Liu 273c1ef702
Merge pull request #894 from kevinXYin/fscache-gc-api
nydusd: make DELETE /api/v2/blobs API support cull fscache blob
2022-12-08 13:06:07 +08:00
Jiang Liu e64509c662
Merge pull request #921 from kevinXYin/fscache-chunkmap-restore
Fscache chunkmap restore
2022-12-08 12:57:16 +08:00
imeoer 4db3c2cbe0
Merge pull request #932 from jiangliu/rafs-toc
Provide interface to access ToC table
2022-12-08 11:40:56 +08:00
imeoer 454815beff
Merge pull request #931 from loheagn/nydusify-s3-backend-v2
nydusify: full support for s3 backend
2022-12-08 10:54:07 +08:00
Jiang Liu c4ff90f047 rafs: provide methods to access ToC table
Provide methods to access ToC table.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-08 10:52:23 +08:00
Xin Yin 6896dfc1c1
Merge branch 'master' into fscache-chunkmap-restore 2022-12-08 09:53:34 +08:00
Nan Li 3a598ad5d9 nydusify: full support for s3 backend
This patch adds full support for the s3 backend to all subcommands of nydusify and updates the relevant docs.

Signed-off-by: Nan Li <loheagn@icloud.com>
2022-12-07 22:55:56 +08:00
Jiang Liu cf0a525aec rafs: record size of RAFS ToC table
Record size of RAFS ToC table in RafsV6Blob, which helps to locate
the ToC table in the blob.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-07 22:27:58 +08:00
imeoer db86ae6e15
Merge pull request #926 from jiangliu/blob-toc
Refine code to generate image
2022-12-07 19:09:07 +08:00
Jiang Liu 15f2f8895f nydus-image: fix bug in handling `blob-id`
Currently only a 64-byte `blob-id` is supported; relax the constraint.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-07 18:29:32 +08:00
Jiang Liu 9aace9c373 nydus-image: refactor the code to generate image
Refactor the code to generate images:
1) make blob-toc independent of inline-bootstrap;
2) make blob-toc work for all builders.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-07 18:28:38 +08:00
imeoer 8f7f9f7fbb
Merge pull request #929 from adamqqqplay/fix-ioutil-api
nydusify: replaces the io/ioutil API which was deprecated in go 1.16
2022-12-07 16:36:03 +08:00
Qinqi Qu c58480a678 nydusify: replaces the io/ioutil API which was deprecated in go 1.16
Interfaces in the io/ioutil package may not be available in future Go versions;
use the new interfaces to avoid future build failures.

Signed-off-by: Qinqi Qu <quqinqi@linux.alibaba.com>
2022-12-07 15:48:49 +08:00
Xin Yin c9c9a23c2d nydusd: make DELETE /api/v2/blobs API support cull fscache blob
If only the blob_id parameter is set in DELETE /api/v2/blobs, nydusd will try to cull
the fscache blob. The snapshotter can then use this to delete fscache blob files.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2022-12-07 15:44:45 +08:00
Jiang Liu 1c71da885c
Merge pull request #923 from changweige/backoff-retry
Backoff retry
2022-12-07 15:31:02 +08:00
Jiang Liu e48a7b8cb3
Merge pull request #925 from loheagn/nydusd-s3-backend
storage: nydusd supports s3 backend
2022-12-07 09:22:55 +08:00
Nan Li b9295396af storage: nydusd supports using s3 as backend
As we have supported nydusify using s3 services as the storage backend to store blobs, this patch supports nydusd fetching blobs from s3 services.

Signed-off-by: Nan Li <loheagn@icloud.com>
2022-12-06 23:25:27 +08:00
Jiang Liu 2bd9e2f97d storage: rename SEPARATE_BLOB_META to INLINED_META
Rename SEPARATE_BLOB_META to INLINED_META.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-06 22:51:52 +08:00
Jiang Liu 2ca83b6e12 nydus-image: rename "--inline-bootstrap" to "--blob-inline"
"blob.meta" will always be inlined into the data blob, and later
"blob.digest" will be inlined into the data blob. So  rename
"--inline-bootstrap" to "--blob-inline-meta", to inline boostrap
into the data blob.

In next Nydus 3.0 release, by default all will be inlined into
the data blob.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-06 20:21:59 +08:00
Xin Yin f1a78d206a storage: restore chunkmap from blob file in fscache mode
In fscache mode, restore the chunkmap from the blob file via seek_hole when
creating a FileCacheEntry.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2022-12-05 20:34:44 +08:00
imeoer 634eb916dc
Merge pull request #896 from jiangliu/clib
clib: introduce nydus-clib framework
2022-12-05 19:36:42 +08:00
Jiang Liu e22571b75d
Merge pull request #924 from imeoer/fix-v6-blob-meta
storage: fix uncompressed blob meta read
2022-12-05 19:29:07 +08:00
imeoer 2ba96f0009
Merge pull request #920 from jiangliu/blob-size
storage: get blob size from blob info object
2022-12-05 18:56:44 +08:00
Yan Song e5dea71042 tests: remove stargz test from smoke
We need to adjust the stargz conversion and run implementation.
It should be similar to the zran image: we need to pack the compressed
blob meta and bootstrap into an inline blob.

Let's remove the stargz test case from smoke first, then add it to the
snapshotter test or the golang smoke implementation in the future.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-05 10:49:06 +00:00
Yan Song 96ecedca93 storage: fix uncompressed blob meta read
We need to clarify the following facts:

- Never fill 4k aligned padding for compressed/uncompressed blob meta stored in the backend;
- Always fill 4k aligned padding for uncompressed blob meta in the blob cache.

This patch fixes the broken behavior introduced in #858.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-05 09:43:08 +00:00
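The 4k alignment mentioned above boils down to rounding a size up to the next 4KiB boundary; a small illustrative helper, not the actual nydus function:

```
const BLOB_META_ALIGN: u64 = 4096;

/// Round `size` up to the next 4KiB boundary (BLOB_META_ALIGN is a power of two).
fn align_up_4k(size: u64) -> u64 {
    (size + BLOB_META_ALIGN - 1) & !(BLOB_META_ALIGN - 1)
}

fn main() {
    assert_eq!(align_up_4k(0), 0);
    assert_eq!(align_up_4k(1), 4096);
    assert_eq!(align_up_4k(4096), 4096);
    assert_eq!(align_up_4k(5000), 8192);
}
```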
Changwei Ge ea04ff8c4f nydusctl: refine nydusctl general information print messages
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-05 16:33:39 +08:00
Changwei Ge 89237756a1 storage: introduce BackOff delayer to mitigate backend pressure
Don't retry immediately since the registry can return a "too many requests"
error. We'd better slow down the retries.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-05 16:33:39 +08:00
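A hedged sketch of the back-off idea described above: capped exponential delays between retries so a throttling registry is not hammered. The struct, base delay and cap are illustrative assumptions, not the nydus implementation.

```
use std::thread::sleep;
use std::time::Duration;

struct BackOff {
    attempt: u32,
    base: Duration,
    max: Duration,
}

impl BackOff {
    fn new(base: Duration, max: Duration) -> Self {
        Self { attempt: 0, base, max }
    }

    /// Sleep for base * 2^attempt, capped at `max`.
    fn delay(&mut self) {
        let d = self.base.saturating_mul(1u32 << self.attempt.min(16));
        sleep(d.min(self.max));
        self.attempt += 1;
    }
}

fn main() {
    let mut backoff = BackOff::new(Duration::from_millis(100), Duration::from_secs(5));
    for _ in 0..3 {
        // retry the backend request here ...
        backoff.delay();
    }
}
```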
Xin Yin 1833904c63 storage: synchronously download blob meta in fscache mode
In fscache mode, FileCacheEntry is created in an asynchronous
thread, so there is no need to launch a new thread to download the meta data. This
is also needed by the following patch to support chunkmap restore.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2022-12-05 12:07:12 +08:00
Jiang Liu b2b15a5ab6 storage: get blob size from blob info object
Get blob size from blob info object instead of returning invalid value.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-05 11:02:58 +08:00
Jiang Liu fad29fa3bd
Merge pull request #919 from changweige/prefetch-metrics
Prefetch metrics
2022-12-05 10:59:23 +08:00
Jiang Liu ca7a02ffbe
Merge pull request #918 from changweige/v6-blob-prefetch
rafs: prefetch based on blob chunks rather than files
2022-12-05 10:58:23 +08:00
Jiang Liu f32c840bbb
Merge pull request #912 from jiangliu/cachefile
storage: simplify implementation of cachefile
2022-12-05 10:52:37 +08:00
Changwei Ge 83ec8b175c nydusctl: show prefetch latency and bandwidth
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-05 10:25:15 +08:00
Changwei Ge 0d14a58fe2 nydusctl: adapt renaming prefetch_mr_count to prefetch_requests_count
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-05 10:25:10 +08:00
Changwei Ge 7714dc4a87 metrics: rename prefetch_mr_count to prefetch_requests_count
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-05 10:24:58 +08:00
Jiang Liu 215c1970d4 storage: simplify implementation of cachefile
Simplify implementation of cachefile by:
1) remove compressor and digester and directly get that info from
   the blob info object.
2) rename is_compressed to is_raw_data, since the cache data may be
   uncompressed and encrypted in the future.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-05 10:24:41 +08:00
Changwei Ge b2f4b20022 metrics/cache: add more prefetch related metrics
Record average prefetch request latency.
Calculate average prefetch bandwidth.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-05 10:24:16 +08:00
Changwei Ge 3c37964eac metrics: add method set() to initialize the metric
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-05 10:23:17 +08:00
Changwei Ge bc2ce186e3 rafs: prefetch based on blob chunks rather than files
Perform different policies for the v5 and v6 formats, as rafs v6 blobs are capable
of downloading chunks and decompressing them all by themselves. For rafs v6, directly perform
chunk-based full prefetch to reduce requests to the container registry and
P2P cluster.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-12-05 10:16:58 +08:00
imeoer 9cabca6dcb
Merge pull request #917 from jiangliu/blob-features
rafs: unify meta-flags and blob-features
2022-12-05 10:11:04 +08:00
Jiang Liu b895c4300c rafs: unify meta-flags and blob-features
There are two sets of flags to control blob features: meta-flag and
blob-features. Unify them into one blob-features.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-03 10:22:42 +08:00
imeoer 618916ab2b
Merge pull request #915 from jiangliu/create-help
nydus-image: refine help message for create subcommand
2022-12-03 00:30:44 +08:00
imeoer 55bac2bc8f
Merge pull request #916 from jiangliu/blake3
nydus-image: use blake3 as default digester
2022-12-03 00:28:11 +08:00
Jiang Liu e099567b68 nydus-image: use blake3 as default digester
Blake3 is much faster than sha256, so use it as default.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-02 22:12:01 +08:00
Jiang Liu 540ae75eb2 nydus-image: refine help message for create subcommand
Refine help message for create subcommand.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-12-02 22:05:44 +08:00
Jiang Liu 7e79442a2f
Merge pull request #858 from imeoer/ref-inline-bootstrap
support ref image build and run
2022-12-02 19:02:35 +08:00
Yan Song 5e1808c6ff builder: fix some clippy warnings
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-02 09:55:16 +00:00
Yan Song daf6480424 builder: add rafs blob info on merge
Dump rafs blob digest, toc digest, and blob size into merged bootstrap,
so that we can use them to fetch blob meta data or validate toc digest.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-02 09:55:16 +00:00
Yan Song 06bdb9ece9 builder: complete toc implementation
Write raw blob, blob meta and bootstrap info to the toc entry list, so that
we can seek them and validate the digest at runtime.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-02 09:55:14 +00:00
Yan Song a032a5c88f builder: support --features option
Use the --features option of the nydus-image create command to ensure compatibility
when using a new converter with an old nydus-image binary.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-02 06:46:08 +00:00
Yan Song d070af8fc5 builder: support zran ref build
Create zran blob meta from targz fifo stream like:

```
nydus-image create --type targz-ref /path/to/fifo/layer.tar.gz --inline-bootstrap --blob-meta /path/to/blob.meta
```

It allows nydus to lazily load the OCI targz layer with the nydus zran ref image.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-12-02 06:46:06 +00:00
Jiang Liu a3a83850ea
Merge pull request #909 from imeoer/enhance-http-scheme
storage: enhance retryable registry http scheme
2022-11-30 19:15:26 +08:00
Yan Song 98d72657d4 storage: enhance retryable registry http scheme
Check the TLS connection error for the `wrong version number` keyword;
it's more reliable than the specific error code.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-11-30 10:20:39 +00:00
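A minimal sketch of that check, assuming only that the error text is inspected for the keyword; the function name and error type here are illustrative, not the nydus API.

```
// Decide whether to retry over plain HTTP based on the TLS error text.
fn should_fallback_to_http(err: &dyn std::error::Error) -> bool {
    err.to_string().contains("wrong version number")
}

fn main() {
    let err = std::io::Error::new(
        std::io::ErrorKind::InvalidData,
        "tls handshake failed: wrong version number",
    );
    assert!(should_fallback_to_http(&err));
}
```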
Jiang Liu b8f92c40b4
Merge pull request #910 from bergwolf/github/smoke
misc: add a volume in nydus-smoke test
2022-11-30 12:09:48 +08:00
Peng Tao 89dc0169ee misc: add a volume in nydus-smoke test
The smoke test requires a non-overlayfs-upperlayer /tmp directory,
otherwise it fails at mknod char device (0,0). Let's create a volume
for it.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-11-30 03:26:08 +00:00
Peng Tao b2495b0761
Merge pull request #908 from jiangliu/uncompressed-chunk
utils: fix a warning generated when decoding chunk data
2022-11-30 11:24:15 +08:00
imeoer 8a7a207cca
Merge pull request #893 from uran0sH/registry-hwy
nydusd: automatically retry registry http scheme
2022-11-30 10:46:22 +08:00
Jiang Liu 71e044a08b utils: fix a warning generated when decoding chunk data
The refactoring introduced a regression which tries to decompress an
uncompressed data chunk, thus generating a warning about decompression
failure. Fix it by checking whether the chunk data is compressed or not.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-30 10:41:57 +08:00
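Illustrative only: skip decompression when the chunk is stored uncompressed, which is the fix described above. The chunk type and decompress helper are hypothetical stand-ins for the real nydus code.

```
struct Chunk {
    compressed: bool,
    data: Vec<u8>,
}

fn decode(chunk: &Chunk) -> Vec<u8> {
    if chunk.compressed {
        decompress(&chunk.data)
    } else {
        // Uncompressed data is returned as-is instead of being fed to the
        // decompressor, which would only produce a spurious warning.
        chunk.data.clone()
    }
}

fn decompress(data: &[u8]) -> Vec<u8> {
    // placeholder for the real lz4/zstd decompression
    data.to_vec()
}

fn main() {
    let chunk = Chunk { compressed: false, data: vec![1, 2, 3] };
    assert_eq!(decode(&chunk), vec![1, 2, 3]);
}
```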
Jiang Liu 4f94e263f0
Merge pull request #906 from imeoer/builder-fix-merge
builder: fix panic on merge
2022-11-29 19:51:28 +08:00
Yan Song 046e201f99 builder: fix panic on merge
Fix panic on nydus-image merge command:

```
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value': src/bin/nydus-image/merge.rs:183
```

The fs_version should be set in previous code.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-11-29 11:28:01 +00:00
Jiang Liu 35df8727c9
Merge pull request #905 from bergwolf/github/as_any
nydusd: allow to get fusedev fsservice from fusedev daemon
2022-11-29 19:15:03 +08:00
Peng Tao a8dd445f08 nydusd: allow to get fusedev fsservice from fusedev daemon
To be used by the upgrade manager.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-11-29 09:52:43 +00:00
Wenyu Huang ddabb963b4 nydusd: automatically retry registry http scheme
Signed-off-by: Wenyu Huang <huangwenyuu@outlook.com>
2022-11-29 04:04:40 -05:00
imeoer 8a3a83f116
Merge pull request #904 from imeoer/builder-fix-chunk-v2
builder: use chunk v1 for native directory build
2022-11-29 15:51:08 +08:00
Yan Song e029fe463b builder: use chunk v1 for native directory build
Keep backward compatibility for old nydusd, otherwise the old nydusd
will throw a panic like:

```
ERROR [error/src/error.rs:21] Error:
        "blob metadata size is too big!"
        at storage/src/meta/mod.rs:344
        note: enable `RUST_BACKTRACE=1` env to display a backtrace
```

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-11-29 07:16:36 +00:00
TrellixVulnTeam 9f4194b05a Adding tarfile member sanitization to extractall() 2022-11-27 11:51:37 +00:00
Jiang Liu 8189b732f0 clib: introduce nydus-clib framework
Introduce the nydus-clib framework, which provides static/dynamic libraries
to integrate nydus functionality into C programs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-27 18:10:12 +08:00
Jiang Liu 57b8c42722
Merge pull request #860 from jiangliu/config
Introduce Nydus configuration file format version 2
2022-11-26 22:05:03 +08:00
imeoer ddb8819352
Merge pull request #895 from liubin/fix/remove-unused-var-in-action
actions: remove unused var
2022-11-26 21:36:44 +08:00
bin liu 41929d9d3e actions: remove unused var
In .github/workflows/release.yml the `cnt` variable is not used.

Signed-off-by: bin liu <bin@hyper.sh>
2022-11-25 22:39:35 +08:00
Jiang Liu 00b7f6340a api: introduce Toml based configuration file format version 2
Introduce Toml based configuration file format version 2.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-25 21:38:59 +08:00
Jiang Liu 65592030e2 api: move RAFS configuration struct into api
Move RAFS configuration struct into api.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-25 20:33:59 +08:00
Jiang Liu d4910715ac api: move configuration file related struct to dedicate file
Move configuration file related struct to dedicate file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-25 20:33:56 +08:00
Jiang Liu 535d02999b
Merge pull request #891 from bergwolf/github/rust-vmm
nydusd: update rust-vmm/fuse-backend-rs dependencies
2022-11-23 19:59:13 +08:00
Peng Tao a45fef8a6a nydusd: update rust-vmm/fuse-backend-rs dependencies
virtio-queue: 0.4.0 -> 0.6.1
vhost: 0.4.0 -> 0.5.0
vhost-user-backend: 0.5.1 -> 0.7.0
fuse-backend-rs: 0.9.0 -> 0.10.0

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-11-23 10:27:25 +00:00
Jiang Liu dbe6ee126f
Merge pull request #882 from changweige/fix-frequent-retry
Fix frequent retry
2022-11-23 15:17:00 +08:00
Jiang Liu b5d53e8e5d
Merge pull request #888 from kevinXYin/fix-fscache-shared-domain
nydusd: fix fscache prefetch threads can't exit in shared domain mode
2022-11-23 15:16:21 +08:00
Jiang Liu 21d759deec
Merge pull request #883 from changweige/use-unstale-sort
storage: use unstable sort to reduce memory allocation
2022-11-23 15:06:29 +08:00
Xin Yin 2b3efe3097 nydusd: fix fscache prefetch threads can't exit in shared domain mode
We check whether all blobs were created before stopping prefetch threads for
fscache, but FsCacheMgr is a pre-image struct; in shared domain mode
we don't know how many blobs will be created. So do not check the
number of blobs.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2022-11-23 11:25:50 +08:00
imeoer bcf636b47e
Merge pull request #879 from changweige/add-nydusify-compressor
nydusify: add CLI parameter --compressor to control nydus-image
2022-11-23 10:20:56 +08:00
imeoer 9cef4c26cd
Merge pull request #871 from loheagn/nydusify-s3-backend
nydusify: support using s3 as the storage backend
2022-11-23 10:19:18 +08:00
Changwei Ge fd77eb020a storage: use unstable sort to reduce memory allocation
We can tolerate unordered equal elements when sorting.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-23 10:14:27 +08:00
Changwei Ge 15bb4cea42 nydusify: add a parameter to change chunk size
Expose the parameter to end users.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-23 09:52:36 +08:00
Changwei Ge 771b55796c storage: fix too frequent retry when blob prefetch fails
tick() completes when the next instant is reached, which happens
after a very short time rather than 1 second.

In addition, limit the total number of retries.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-23 09:42:00 +08:00
Changwei Ge de4acc46a9 storage: change error type if meta file is not found
ENOENT would be more suggestive.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-23 09:40:24 +08:00
Changwei Ge 1578fcede2 nydusify: add CLI parameter --compressor to control nydus-image
It has been proven that zstd yields a smaller image size. We
should provide users an option to use zstd as the nydus image compressor
to reduce image size.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-23 09:38:02 +08:00
imeoer 76b2588200
Merge pull request #881 from changweige/change-default-fs-version
nydusify: change default fs version to 6
2022-11-22 15:36:36 +08:00
Changwei Ge 532bdc5eb5 nydusify: change default fs version to 6
Nydus-image has changed its default fs version,
so let nydusify adapt to it.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-22 15:03:22 +08:00
Nan Li 37b0eb837d nydusify: support using s3 as the storage backend
This patch supports using s3 as the storage backend other than registry and oss when converting images.

Through this patch, users can use the official s3 service or other s3 compatible services (for example minio, ceph s3 gateway, etc.) as the backend to store the blob layers of the converted nydus images when executing `nydusify convert`. The usage of the s3 backend is very similar to the oss backend. Users should use `--backend-type` and `--backend-config-file` (or `--backend-config`) to make nydusify choose the s3 backend.

For example:

```
nydusify convert --source <source_image> --target <target_image> --backend-type s3 --backend-config-file <path_to_config_json_file>
```

The schema of the s3 config json looks like the following, and only the `bucket_name` and the `region` are mandatory:

```
{
  "access_key_id": "access_key_id",
  "access_key_secret": "access_key_secret",
  "endpoint": "http://localhost:9000",
  "bucket_name": "bucket_name",
  "region": "us-east-1",
  "object_path-prefix": "path/to/registry/"
}
```

This patch also adds a test to verify the s3 backend works well. As the nydusd doesn't support s3 backend for now, the `nydusify check` is skipped in the end of the test.

Signed-off-by: Nan Li <loheagn@icloud.com>
2022-11-21 21:08:56 +08:00
imeoer 536e8e6b43
Merge pull request #876 from changweige/update-uhttp
cargo: update version of dbs-uhttp
2022-11-21 14:02:57 +08:00
Changwei Ge 77c93f5ded
Merge pull request #875 from changweige/fix-nydusify-typo
nydusify: fix a typo in its version message
2022-11-21 10:52:10 +08:00
Changwei Ge 3ba8139e83 cargo: update version of dbs-uhttp
dbs-uhttp has fixed the problem that the http client
gets an EBUSY error when fetching body data from the API server.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-21 10:13:19 +08:00
Changwei Ge 1fe603a2d6 nydusify: fix a typo in its version message
It should be Revision

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-21 10:09:45 +08:00
Jiang Liu 7f72cb1384
Merge pull request #864 from yawqi/bootstrap-consistent
nydus-image: prevent inconsistent RAFS version/compressor/digester of parent bootstrap/chunk dict
2022-11-19 18:23:32 +08:00
Qi Wang dd144eee7b nydus-image: prevent inconsistent RAFS version/compressor/digester of parent bootstrap/chunk dict
When specifying a parent bootstrap or chunk dict, we need to make sure its rafs version,
compressor and digester are consistent with the current configuration.

Signed-off-by: Qi Wang <mpiglet@outlook.com>
2022-11-19 14:24:05 +08:00
Jiang Liu 9bfcc48a8e
Merge pull request #869 from sctb512/fix-print-auth
storage: fix registry to avoid print bearer auth
2022-11-17 19:45:05 +08:00
Jiang Liu 69a5bd3afe
Merge pull request #866 from changweige/beautify-nydusify-version
nydusify: beautify version print message of nydusify
2022-11-17 19:44:31 +08:00
Bin Tang 6c5ec905e5 storage: fix registry to avoid print bearer auth
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-17 17:53:00 +08:00
Changwei Ge 7c63847c77 nydusify: beautify version print message of nydusify
Print more information on the git version, revision and golang version.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-17 15:41:53 +08:00
Jiang Liu 522c8aacef
Merge pull request #865 from jiangliu/nydus-image-check
nydus-image: keep compatibility with "nydus-image check --bootstrap"
2022-11-15 16:00:36 +08:00
Jiang Liu acc04f99be nydus-image: keep compatibility with "nydus-image check --bootstrap"
Tweak the "nydus-image check" subcommand to keep compatibility with
"nydus-image check --bootstrap".

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-15 15:29:55 +08:00
Jiang Liu 371ea8483f
Merge pull request #863 from loheagn/oss-fix-concurrency
nydusify: fix the concurrency error when push image blob layer to oss backend
2022-11-14 15:35:47 +08:00
Nan Li f0e491c8f9 nydusify: fix the concurrency error when use oss backend to upload image blob layer
Signed-off-by: Nan Li <loheagn@icloud.com>
2022-11-14 12:49:48 +08:00
imeoer d37b877356
Merge pull request #862 from jiangliu/storage-compressed
storage: treat cache files as uncompressed when the algorithm is None
2022-11-14 10:01:07 +08:00
imeoer 81fc9164c8
Merge pull request #861 from jiangliu/test-digest
test: fix invalid digest value of repeatable blob
2022-11-14 09:56:52 +08:00
Jiang Liu fa19172a30 storage: treat cache files as uncompressed when the algorithm is None
When the compression algorithm is None, treat the cache file as uncompressed
even if the user configures it as compressed.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-12 22:04:27 +08:00
Jiang Liu 8688d285f6 test: fix a bug in tests for compressed cache file
The json configuration for the compressed cache file is wrong, so fix it
and enable the test cases for compressed cache files.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-12 21:58:09 +08:00
Jiang Liu df31803d5b test: fix invalid digest value of repeatable test images
Fix invalid digest values of repeatable test images.

Fixes: https://github.com/dragonflyoss/image-service/issues/832

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-12 17:32:16 +08:00
imeoer 269bb4d0b9
Merge pull request #857 from hsiangkao/gx/buildkit_support
README.md: mark buildkit integration as completed
2022-11-10 11:29:54 +08:00
Gao Xiang fd7a998394 README.md: mark buildkit integration as completed
Since https://github.com/moby/buildkit/pull/2581 has been merged
upstream (although no new release yet.)

Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2022-11-10 03:25:22 +00:00
Jiang Liu f95b959f27
Merge pull request #854 from changweige/fix-nydus-image-backend-config
Enhance pseudo fs e2e test and refine nydusd client error message
2022-11-09 18:14:53 +08:00
Changwei Ge fe4dda7b89 nydus-test: change storage backend for api test suite
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-09 17:09:15 +08:00
Changwei Ge 0ee73079ae nydus-test: enhance pseudofs e2e test
Also do a rootfs integration test for it.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-09 17:09:15 +08:00
Changwei Ge c0deb19e53 nydus-test: adjust how the nydusd client reports error messages
This makes it easier to get the error message from the server.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-09 17:09:15 +08:00
Jiang Liu 33a357b09d
Merge pull request #853 from changweige/fix-nydus-image-backend-config
nydus-test: don't test nydus-image for localfs backend
2022-11-09 14:27:13 +08:00
imeoer 5be153feb0
Merge pull request #844 from jiangliu/cmd-option
Refine nydus-image/nydusd commandline options
2022-11-09 14:05:48 +08:00
imeoer 185abca59c
Merge pull request #849 from jiangliu/nydus-image-v6
rafs: fix a bug in setting the prefetch table size of an empty filesystem
2022-11-09 14:04:12 +08:00
imeoer ef203f073a
Merge pull request #850 from jiangliu/chunk-map
rafs: refine the way to build chunk map for v6
2022-11-09 14:03:28 +08:00
Changwei Ge 2de1781f00 nydus-test: don't test nydus-image for localfs backend
nydus-image can't directly move blobs to a specified dir anymore,
so adapt to it.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-09 13:59:46 +08:00
Jiang Liu a73dd2f42e
Merge pull request #852 from imeoer/builder-fix-redundant-blob
builder: fix redundant blob for inline bootstrap
2022-11-09 13:47:32 +08:00
Yan Song bae627668d builder: fix redundant blob for inline bootstrap
The previous code generates a new empty blob for inline bootstrap mode:

```
if ctx.inline_bootstrap {
    let (_, _) = blob_mgr.get_or_create_current_blob(ctx)?;
}
```

And then the empty blob will be appended to the blob table although
no chunk data is generated in the current build.

This patch avoids generating this kind of redundant blob.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-11-09 04:01:12 +00:00
Jiang Liu eeb2e0dc74 rafs: refine the way to build chunk map for v6
Refine the way to build chunk map for v6, avoid RefCell and thread local
variables.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-08 18:02:30 +08:00
Jiang Liu 7967991fb7 rafs: fix a bug in generating block count for v6
The block count has been incorrectly set to EROFS_BLOCK_SIZE(4096).

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-08 15:52:04 +08:00
Jiang Liu 8fe5021329 rafs: fix a bug in setting the prefetch table size of an empty filesystem
An empty RAFS v6 filesystem has an incorrectly non-zero prefetch table
size.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-08 15:52:04 +08:00
Jiang Liu a85e7de2bd
Merge pull request #847 from jiangliu/rafs-v6
Enhance RAFS v6
2022-11-08 15:41:35 +08:00
Jiang Liu fc7cbc9025 rafs: refine direct v6 implementation
Refine direct v6 implementation by:
1) avoid Cell
2) reduce ArcSwap::load()
3) cache some frequently used fields from meta

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-08 14:21:59 +08:00
Jiang Liu ded20c35ae rafs: enforce stricter validation rules for v6
Enforce stricter validation rules for v6.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-08 14:21:59 +08:00
Jiang Liu 1cf87b2412 rafs: do not convert endian for zero values
Do not convert endianness for zero values; it's a no-op.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-08 14:21:58 +08:00
Jiang Liu bf97dfd746 rafs: simplify DirectMappingState for RAFS v5
Simplify DirectMappingState for RAFS v5 by:
1) avoid Arc<>
2) do not support clone

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-08 14:21:58 +08:00
imeoer acbbdb358e
Merge pull request #845 from jiangliu/cache-file-size
storage: correctly setup size for compressed cache file
2022-11-08 14:19:23 +08:00
Jiang Liu e098b9babf nydusd: remove uncommonly used short options
Remove uncommonly used short options.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-08 14:17:17 +08:00
Jiang Liu 04b2859ff9
Merge pull request #848 from liubin/some-minor-fixes
Some minor fixes
2022-11-08 14:13:12 +08:00
bin liu 7a58510a01 nydusd: use BackFileSystem in fuse_backend_rs crate
Remove the self defined BackFileSystem and use the one
in the fuse_backend_rs crate.

Signed-off-by: bin liu <bin@hyper.sh>
2022-11-08 13:18:24 +08:00
bin liu d23220a0a9 nydusd: fix log message
Some log messages is not correct.

Signed-off-by: bin liu <bin@hyper.sh>
2022-11-08 13:17:37 +08:00
bin liu 1962c6afbd rafs: fix crate name in readme
The name should be rafs but not storage in readme.

Signed-off-by: bin liu <bin@hyper.sh>
2022-11-08 13:15:19 +08:00
Jiang Liu 27e79c4133 storage: correctly setup size for compressed cache file
When the cache file is compressed, the file size is incorrectly set to
the uncompressed file size.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-07 18:41:00 +08:00
Jiang Liu 196811e508 nydus-image: remove some uncommonly used short options
Remove some uncommonly used short options.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-07 12:40:42 +08:00
Jiang Liu 13ebd341ab nydus-image: refine command line help messages
Refine command line help messages.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-07 12:40:40 +08:00
Jiang Liu 752b72be9d
Merge pull request #840 from jiangliu/filemap
Reduce duplicated code by using utils/filemap
2022-11-07 12:21:28 +08:00
Jiang Liu 0fbe8baecf rafs: use filemap to reduce duplicated code for direct v6
Use filemap to reduce duplicated code for direct v6.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-07 11:56:25 +08:00
Jiang Liu a65593aa75 rafs: fix a bug in impl Clone for DirectSuperBlockV6
Fix a bug in impl Clone for DirectSuperBlockV6, also reduce deep memory
copy.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-07 11:54:51 +08:00
Jiang Liu efb7c908fb rafs: get rid of unused digest validation code from RAFS v6
Get rid of unused digest validation code from RAFS v6.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-07 11:54:49 +08:00
Jiang Liu 4495852207 storage: implement chunk map with FileMap
Implement chunk map with FileMap to reduce duplicated code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-07 11:54:48 +08:00
Jiang Liu 392c81d305 rafs: use filemap for direct v5 mode
Use the generic filemap from the nydus-utils crate to avoid duplicated
code.

Also reduce unsafe code; now there are only two unsafe blocks left in direct v5.

Fix a lifetime issue in xattr related code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-07 11:51:48 +08:00
imeoer e3df935cb8
Merge pull request #833 from jiangliu/zran
Adopt zlib random access algorithm to support OCIv1 compatible nydus image
2022-11-07 10:31:58 +08:00
imeoer 630606b539
Merge pull request #841 from sctb512/fix-mirror-health-check
storage: remove unused code for refreshing registry tokens
2022-11-07 10:03:54 +08:00
Bin Tang f5e958ffe3 storage: remove unused code for refreshing registry tokens
There is no need to change 'grant_type' for refreshing registry tokens,
because the URL with the cached 'grant_type' can get the token as well.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-06 22:00:24 +08:00
Jiang Liu 9e49996410
Merge pull request #798 from jiangliu/nydus-image-default
Change default value of nydus-image and optimize output
2022-11-06 17:19:39 +08:00
Jiang Liu 49d7dd8f06 app: refine log messages
Refine log messages:
1) remove file:line from INFO
2) strip prefix of file

Now the output becomes:
root@5ad838c8bf7b:/nydus# target/debug/nydus-image create -t directory -D images/  src/
[2022-10-17 15:54:35.102076 +00:00] INFO rafs superblock features: HASH_SHA256 | EXPLICIT_UID_GID | COMPRESSION_ZSTD
[2022-10-17 15:54:35.139154 +00:00] INFO successfully build RAFS filesystem:
meta blob path: images/6fb4d603307df0e8c523a14825181cc0c9c3de4c2797b5e341c7c489b40ab13c
data blob size: 0x22de7
data blobs: ["c508580ee90cbd543713067cb22c28b7bf444c457de5e4a53b797e5575307b7a"]

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-06 16:58:46 +08:00
Jiang Liu 1b4beabf42 nydus-image: remove deprecated options of create subcommand
Remove deprecated options of create subcommand:
--backend-config
--backend-type

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-06 16:34:29 +08:00
Jiang Liu 073d9b064e nydus-image: add bootstrap path to build output
Add bootstrap path to build output.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-06 16:34:27 +08:00
Jiang Liu 3ccf905fab nydus-image: beautify build output
Change build output from
[2022-10-17 10:28:32.724440 +00:00] INFO [src/bin/nydus-image/main.rs:745] build successfully: BuildOutput { blobs: ["f13359310bee12d3f3a61e15be862bbcf16ecd1615d8e1a62e64ac80b56dd561"], blob_size: Some(2854077), bootstrap_path: Some("images/dc9ee42be5cb0645f69a3592cc7aaece02cef12675460b3c3b5240b3ec62866a") }

to
root@5ad838c8bf7b:/nydus# target/debug/nydus-image create -t directory -D images/  src/
[2022-10-17 15:32:58.542898 +00:00] INFO [rafs/src/metadata/md_v6.rs:49] rafs superblock features: HASH_SHA256 | EXPLICIT_UID_GID | COMPRESSION_ZSTD
[2022-10-17 15:32:58.545912 +00:00] INFO [src/bin/nydus-image/main.rs:718] successfully build RAFS filesystem:
meta blob path: images/b4e34ad5aeecd4f2e6c047fdd52db9ac8f53981251adb0ac8048db48e45cac1a
data blob size: 0x22dde
data blobs: ["58b6693aa54de2515d2092dab0e66978b20b6382d08e52c06f14498e770df739"]

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-06 16:34:26 +08:00
Jiang Liu 31759ce726 nydus-image: simplify check and inspect subcommand
Simplify check and inspect subcommand by directly taking a BOOTSTRAP
parameter instead of --bootstrap BOOTSTRAP.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-06 16:34:24 +08:00
Jiang Liu 8cb7f6c14a nydus-image: default to rafs v6
Build rafs v6 image by default, instead of rafs v5.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-06 13:43:37 +08:00
Jiang Liu aab0dddaf9 nydus-image: use sha256 as default digester
Use sha256 as the default digester; it's commonly used to digest container
images.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-06 13:42:29 +08:00
Jiang Liu c08b6c22bb nydus-image: change default compressor to zstd
The zstd compressor may achieve better compression ratio, so it's
suitable as default configuration.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-06 13:18:32 +08:00
imeoer ae0cf4c01f
Merge pull request #835 from imeoer/d7y-doc
docs: add d7y doc link
2022-11-04 11:32:31 +08:00
Yan Song f26d4c22d5 docs: add d7y doc link
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-11-04 03:26:18 +00:00
Changwei Ge 4f6d62d543
Merge pull request #815 from sctb512/rafsv6-file-parent-new
nydus-image: fix inspect to get correct path of rafs v6 file
2022-11-04 11:03:33 +08:00
Bin Tang dda9cbb70b nydus-image: fix inspect to get correct path of rafs v6 file
A similar patch for stable/v2.1 does not work for
the master branch now. Therefore, we need to update it.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-04 10:20:47 +08:00
imeoer 33935e0be6
Merge pull request #800 from sctb512/mirror-health-check
storage: add mirror health checking support
2022-11-04 10:05:12 +08:00
Jiang Liu 03e3d0318f nydus-image: only support generating RAFS v6 from estargz/targz files
Only support generating RAFS v6 from estargz/targz files.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-03 23:22:45 +08:00
Jiang Liu 8d1a805ba7 nydus-image: support generating ZRan based RAFS fs from estargz file
Support generating a ZRan based RAFS fs from an estargz file, which is
similar to 'targz-to-ref' but filters out all estargz specific files.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-03 23:22:44 +08:00
Jiang Liu 0dcf731ab7 nydusd: enhance nydusd to support ZRan based RAFS v6 fs
Enhance nydusd to support ZRan based RAFS v6 fs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-03 23:22:33 +08:00
Bin Tang 3181bd651e storage: fix syntax for mirror health checking
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-03 21:28:49 +08:00
Bin Tang 9f5d1368b5 storage: refresh token to avoid forwarding to P2P/dragonfly
Forwarding 401 responses to P2P/dragonfly will affect performance.
When there is a mirror with auth_through set to false, we refresh the token regularly
to avoid forwarding the 401 response to the mirror.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-03 21:09:23 +08:00
Bin Tang 6eb46caf2b storage: add mirror health checking support
Currently, a mirror is marked unavailable once its failure count reaches failure_limit.
We added mirror health checking, which recovers unavailable mirror servers.
The failure_limit indicates the failure count at which the mirror is marked unavailable.
The health_check_interval indicates the time interval at which unavailable mirrors are rechecked and recovered.
The ping_url is the endpoint used to check mirror server health.
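A minimal sketch of how such a mirror entry might be modeled with serde, using only the field names mentioned in this commit and the auth_through option from the neighboring commits; the actual structure in the storage crate may differ:

```rust
// Hypothetical mirror configuration entry; field names follow the commit
// messages above, not necessarily the real nydus-storage definition.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct MirrorConfig {
    /// Mirror server endpoint, e.g. a local P2P proxy.
    host: String,
    /// Endpoint used to check mirror server health.
    ping_url: String,
    /// Interval (in seconds) at which an unavailable mirror is rechecked.
    health_check_interval: u64,
    /// Failure count at which the mirror is marked unavailable.
    failure_limit: u8,
    /// Whether authorization requests are forwarded through the mirror.
    auth_through: bool,
}
```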

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-11-03 20:45:48 +08:00
Jiang Liu 94c5a419cc storage: fix a bug in generating chunk info
Fix a bug in generating chunk info for RAFS Zran fs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-03 17:03:47 +08:00
Jiang Liu 198746a9b5 storage: amplify read requests for RAFS v6 at storage layer
Amplify read requests for RAFS v6 at the storage layer; this may help
reduce IO requests to the storage backend and improve performance.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-03 16:49:50 +08:00
Jiang Liu 68ddee7b14 nydusd: refine data validation related logic
Refine data validation related logic:
1) force validation of data fetched from the storage backend
2) disable data validation for legacy stargz images.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-03 15:57:24 +08:00
Jiang Liu eac959fc4d rafs: simplify RAFS fs prefetch implementation
Simplify RAFS fs prefetch implementation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-03 15:15:16 +08:00
Jiang Liu 90349d9445 storage: refine implementation of legacy stargz
Refine implementation of legacy stargz.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-03 14:14:28 +08:00
imeoer 5c1122a565
Merge pull request #829 from uran0sH/fix-clap
nydus-image: fix clap
2022-11-03 14:04:37 +08:00
Wenyu Huang 56ea2c0c1e nydus-image: fix clap
Change requires_if to required_if_eq_any

Signed-off-by: Wenyu Huang <huangwenyuu@outlook.com>
2022-11-03 00:15:29 -04:00
imeoer 11a7661ea0
Merge pull request #826 from changweige/fix-nydusify-blobs-id
nydusify: drop label "nydus-blob-ids" from meta layer
2022-11-03 10:01:17 +08:00
imeoer fdc47e319a
Merge pull request #827 from changweige/nydus-test-impr
improve nydus-test cases
2022-11-03 09:55:11 +08:00
Changwei Ge 377af4ceb3 nydus-test: enable nydusd dynamic input prefetch files list
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-02 18:51:34 +08:00
Changwei Ge b77b4d2e5b nydus-test: add case for on-disk specific files prefetch
Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-02 18:50:41 +08:00
Changwei Ge 521881d765 nydus-test: provide --insecure-policy to skopeo
Otherwise, skopeo can't copy image layers from the registry.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-02 17:34:08 +08:00
Changwei Ge 01991e88eb nydusify: drop label "nydus-blob-ids" from meta layer
Images with more than 64 layers can't be pulled by containerd
since the label exceeds the 4096-byte label size limit.

We should figure out another way to do GC in nydus-snapshotter.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-11-02 16:24:12 +08:00
Jiang Liu d550dfd7bd storage: relax the constraint that compressed chunk must be continuous
Change storage interfaces to relax the constraint that compressed chunk
must be continuous.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 14:08:55 +08:00
Jiang Liu 81766ef5f3 nydus-image: fix a bug related to clap upgrading
thread 'main' panicked at 'Command create: Argument or group 'estargztoc-ref' specified in 'requires*' for 'blob-id' does not exist', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-4.0.18/src/builder/debug_asserts.rs:152:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 11:50:42 +08:00
Jiang Liu 67d737a65b
Merge pull request #823 from jiangliu/device3
Enhance storage subsystem for coming features and fixes several bugs
2022-11-02 11:49:17 +08:00
Jiang Liu e7927de358 storage: enable support of compressed cache file for v6
Enable support of compressed cache file for v6.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:57:17 +08:00
Jiang Liu 956b7c9fb9 storage: fix bug in handling compressed cache files
If the cache file is configured as "compressed", some uncompressed data
will be written into the compressed data file, thus causing data
corruption.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:57:08 +08:00
Jiang Liu dbbffbf9ec storage: fix bugs related to validate_digest
The validate_digest flag doesn't take effect under certain conditions,
so fix it.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:57:07 +08:00
Jiang Liu 3379162351 storage: refine storage cache subsystem to support new features
Refine storage cache subsystem to support new features.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:57:05 +08:00
Jiang Liu c6b552b67e storage: optimize the way to decode compressed data
Optimize the way to decode compressed data by adopting stream based
decoder.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:57:03 +08:00
Jiang Liu fb955d3180 nydus-image: optimize the way to compute compressed size of stargz
Optimize the way to compute compressed size of stargz.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:57:01 +08:00
Jiang Liu 35b3b0afa5 utils: introduce Decoder to support stream based decoding
Introduce Decoder to support stream based decoding.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:57:00 +08:00
Jiang Liu b23609dfe8 utils: introduce FileRangeReader to read a range from a file
Introduce FileRangeReader to read a range of data from a file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:56:59 +08:00
Jiang Liu ea934575d0 nydusd: refine log messages
Refine log messages.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:56:56 +08:00
Jiang Liu 0fd47ca9cd storage: refine storage device implementation
Refine storage device implementation by:
1) tune struct/fields visibility
2) remove unused code
3) refine doc

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-11-02 10:56:53 +08:00
Jiang Liu 79ba2bfe03
Merge pull request #824 from uran0sH/fix-clap
fixup! Upgrade clap to 4.0
2022-10-31 23:30:45 +08:00
Wenyu Huang dfadfcfd90 fixup! Upgrade clap to 4.0
Change ArgMatches::get_many::<&str> to
ArgMatches::get_many::<String> in values_of to fix "Could not
downcast to &str, need to downcast to alloc::string::String"

Signed-off-by: Wenyu Huang <huangwenyuu@outlook.com>
2022-10-31 03:47:12 -04:00
Changwei Ge 7d99238982
Merge pull request #820 from imeoer/doc-nerdctl-build
docs: update nerdctl description
2022-10-31 14:12:43 +08:00
Yan Song e342bfcd9d docs: update nerdctl description
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-10-31 06:07:52 +00:00
Jiang Liu 5f4d12372e storage: prepare for support of non-continuous chunks
Enhance the storage device interface to prepare for support of
non-continuous chunks.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-30 23:31:49 +08:00
Jiang Liu d5305f1665 storage: simplify BlobIoChunk abstraction and implementation
Simplify BlobIoChunk abstraction and implementation.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-30 23:31:49 +08:00
Jiang Liu 5f633d895d rafs: fix a bug in calculating size for io
Fix a bug in calculating size for IO.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-30 23:31:49 +08:00
Jiang Liu 3b579c9f70 storage: simplify interface of BlobIoVec
Simplify interface of BlobIoVec.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-30 23:31:49 +08:00
Jiang Liu bba53d47cd storage: fix a bug in implementation of ArcSwap::clone()
Fix a bug in the implementation of ArcSwap::clone(); we should share the
ArcSwap object instead of generating a new independent one.

Previous version implemented Clone, but it turned out to be very
confusing to people, since it created fully independent ArcSwap.
Users expected the instances to be tied to each other, that store
in one would change the result of future load of the other.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-30 23:31:49 +08:00
Jiang Liu 4c30fa51c7 storage: remove deadcode from device
Remove deadcode from device.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-30 23:31:49 +08:00
imeoer 9daaff8a25
Merge pull request #822 from uran0sH/upgrade-clap
upgrade clap to 4.0
2022-10-30 20:55:23 +08:00
Wenyu Huang 877ae1ea3e Upgrade clap to 4.0
In order to use derive API to refactor, we upgrade clap firstly.

Signed-off-by: Wenyu Huang <huangwenyuu@outlook.com>
2022-10-30 08:22:26 -04:00
imeoer 91989b2e82
Merge pull request #816 from jiangliu/device1
Refine storage device abstraction and implementation
2022-10-29 23:56:58 +08:00
Jiang Liu 01317e170f
Merge pull request #818 from sctb512/fix-v6-pretch-table
nydus-image: fix prefetch table size for rafs v6
2022-10-27 16:14:50 +08:00
Bin Tang 9c8eb6e8a9 nydus-image: fix prefetch table size for rafs v6
Currently, the prefetch table size in the rafs v6 SuperBlock is the number of entries in the prefetch files list.
It is incorrect when the prefetch files list contains a path that does not exist.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-10-27 14:23:36 +08:00
Jiang Liu 5407f04207
Merge pull request #813 from sctb512/optimize-for-workload
optimize prefetch table for workload
2022-10-26 20:21:52 +08:00
Jiang Liu f2fce798c1 nydusd: do not print error message when exit
Do not print error message when exit.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu 24ed3cba0b storage: reorganize unit tests related to blob meta management
Reorganize unit tests related to blob meta management.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu 30b86752fe nydus-image: generate RAFS fs referring OCIv1 data blob
Generate a RAFS fs referring to an OCIv1 data blob by adopting the
random-access zlib algorithm.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu b452968262 storage: get rid of special handling of stargz in blob meta
Get rid of special handling of stargz in blob meta by:
1) ensure compressed_end() is always less than compressed_size
2) ensure uncompressed data are always continuous
3) do not enforce that compressed data is continuous

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu 45c89d24a0 storage: improve doc and error messages for blob meta management
Improve doc and error messages for blob meta management.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu d7153e05da nydus-image: support build RAFS filesystem referring tar.gz data blob
Enhance nydus-image to build a RAFS filesystem referring to a tar.gz data blob
by using the zlib random-access algorithm.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu 1aa8ff0290 utils: add interface to provide file digest value
Add interface to provide file digest value.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu 732f1c67b6 nydus-image: refine node to prepare for zran
Refine node.rs in order to support zran.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu 2fcdce495e utils: enhance zran interface for ease of use
Enhance zran interface for ease of use.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu b80c41cc81 nydus-image: enable blob meta chunk v2 for directory builder
Enable blob meta chunk v2 for directory builder.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
Jiang Liu bcc6ae3040 nydus-image: enhance check subcommand to output blob features
Enhance nydus-image check subcommand to output blob features.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-26 20:16:50 +08:00
imeoer abf6298ccb
Merge pull request #796 from jiangliu/blob-meta
Introduce blob meta format v2
2022-10-26 19:58:46 +08:00
Bin Tang 4fd3a6a125 optimize prefetch table for workload
Currently, nydus prefetches files in index order.
This method is not accurate. We already have the accessed files list for the workload.
Hence, we can prefetch files based on the order they are accessed.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-10-26 18:53:05 +08:00
Jiang Liu 9449f648c1 storage: introduce blob meta chunk info v2 format
Introduce blob meta chunk info v2 format with better encoding and
a 32-bit data field.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-23 11:40:02 +08:00
Jiang Liu 53c5c58968 nydusd: move chunk v1 related tests into mod v1
Move chunk v1 related tests into mod v1.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-23 11:32:38 +08:00
Jiang Liu 7cfcc906f5 storage: prepare for supporting of multiple blob meta formats
Change the blob meta implementation to prepare for supporting
multiple blob meta formats.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-23 11:32:37 +08:00
Jiang Liu 3dbc6f92f9 storage: make BlobChunkInfoV1Ondisk private
Make BlobChunkInfoV1Ondisk private so we could support different chunk
information format later.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-23 11:32:36 +08:00
Jiang Liu 2455c7ef72 storage: introduce trait BlobMetaChunkInfo
Introduce trait BlobMetaChunkInfo so we could introduce other chunk
info format later.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-23 11:32:35 +08:00
Jiang Liu df3115ae1e storage: rename BlobChunkInfoOndisk to BlobChunkInfoV1Ondisk
Rename BlobChunkInfoOndisk to BlobChunkInfoV1Ondisk.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-23 11:32:34 +08:00
Jiang Liu 9ba2a787f9 storage: use filemap for blob meta
Use FileMapState for blob meta to reduce duplicated code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-23 11:32:33 +08:00
imeoer b841e21b53
Merge pull request #795 from jiangliu/tar-rafs
Enhance nydus-image to generate RAFS filesystem directly from tarball
2022-10-22 23:02:58 +08:00
Jiang Liu b2f6ac1a32
Merge pull request #809 from imeoer/update-release
action: update release notes for download mirror
2022-10-20 23:51:42 +08:00
Changwei Ge ae94ac4572
Merge pull request #808 from dragonflyoss/chore/ci-actions
chore: ci action adds paths-ignore
2022-10-20 19:08:05 +08:00
Yan Song e05441cb3d action: update release notes for download mirror
Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-10-20 10:57:20 +00:00
Gaius 9591140f1e
chore: ci action adds paths-ignore
Signed-off-by: Gaius <gaius.qi@gmail.com>
2022-10-20 18:43:45 +08:00
Jiang Liu 487d9dc260
Merge pull request #806 from changweige/pytest-stop
action/nydus-test: stop on the first test failure
2022-10-20 18:09:10 +08:00
Changwei Ge baed44e727 action/nydus-test: stop on the first test failure
By default, pytest will continue executing tests even if the
current test fails. It's hard to tell what is happening in
such an environment, and it makes it hard to investigate
the first failed case.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-10-20 13:36:03 +08:00
imeoer eeaf822a7f
Merge pull request #802 from jiangliu/nydusd-localfs
nydusd: simplify the way to use localfs for fuse subcommand
2022-10-19 11:11:57 +08:00
Jiang Liu 5d34d5c12c nydusd: simplify the way to use localfs for fuse subcommand
Add command line option "--localfs-dir" to "nydusd fuse", so we could
mount an RAFS filesystem by
nydusd -M ./mnt -D ./localfs-dir -B meta-blob-id

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-18 22:53:00 +08:00
imeoer 4e33906d1b
Merge pull request #801 from bergwolf/github/enable-ci-on-stable
action: run integration tests on stable-2.1 branch as well
2022-10-18 17:10:39 +08:00
Peng Tao bc0fc78d42 action: run integration tests on stable-2.1 branch as well
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-10-18 16:01:08 +08:00
imeoer 9d164c4022
Merge pull request #790 from sctb512/mirror-performance-fix
fix mirror's performance issue
2022-10-18 14:01:24 +08:00
Bin Tang 6f13ad6e29 fix mirror's performance issue
In some scenarios (e.g. P2P/Dragonfly), sending an authorization request
to the mirror will cause performance loss. We add the parameter
auth_through. When auth_through is false, nydusd will directly send
non-authorization requests to the original registry.

Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-10-18 13:06:30 +08:00
Jiang Liu 7650fddd24 nydus-image: enable converting estargz file to RAFS filesystem
Enable converting estargz files to RAFS filesystems. The main difference
between targz and estargz is that several estargz-specific files should
be filtered out.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 18:44:52 +08:00
Jiang Liu 39db4b8fef nydus-image: enable tar.gz to RAFS conversion
Enable converting tar.gz(OCIv1) file into RAFS filesystem directly.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 17:50:16 +08:00
Jiang Liu a60f4d50d1 nydus-image: enable support of --type tar-rafs
Enable support of --type tar-rafs to convert tar file into rafs fs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 17:25:33 +08:00
Jiang Liu f5bc386881 storage: fix a bug when checking for hardlink
Hardlink must be regular files, otherwise directory will be reported
as hardlink.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 17:25:33 +08:00
Jiang Liu 511133364b nydus-image: only support single SOURCE file for create subcommand
Only allow a single SOURCE file for the 'nydus-image create' subcommand;
there's no support for multiple SOURCE files at all.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 17:25:33 +08:00
Jiang Liu 19ec6f04e0 nydus-image: avoid copying child node when applying node to tree
Avoid copying the child node when applying a node to the tree.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 17:25:33 +08:00
Jiang Liu 09b5367757 nydus-image: support format of conversion type
Implement format for ConversionType.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 17:25:33 +08:00
Jiang Liu 447f97c9b4 nydus-image: simplify code in blob-compact
Simplify code in blob-compact.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 17:25:33 +08:00
Jiang Liu 2a14262d0c nydus-image: avoid duplicated code in builder
Abstract common code in builder as helper functions to reduce duplicated
code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 17:25:33 +08:00
Jiang Liu a0ede31dc4
Merge pull request #797 from jiangliu/targz
Add gzip decoder to enable converting tar.gz into rafs
2022-10-17 16:26:33 +08:00
Jiang Liu 981f2545e4 util: introduce zlib decoder for targz file
Introduce zlib decoder for targz file.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 16:06:47 +08:00
Jiang Liu 0ca9adad76 utils: add Cargo feature for zran
Add cargo feature zran to control zran related code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 15:18:34 +08:00
Jiang Liu 94c32a6772 util: decompress data slice according to random access information
Add methods to decompress data slices according to random access context
information.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 15:06:28 +08:00
Jiang Liu bc1c0b68af util: generate random access info for zlib stream
Add methods to generate random access context information for zlib/gzip
streams.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 15:06:26 +08:00
Jiang Liu a88a578375 util: provide methods to get zlib decompression context information
Provide methods to get zlib decompression context information.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 15:06:24 +08:00
Jiang Liu 22ba5ba29b util: add base implementation of ZranTarReader
Add a base implementation of ZranTarReader to parse OCIv1 image tarballs.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 15:06:21 +08:00
imeoer 796b3a64b8
Merge pull request #792 from jiangliu/stargz-toc
Improve implementation to convert stargz TOC into RAFS fs
2022-10-17 10:14:37 +08:00
Jiang Liu 2f91b4fa50 nydus-image: enforce stricter validation when converting from stargz
Enforce stricter validation when converting from stargz images.
Also generate a valid compressed size for chunks referencing the stargz blob.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 09:53:13 +08:00
Jiang Liu 4edbf0cd33 nydus-image: set default chunk size to 4M for stargz
Set default chunk size to 4M for stargz.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 09:53:13 +08:00
Jiang Liu 3941ff99cb nydus-image: only compute inode digest for v5 when converting stargz
RAFS v6 doesn't support inode digest, so only compute inode digest for
v5 when converting stargz.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 09:53:13 +08:00
Jiang Liu 5b658dcf4b nydus-image: do not allocate chunk index for duplicated chunks
Do not allocate chunk index for duplicated chunks, so we could get
a compact blob compression information array.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 09:53:13 +08:00
Jiang Liu 03eb6d2022 nydus-image: add doc for stargz
Add more doc for stargz related code.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-17 09:53:13 +08:00
imeoer 81170b88c8
Merge pull request #778 from jiangliu/image-p2
Second part to refine nydus-image
2022-10-17 09:46:13 +08:00
Jiang Liu 7105add760 nydus-image: remove Blob::new()
Remove Blob::new() to simplify the implementation and improve code
readability.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-16 11:29:56 +08:00
Jiang Liu 87fa275cde nydus-image: improve help message for unpack subcommand
Improve help message for unpack subcommand.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-16 11:29:55 +08:00
Jiang Liu f3a86d4f0c nydus-image: use the same compressor as chunk for compression info
Use the same compressor as chunk to compress blob compression info
array.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-16 11:29:53 +08:00
Jiang Liu ed498f064f nydus-image: refine the way to deal with directory
Support all possible combinations of --blob-dir, --blob, --blob-id,
--blob-meta, --bootstrap, --inline-bootstrap. Now we can auto-generate the
bootstrap file name from the sha256 of its content.
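A small sketch of deriving the bootstrap file name from the sha256 of its content, using the sha2 crate; the helper name and error handling are illustrative, not the actual builder code:

```rust
// Illustrative only: name the bootstrap after the sha256 of its content.
use sha2::{Digest, Sha256};
use std::{fs, io};

fn bootstrap_file_name(path: &str) -> io::Result<String> {
    let data = fs::read(path)?;
    let digest = Sha256::digest(&data);
    // Render the digest as lowercase hex, matching the file names shown below.
    Ok(digest.iter().map(|b| format!("{:02x}", b)).collect())
}
```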

Also add a field "bootstrap_path" to build output as:
[2022-10-07 16:14:27.880517 +00:00] INFO [src/bin/nydus-image/main.rs:696] build successfully: BuildOutput { blobs: ["8ac22b2a8277b8e07b00240b6beb72e6225b1aec2abb0bc3b69f7915c3374367"], blob_size: Some(209121), bootstrap_path: Some("tmp/380b88426c78b0024263cb29d2b973c4ca4e2f01ccb2381807e60e95efb3bf0b") }

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-16 11:29:37 +08:00
Jiang Liu 4a59643774 nydus-image: refine v6 related logic in bootstrap
Refine v6 related logic in bootstrap by:
1) do not add root into hardlink map, directory can't be hardlink
2) only enable v6 code when building v6 image
3) rename functions with rafsv5/rafsv6 prefixes.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-16 10:14:49 +08:00
Jiang Liu 2951658bb7 nydus-image: refine directory builder
Refine directory builder.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-16 10:14:45 +08:00
Jiang Liu a80fec5412
Merge pull request #794 from changweige/fix-v5-inode-prefetch-hardlink
nydus-image/v5: prefetch table should contain inode numbers rather it…
2022-10-14 21:50:23 +08:00
imeoer e8a0112d56
Merge pull request #791 from imeoer/nydusify-check-fix-overlay
nydusify: fix overlay error for image with single layer
2022-10-14 17:38:29 +08:00
Changwei Ge 2901c147e1 nydus-image/v5: prefetch table should contain inode numbers rather its index
Nydusd performs prefetch by matching inode numbers in the prefetch
table. Right now, the inode's index is persisted to the prefetch table, though
most of the time they are equal.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-10-14 16:34:17 +08:00
Yan Song 2979f712b1 nydusify: fix overlay error for image with single layer
The nydusify check subcommand checks the consistency of the
OCI image and the nydus image by mounting them (overlayfs or nydusd).

For an OCI image with a single layer, we should use a bind
mount instead of overlay to mount the rootfs, otherwise an error
will be thrown like:

```
wrong fs type, bad option, bad superblock on overlay, missing
codepage or helper program, or other error.
```

This commit also refines the code for image.Mount/image.Umount.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-10-14 03:31:09 +00:00
Jiang Liu 665a7b143d
Merge pull request #788 from kevinXYin/fscache_fetch_chunks_fix
storage: retry timeout chunks for fscache ondemand path
2022-10-14 10:56:45 +08:00
Changwei Ge 996f75ac0a
Merge pull request #735 from imeoer/release-version
release: update version on build automatically
2022-10-13 17:37:58 +08:00
Yan Song 9058227933 release: update version on build automatically
We only need to create a git tag to release a version, without modifying
the version fields in Cargo.toml and Cargo.lock.

Signed-off-by: Yan Song <imeoer@linux.alibaba.com>
2022-10-13 08:57:37 +00:00
Jiang Liu e32233b0e9 nydus-image: refine commandline for create subcommand
Refine commandline for create subcommand by:
1) support --blob-dir for bootstrap blob
2) introduce source type of tar and targz

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-12 14:07:41 +08:00
imeoer 042600b157
Merge pull request #784 from ccx1024cc/upfix3
fix: miss oss file of nydusify packer
2022-10-12 09:37:50 +08:00
Jiang Liu 51295f1f4f nydus-image: refine output of check subcommand
Refine output of check subcommand:
1) use println instead of info/debug
2) refine help messages
3) ignore the target bootstrap in the blob-dir for stat subcommand

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-11 21:43:28 +08:00
Jiang Liu a888da6571
Merge pull request #772 from jiangliu/rafs-v5-validation
rafs: enforce stricter inode validation rules for v5
2022-10-11 13:22:04 +08:00
Xin Yin 873d696851 storage: retry timeout chunks for fscache ondemand path
For the fscache on-demand path, if some requested chunks are set to pending by
prefetch threads and waiting for them times out, EIO will be returned to the container side.

Retry the timed-out chunks on the on-demand path to minimize EIOs.

Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
2022-10-11 10:14:21 +08:00
Jiang Liu f0f014dc7f
Merge pull request #787 from changweige/enlarge-fuse-threads-num
nydusd: enlarge default fuse server threads
2022-10-10 22:27:54 +08:00
Changwei Ge ffcb621f50 nydusd: enlarge default fuse server threads
Now the default value is only 1, which affects performance.

Signed-off-by: Changwei Ge <gechangwei@bytedance.com>
2022-10-10 20:06:29 +08:00
Changwei Ge f01ca3b29a
Merge pull request #786 from sctb512/mirror-fix
nydusd: fix mirror description syntax
2022-10-10 19:07:12 +08:00
Bin Tang 625047ddf0 nydusd: fix mirror description syntax
Signed-off-by: Bin Tang <tangbin.bin@bytedance.com>
2022-10-10 16:50:30 +08:00
Jiang Liu 2a04edbb79 rafs: enforce stricter inode validation rules for v5
Enforce stricter inode validation rules for v5 (see the sketch after this list) by:
1) ensure ino is not zero
2) inode is less than max_inodes
3) i_name is not empty
4) parent inode is not zero
5) n_link is not zero
6) parent inode is less than current inode for non-hard-link inodes
7) i_blocks is correct
8) child inode is bigger than parent
9) child count is correct
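A partial, illustrative sketch of such checks; the function signature and parameter names are hypothetical, not the actual RAFS v5 inode definition:

```rust
// Covers rules 1-6 above with made-up parameter names; the real validation
// operates on the on-disk RAFS v5 inode structure.
fn validate_v5_inode(
    ino: u64,
    parent: u64,
    name: &str,
    nlink: u32,
    max_inodes: u64,
    is_hardlink: bool,
) -> bool {
    ino != 0                             // 1) ino is not zero
        && ino < max_inodes              // 2) inode is less than max_inodes
        && !name.is_empty()              // 3) name is not empty
        && parent != 0                   // 4) parent inode is not zero
        && nlink != 0                    // 5) n_link is not zero
        && (is_hardlink || parent < ino) // 6) parent < child for non-hard-links
}
```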

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-10 16:21:20 +08:00
泰友 04a31c342f fix: missing oss file of nydusify packer
Reproduction:

1. Prepare configuration file used for pack command.
{
    "bucket_name": "XXX",
    "endpoint": "XXX",
    "access_key_id": "XXX",
    "access_key_secret": "XXX",
    "meta_prefix": "nydus_rund_sidecar_meta",
    "blob_prefix": "blobs"
}

2. Pack by nydusify
sudo contrib/nydusify/cmd/nydusify pack \
--source-dir test \
--output-dir tmp \
--name ccx-test \
--backend-push \
--backend-config-file backend-config.json \
--backend-type oss \
--nydus-image target/debug/nydus-image

3. Blob file and meta file are missing in OSS

Problem:

Forgot to CompleteMultipartUpload after chunk uploading.

Fix:

Call CompleteMultipartUpload to complete the upload.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2022-10-10 15:40:52 +08:00
ccx1024cc a894be7322
Merge pull request #780 from ccx1024cc/upfix2
refact: set default prefix for oss backend
2022-10-10 15:37:18 +08:00
泰友 5d19aa7651 refact: use specified object prefix and meta prefix directly
issue: https://github.com/dragonflyoss/image-service/issues/608

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2022-10-10 15:11:12 +08:00
Jiang Liu c2b0e976ba
Merge pull request #783 from hsiangkao/osseu22
README.md: add OSSEU22 materials
2022-10-10 15:02:32 +08:00
Gao Xiang 2f6c162062 README.md: add OSSEU22 materials
Add "Introduction to Nydus Image Service on In-kernel EROFS" to
README.md.

Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2022-10-10 06:36:19 +00:00
Jiang Liu 197a7da84e
Merge pull request #782 from jiangliu/xattr-fix
rafs: fix bug in handling extended attributes
2022-10-10 10:39:20 +08:00
Jiang Liu bc2fe0d0c2 rafs: fix bug in handling extended attributes
Extended attribute's key/name may not be UTF-8 encoded, so don't assume
that.
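A tiny sketch of the point: handle xattr names as raw bytes (e.g. via OsStr) instead of assuming they are valid &str:

```rust
// On Linux an xattr name is just a byte string; OsStr can carry arbitrary
// bytes without requiring valid UTF-8.
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;

fn xattr_name(raw: &[u8]) -> &OsStr {
    OsStr::from_bytes(raw)
}
```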

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-10 10:12:10 +08:00
imeoer 31cd6054a5
Merge pull request #775 from jiangliu/image-p1
Refine nydus-image: part 1
2022-10-10 09:53:59 +08:00
Jiang Liu b50b036c2c nydus-image: avoid duplicated blobs in merged fs
If the RAFS filesystems to be merged contain a commonly referenced blob,
those data blobs will get duplicated in the merged RAFS filesystem.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-09 17:53:02 +08:00
Jiang Liu 62d777bb8d nydus-image: refine doc and readability
Syntax only changes to refine doc and readability.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-09 17:29:35 +08:00
Jiang Liu fdd0e64a1a storage: rename readahead to prefetch
rename readahead_offset/size to prefetch_offset/size.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-09 17:29:34 +08:00
Jiang Liu cc65e08d3e nydus-image: refine code to generate prefetch list
Refine code to generate prefetch list by:
1) simplify the code logic
2) prepare to support tar based workflow
3) add unit tests.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-09 17:29:33 +08:00
Jiang Liu 6a8dc8d482 nydus-image: simplify the code to generate RAFS v6 inode
Simplify the code to generate RAFS v6 inodes by:
1) introducing helpers
2) moving rafs metadata related structures into the nydus_rafs crate

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-09 17:29:25 +08:00
Jiang Liu d944673354 rafs: add more helper functions to detect inode mode
Add more helper functions to RafsV5Inode to detect the inode mode.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-09 17:27:58 +08:00
Jiang Liu aad7392138 rafs: add doc for InodeWrapper
Add doc for InodeWrapper.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-09 17:27:53 +08:00
Jiang Liu 55da0f1d16 rafs: add doc for ChunkWrapper
Add doc for ChunkWrapper.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-09 17:27:49 +08:00
imeoer 142ebe4561
Merge pull request #779 from ccx1024cc/upfix
fix: nydusify pack fail
2022-10-09 11:17:24 +08:00
泰友 4881aac7d4 fix: nydusify pack fail
Reproduction
1. Prepare configuration file used for pack command.
{
    "bucket_name": "XXX",
    "endpoint": "XXX",
    "access_key_id": "XXX",
    "access_key_secret": "XXX",
    "meta_prefix": "nydus_rund_sidecar_meta",
    "blob_prefix": "blobs"
}

2. Pack by nydusify
sudo contrib/nydusify/cmd/nydusify pack \
--source-dir test \
--output-dir tmp \
--name ccx-test \
--backend-push \
--backend-config-file backend-config.json \
--backend-type oss \
--nydus-image target/debug/nydus-image

3. Got error
FATA[2022-10-08T18:06:46+08:00] failed to push pack result to remote: failed to put metafile to remote: split file by part size: open tmp/tmp/ccx-test.meta: no such file or directory

Problem
The path of the bootstrap file to upload is wrong.

Fix
Use imageName as req.Meta, which is the bootstrap file to upload.

Signed-off-by: 泰友 <cuichengxu.ccx@antgroup.com>
2022-10-08 18:29:42 +08:00
Jiang Liu 62e31fb4af nydus-image: move common inode/chunk wrapper into rafs
Move common inode/chunk wrapper into rafs for reuse.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-08 14:26:50 +08:00
Jiang Liu bb3c4930c8
Merge pull request #774 from jiangliu/inode-ext
Split `RafsInode` into `RafsInode` and `RafsInodeExt`
2022-10-08 14:25:07 +08:00
Jiang Liu 9e99571ed3 rafs: refine v5 direct mode implementation
Simplify the rafs v5 implementation by introducing helper functions.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-08 13:32:43 +08:00
Jiang Liu 9640c242fe rafs: fix a bug in clone ArcSwap object
The documentation from ArcSwap states:
Previous version implemented Clone, but it turned out to be very
confusing to people, since it created fully independent ArcSwap.
Users expected the instances to be tied to each other, that store
in one would change the result of future load of the other.
To emulate the original behaviour, one can do something like this:
let new = ArcSwap::new(old.load_full());

We do expect instances to be tied to each other, so use another Arc
over ArcSwap.
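A standalone sketch of the intended behavior using the arc-swap crate: wrapping ArcSwap in an extra Arc keeps every clone tied to the same swappable slot (a minimal illustration, not the actual nydus code):

```rust
use std::sync::Arc;
use arc_swap::ArcSwap;

fn main() {
    let shared = Arc::new(ArcSwap::from_pointee(1u32));
    let other = Arc::clone(&shared); // both handles refer to the same ArcSwap

    shared.store(Arc::new(2));
    // A store through one handle is visible through every other clone.
    assert_eq!(**other.load(), 2);
}
```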

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-08 13:32:43 +08:00
Jiang Liu ff688a0078 rafs: rename RAFS_ROOT_INODE to RAFS_V5_ROOT_INODE
RAFS_ROOT_INODE is for v5 only, so rename it.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-08 13:32:43 +08:00
Jiang Liu d435c81bd0 rafs: remove dead code
Remove unused dead code from rafs crate.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-08 13:32:43 +08:00
Jiang Liu 073802cbe3 rafs: add unit test cases for compression/digest algorithms
Add unit test cases for compression/digest algorithm conversion.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-08 13:32:13 +08:00
Jiang Liu e340fa1183 rafs: split RafsInodeExt out of RafsInode
The RAFS v6 metadata has different designs with RAFS v5, which breaks
some interfaces defined by RafsInode. So split RafsInodeExt out of
RafsInode, then we could provide better implementation for RAFS v6.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-08 13:32:11 +08:00
Jiang Liu e9222eb204 nydusd: do not print error message when exiting
Do not print an error message when exiting; exiting is expected rather than
an error.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-08 13:30:55 +08:00
Jiang Liu 926566552d
Merge pull request #773 from jiangliu/api-dep
api: reduce dependencies when used by client
2022-10-05 00:06:47 +08:00
Jiang Liu 50417ce5c0 api: reduce dependencies when used by client
When the nydus-api crate is used by a client, it only cares about data
types defined by the API, so we should get rid of extra/unnecessary
dependencies.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-03 00:11:44 +08:00
Jiang Liu 1d5eefedc7
Merge pull request #771 from jiangliu/rafs-syntax
rafs: refine doc and consts
2022-10-02 23:58:47 +08:00
Jiang Liu c1ffb6f49c rafs: remove RafsInode::walk_chunks()
There's no user of RafsInode::walk_chunks(), so remove it.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-02 23:34:02 +08:00
Jiang Liu c8c273d6c5 rafs: refine doc and consts
Syntax only changes to refine documentation and consts.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-02 23:33:58 +08:00
Jiang Liu a15facd832
Merge pull request #770 from jiangliu/util-file-map
util: add FileMapState to map a file region into current process
2022-10-02 10:03:33 +08:00
Jiang Liu bae6b8deaa
Merge pull request #766 from jiangliu/storage-dep
storage: reduce dependence crates
2022-10-02 10:02:49 +08:00
Jiang Liu e4fdc5bc1a cargo: update cargo deny configuration
Update cargo deny configuration to allow ISC license.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-02 09:35:22 +08:00
Jiang Liu 16add152e5 storage: disable remote access related code
Disable remote access related code, it's unused currently.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-02 09:35:22 +08:00
Jiang Liu 9aa393c0ca storage: use hmac-sha1-compact to reduce dependencies
Use hmac-sha1-compact to reduce dependencies.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-02 09:35:22 +08:00
Jiang Liu 565e21d6eb storage: replace governor by leaky-bucket
Replace governor by leaky-bucket to reduce dependency.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-02 09:35:22 +08:00
Jiang Liu 903b4babec util: add FileMapState to map a file region into current process
Add FileMapState to map a file region into the current process, so we could
access the memory-mapped region in a safe way.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-02 09:34:00 +08:00
Jiang Liu db8a51e451
Merge pull request #745 from jiangliu/rafs-v6
Enhancements to rafs v6 implementation (for 2.2)
2022-10-02 01:04:48 +08:00
Jiang Liu b8c7075456 rafs: enforce strict validation of extended attribute
Enforce stricter validation of extended attribute by:
1) only accept permitted key prefix
2) ensure the key is a pure prefix
3) ensure value length is less than 64K

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-01 22:21:33 +08:00
Jiang Liu 77c4cb6a51 rafs: fix unit test failure due to disabled features
Some unit test cases have a dependency on the backend-oss feature,
so only enable them when the backend-oss feature is enabled.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-01 22:21:31 +08:00
Jiang Liu b017f12791 rafs/v6: enforce strict validation of compression information
Enforce strict validation of compression information array in data
blob.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-01 16:32:52 +08:00
Jiang Liu ecc7e4ee73 rafs/v6: syntax only change
Improve v6 code with docs and group methods by functionality.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-01 16:32:51 +08:00
Jiang Liu c543524f9b rafs/v6: convert from little-endian when accessing on-disk data
Ensure conversion from little-endian when accessing on-disk data
structures.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2022-10-01 16:32:48 +08:00
525 changed files with 75508 additions and 40922 deletions

7
.github/CODEOWNERS vendored Normal file
View File

@ -0,0 +1,7 @@
# A CODEOWNERS file uses a pattern that follows the same rules used in gitignore files.
# The pattern is followed by one or more GitHub usernames or team names using the
# standard @username or @org/team-name format. You can also refer to a user by an
# email address that has been added to their GitHub account, for example user@example.com
* @dragonflyoss/nydus-reviewers
.github @dragonflyoss/nydus-maintainers

44
.github/ISSUE_TEMPLATE.md vendored Normal file
View File

@ -0,0 +1,44 @@
## Additional Information
_The following information is very important in order to help us to help you. Omission of the following details may delay your support request or receive no attention at all._
### Version of nydus being used (nydusd --version)
<!-- Example:
Version: v2.2.0
Git Commit: a38f6b8d6257af90d59880265335dd55fab07668
Build Time: 2023-03-01T10:05:57.267573846Z
Profile: release
Rustc: rustc 1.66.1 (90743e729 2023-01-10)
-->
### Version of nydus-snapshotter being used (containerd-nydus-grpc --version)
<!-- Example:
Version: v0.5.1
Revision: a4b21d7e93481b713ed5c620694e77abac637abb
Go version: go1.18.6
Build time: 2023-01-28T06:05:42
-->
### Kernel information (uname -r)
_command result: uname -r_
### GNU/Linux Distribution, if applicable (cat /etc/os-release)
_command result: cat /etc/os-release_
### containerd-nydus-grpc command line used, if applicable (ps aux | grep containerd-nydus-grpc)
```
```
### client command line used, if applicable (such as: nerdctl, docker, kubectl, ctr)
```
```
### Screenshots (if applicable)
## Details about issue

21
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file
View File

@ -0,0 +1,21 @@
## Relevant Issue (if applicable)
_If there are Issues related to this PullRequest, please list it._
## Details
_Please describe the details of PullRequest._
## Types of changes
_What types of changes does your PullRequest introduce? Put an `x` in all the boxes that apply:_
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation Update (if none of the other choices apply)
## Checklist
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have updated the documentation accordingly.
- [ ] I have added tests to cover my changes.

23
.github/codecov.yml vendored Normal file
View File

@ -0,0 +1,23 @@
coverage:
status:
project:
default:
enabled: yes
target: auto # auto compares coverage to the previous base commit
# adjust accordingly based on how flaky your tests are
# this allows a 0.2% drop from the previous base commit coverage
threshold: 0.2%
patch: false
comment:
layout: "reach, diff, flags, files"
behavior: default
require_changes: true # if true: only post the comment if coverage changes
codecov:
require_ci_to_pass: false
notify:
wait_for_ci: true
# When modifying this file, please validate using
# curl -X POST --data-binary @codecov.yml https://codecov.io/validate

250
.github/copilot-instructions.md vendored Normal file
View File

@ -0,0 +1,250 @@
# GitHub Copilot Instructions for Nydus
## Project Overview
Nydus is a high-performance container image service that implements a content-addressable file system on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
### Key Components
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
- **nydusctl**: CLI client for managing and querying nydusd daemon
- **nydus-service**: Library crate for integrating Nydus services into other projects
## Architecture Guidelines
### Crate Structure
```
- api/ # Nydus Image Service APIs and data structures
- builder/ # Image building and conversion logic
- rafs/ # RAFS filesystem implementation
- service/ # Daemon and service management framework
- storage/ # Core storage subsystem with backends and caching
- utils/ # Common utilities and helper functions
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
```
### Key Technologies
- **Language**: Rust with memory safety focus
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
- **Compression**: LZ4, Gzip, Zstd
- **Async Runtime**: Tokio (current thread for io-uring compatibility)
## Code Style and Patterns
### Rust Conventions
- Use `#![deny(warnings)]` in all binary crates
- Follow standard Rust naming conventions (snake_case, PascalCase)
- Prefer `anyhow::Result` for error handling in applications
- Use custom error types with `thiserror` for libraries
- Apply `#[macro_use]` for frequently used external crates like `log`
- Always format the code with `cargo fmt`
- Use `clippy` for linting and follow its suggestions
### Error Handling
```rust
// Prefer anyhow for applications
use anyhow::{bail, Context, Result};
// Use custom error types for libraries
use thiserror::Error;
#[derive(Error, Debug)]
pub enum NydusError {
#[error("Invalid arguments: {0}")]
InvalidArguments(String),
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
}
```
### Logging Patterns
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
- Include context in error messages: `.with_context(|| "description")`
- Use `info!`, `warn!`, `error!` macros consistently
### Configuration Management
- Use `serde` for JSON configuration serialization/deserialization
- Support both file-based and environment variable configuration
- Validate configurations at startup with clear error messages
- Follow the `ConfigV2` pattern for versioned configurations
## Development Guidelines
### Storage Backend Development
- When implementing new storage backends:
  - Implement the `BlobBackend` trait
  - Support timeout, retry, and connection management
  - Add configuration in the backend config structure
  - Consider proxy support for high availability
  - Implement proper error handling and logging
### Daemon Service Development
- Use the `NydusDaemon` trait for service implementations
- Support save/restore for hot upgrade functionality
- Implement proper state machine transitions
- Use `DaemonController` for lifecycle management
### RAFS Filesystem Features
- Support both RAFS v5 and v6 formats
- Implement chunk-level deduplication
- Handle prefetch optimization for container startup
- Support overlay filesystem operations
- Maintain POSIX compatibility
### API Development
- Use versioned APIs (v1, v2) with backward compatibility
- Implement HTTP endpoints with proper error handling
- Support both Unix socket and TCP communication
- Follow OpenAPI specification patterns
## Testing Patterns
### Unit Tests
- Test individual functions and modules in isolation
- Use `#[cfg(test)]` modules within source files
- Mock external dependencies when necessary
- Focus on error conditions and edge cases
### Integration Tests
- Place integration tests in `tests/` directory
- Test complete workflows and component interactions
- Use temporary directories for filesystem operations
- Clean up resources properly in test teardown
### Smoke Tests
- Located in `smoke/` directory using Go
- Test real-world scenarios with actual images
- Verify performance and functionality
- Use Bats framework for shell-based testing
## Performance Considerations
### I/O Optimization
- Use async I/O patterns with Tokio
- Implement prefetching for predictable access patterns
- Optimize chunk size (default 1MB) for workload characteristics
- Consider io-uring for high-performance scenarios
### Memory Management
- Use `Arc<T>` for shared ownership of large objects
- Implement lazy loading for metadata structures
- Consider memory mapping for large files
- Profile memory usage in performance-critical paths
### Caching Strategy
- Implement blob caching with configurable backends
- Support compression in cache to save space
- Use chunk-level caching with efficient eviction policies
- Consider cache warming strategies for frequently accessed data
## Security Guidelines
### Data Integrity
- Implement end-to-end digest validation
- Support multiple hash algorithms (SHA256, Blake3)
- Verify chunk integrity on read operations
- Detect and prevent supply chain attacks
### Authentication
- Support registry authentication (basic auth, bearer tokens)
- Handle credential rotation and refresh
- Implement secure credential storage
- Support mutual TLS for backend connections
## Specific Code Patterns
### Configuration Loading
```rust
// Standard pattern for configuration loading
let config = match config_path {
Some(path) => ConfigV2::from_file(path)?,
None => ConfigV2::default(),
};
// Environment variable override
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
config.update_registry_auth_info(&auth);
}
```
### Daemon Lifecycle
```rust
// Standard daemon initialization pattern
let daemon = create_daemon(config, build_info)?;
DAEMON_CONTROLLER.set_daemon(daemon);
// Event loop management
if DAEMON_CONTROLLER.is_active() {
DAEMON_CONTROLLER.run_loop();
}
// Graceful shutdown
DAEMON_CONTROLLER.shutdown();
```
### Blob Access Pattern
```rust
// Standard blob read pattern
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
let blob_device = factory.get_device(&blob_info)?;
blob_device.read(&mut bio)?;
```
## Documentation Standards
### Code Documentation
- Document all public APIs with `///` comments
- Include examples in documentation
- Document safety requirements for unsafe code
- Explain complex algorithms and data structures
### Architecture Documentation
- Maintain design documents in `docs/` directory
- Update documentation when adding new features
- Include diagrams for complex interactions
- Document configuration options comprehensively
### Release Notes
- Document breaking changes clearly
- Include migration guides for major versions
- Highlight performance improvements
- List new features and bug fixes
## Container and Cloud Native Patterns
### OCI Compatibility
- Maintain compatibility with OCI image spec
- Support standard container runtimes (runc, Kata)
- Implement proper layer handling and manifest generation
- Support multi-architecture images
### Kubernetes Integration
- Design for Kubernetes CRI integration
- Support containerd snapshotter pattern
- Handle pod lifecycle events appropriately
- Implement proper resource cleanup
### Cloud Storage Integration
- Support major cloud providers (AWS S3, Alibaba OSS)
- Implement proper credential management
- Handle network interruptions gracefully
- Support cross-region replication patterns
## Build and Release
### Build Configuration
- Use `Cargo.toml` workspace configuration
- Support cross-compilation for multiple architectures
- Implement proper feature flags for optional components
- Use consistent dependency versioning
### Release Process
- Tag releases with semantic versioning
- Generate release binaries for supported platforms
- Update documentation with release notes
- Validate release artifacts before publishing
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.

40
.github/workflows/Dockerfile.cross vendored Normal file
View File

@ -0,0 +1,40 @@
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ARG RUST_VERSION=1.84.0
RUN apt-get update && apt-get install -y \
software-properties-common \
build-essential \
curl \
git \
libssl-dev \
pkg-config \
cmake \
gcc-riscv64-linux-gnu \
g++-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update && apt-get install -y \
gcc-14 \
g++-14 \
gcc-14-riscv64-linux-gnu \
g++-14-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /root
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
riscv64gc-unknown-linux-gnu
RUN mkdir -p ~/.cargo && echo '\
[target.riscv64gc-unknown-linux-gnu]\n\
linker = "riscv64-linux-gnu-gcc-14"' > ~/.cargo/config.toml
CMD ["/bin/bash"]

329
.github/workflows/benchmark.yml vendored Normal file
View File

@ -0,0 +1,329 @@
name: Benchmark
on:
schedule:
# Run at 03:00 clock UTC on Monday and Wednesday
- cron: "0 03 * * 1,3"
pull_request:
paths:
- '.github/workflows/benchmark.yml'
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
nydus-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
benchmark-description:
runs-on: ubuntu-latest
steps:
- name: Description
run: |
echo "## Benchmark Environment" > $GITHUB_STEP_SUMMARY
echo "| operating system | cpu | memory " >> $GITHUB_STEP_SUMMARY
echo "|:----------------:|:---:|:------ " >> $GITHUB_STEP_SUMMARY
echo "| ubuntu-22.04 | 2-core CPU (x86_64) | 7GB |" >> $GITHUB_STEP_SUMMARY
benchmark-oci:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=oci
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-oci.json
export SNAPSHOTTER=overlayfs
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: smoke/${{ matrix.image }}-oci.json
benchmark-fsversion-v5:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-5
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v5.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-fsversion-v5-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v5.json
benchmark-fsversion-v6:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=fs-version-6
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-fsversion-v6.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-fsversion-v6-${{ matrix.image }}
path: smoke/${{ matrix.image }}-fsversion-v6.json
benchmark-zran:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Environment
run: |
sudo bash misc/prepare.sh
- name: BenchMark Test
run: |
export BENCHMARK_TEST_IMAGE=${{ matrix.image }}:${{ matrix.tag }}
export BENCHMARK_MODE=zran
export BENCHMARK_METRIC_FILE=${{ matrix.image }}-zran.json
sudo -E make smoke-benchmark
- name: Save BenchMark Result
uses: actions/upload-artifact@v4
with:
name: benchmark-zran-${{ matrix.image }}
path: smoke/${{ matrix.image }}-zran.json
benchmark-result:
runs-on: ubuntu-latest
needs: [benchmark-oci, benchmark-fsversion-v5, benchmark-fsversion-v6, benchmark-zran]
strategy:
matrix:
include:
- image: wordpress
tag: 6.1.1
- image: node
tag: 19.8
- image: python
tag: 3.10.7
- image: golang
tag: 1.19.3
- image: ruby
tag: 3.1.3
- image: amazoncorretto
tag: 8-al2022-jdk
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download benchmark-oci
uses: actions/download-artifact@v4
with:
name: benchmark-oci-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v5
uses: actions/download-artifact@v4
with:
name: benchmark-fsversion-v5-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-fsversion-v6
uses: actions/download-artifact@v4
with:
name: benchmark-fsversion-v6-${{ matrix.image }}
path: benchmark-result
- name: Download benchmark-zran
uses: actions/download-artifact@v4
with:
name: benchmark-zran-${{ matrix.image }}
path: benchmark-result
- name: Benchmark Summary
run: |
case ${{matrix.image}} in
"wordpress")
echo "### workload: wait the 80 port response" > $GITHUB_STEP_SUMMARY
;;
"node")
echo "### workload: node index.js; wait the 80 port response" > $GITHUB_STEP_SUMMARY
;;
"python")
echo "### workload: python -c 'print("hello")'" > $GITHUB_STEP_SUMMARY
;;
"golang")
echo "### workload: go run main.go" > $GITHUB_STEP_SUMMARY
;;
"ruby")
echo "### workload: ruby -e "puts \"hello\""" > $GITHUB_STEP_SUMMARY
;;
"amazoncorretto")
echo "### workload: javac Main.java; java Main" > $GITHUB_STEP_SUMMARY
;;
esac
cd benchmark-result
metric_files=(
"${{ matrix.image }}-oci.json"
"${{ matrix.image }}-fsversion-v5.json"
"${{ matrix.image }}-fsversion-v6.json"
"${{ matrix.image }}-zran.json"
)
echo "| bench-result | e2e-time(s) | read-count | read-amount(MB) | image-size(MB) |convert-time(s)|" >> $GITHUB_STEP_SUMMARY
echo "|:-------------|:-----------:|:----------:|:---------------:|:--------------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for file in "${metric_files[@]}"; do
name=$(basename "$file" .json | sed 's/^[^-]*-\(.*\)$/\1/')
data=$(jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024)) \(.conversion_elapsed / 1e9)"' "$file" | \
awk '{ printf "%.2f | %.0f | %.2f | %.2f | %.2f", $1, $2, $3, $4, $5 }')
echo "| $name | $data |" >> $GITHUB_STEP_SUMMARY
done
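For reference, the jq/awk pipeline above turns each metric file into one markdown table row: times are emitted in nanoseconds and sizes in bytes, so they are scaled to seconds and MB. A minimal local sketch with an invented metric file (the field names match the ones queried above; the values are made up):
cat > wordpress-oci.json <<'EOF'
{"e2e_time": 12340000000, "read_count": 1523, "read_amount_total": 73400320, "image_size": 209715200, "conversion_elapsed": 98760000000}
EOF
jq -r '. | "\(.e2e_time / 1e9) \(.read_count) \(.read_amount_total / (1024 * 1024)) \(.image_size / (1024 * 1024)) \(.conversion_elapsed / 1e9)"' wordpress-oci.json \
  | awk '{ printf "| oci | %.2f | %.0f | %.2f | %.2f | %.2f |\n", $1, $2, $3, $4, $5 }'
# -> | oci | 12.34 | 1523 | 70.00 | 200.00 | 98.76 |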


@ -1,83 +0,0 @@
name: CI
on:
push:
branches: ["*"]
pull_request:
branches: ["*"]
schedule:
# Run daily sanity check at 23:08 clock UTC
- cron: "8 23 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-ut:
runs-on: ubuntu-latest
strategy:
matrix:
go-version: [1.17.x, 1.18.x]
env:
DOCKER: false
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: cache go mod
uses: actions/cache@v2
with:
path: /go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/docker-nydus-graphdriver/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
- name: test contrib UT
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.49.0
make contrib-test
smoke:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Cache Nydus
uses: Swatinem/rust-cache@v1
with:
target-dir: |
./target
cache-on-failure: true
key: ${{ runner.os }}-cargo-amd64
- name: Cache Docker Layers
uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- name: Smoke Test
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.49.0
echo Cargo Home: $CARGO_HOME
echo Running User: $(whoami)
make docker-smoke
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
macos-ut:
runs-on: macos-11
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v2
- name: build and check
run: |
rustup component add rustfmt && rustup component add clippy
make
make ut
deny:
name: Cargo Deny
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v2
- uses: EmbarkStudios/cargo-deny-action@v1


@ -1,4 +1,4 @@
- name: Convert Top Docker Hub Images
+ name: Convert & Check Images
on:
schedule:
@ -14,73 +14,376 @@ env:
FSCK_PATCH_PATH: misc/top_images/fsck.patch
jobs:
- convert-images:
+ nydusify-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.61.0
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd/nydusify
nydus-build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
- uses: dsherret/rust-toolchain-file@v1
- name: Build Nydus
run: |
make release
- name: Upload Nydus Binaries
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
target/release/nydus-image
target/release/nydusd
fsck-erofs-build:
runs-on: ubuntu-latest
# don't run this action on forks
if: github.repository_owner == 'dragonflyoss'
steps:
- name: Checkout repository
- uses: actions/checkout@v2
+ uses: actions/checkout@v4
- name: Install Nydus binaries
run: |
NYDUS_VERSION=$(curl --silent "https://api.github.com/repos/dragonflyoss/image-service/releases/latest" | grep -Po '"tag_name": "\K.*?(?=")')
wget https://github.com/dragonflyoss/image-service/releases/download/$NYDUS_VERSION/nydus-static-$NYDUS_VERSION-linux-amd64.tgz
tar xzf nydus-static-$NYDUS_VERSION-linux-amd64.tgz
sudo cp nydus-static/nydusify nydus-static/nydus-image /usr/local/bin/
sudo cp nydus-static/nydusd /usr/local/bin/nydusd
- name: Log in to the container registry
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build fsck.erofs
run: |
sudo apt-get update && sudo apt-get install -y build-essential git autotools-dev automake libtool pkg-config uuid-dev liblz4-dev
git clone https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
- cd erofs-utils && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
+ cd erofs-utils && git checkout v1.6 && git apply ../${{ env.FSCK_PATCH_PATH }} && ./autogen.sh && ./configure && make && cd ..
sudo cp erofs-utils/fsck/fsck.erofs /usr/local/bin/
- - name: Convert RAFS v5 images
+ - name: Upload fsck.erofs
uses: actions/upload-artifact@v4
with:
name: fsck-erofs-artifact
path: |
/usr/local/bin/fsck.erofs
convert-zran:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check zran images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-zran
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-oci-ref"
ghcr_repo=${{ env.REGISTRY }}/${{ env.ORGANIZATION }}
# push oci image to ghcr/local for zran reference
sudo docker pull $I:latest
sudo docker tag $I:latest $ghcr_repo/$I
sudo docker tag $I:latest localhost:5000/$I
sudo DOCKER_CONFIG=$HOME/.docker docker push $ghcr_repo/$I
sudo docker push localhost:5000/$I
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--oci-ref \
--source $ghcr_repo/$I \
--target $ghcr_repo/$I:nydus-nightly-oci-ref \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--oci-ref \
--source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref \
--platform linux/amd64,linux/arm64 \
--output-json convert-zran/${I}.json
# check zran image and referenced oci image
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check \
--source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
convert-native-v5:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Convert and check RAFS v5 images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v5
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v5"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v5 \
- --build-cache ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/nydus-build-cache:$I-v5 \
- --fs-version 5
+ --fs-version 5 \
+ --platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v5 \
- --fs-version 5
+ --fs-version 5 \
+ --platform linux/amd64,linux/arm64 \
+ --output-json convert-native-v5/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v5
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
convert-native-v6:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check RAFS v6 images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6 \
- --build-cache ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/nydus-build-cache:$I-v6 \
- --fs-version 6
+ --fs-version 6 \
+ --platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6 \
- --fs-version 6
+ --fs-version 6 \
+ --platform linux/amd64,linux/arm64 \
+ --output-json convert-native-v6/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6
- sudo fsck.erofs -d1 output/nydus_bootstrap
+ sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
convert-native-v6-batch:
runs-on: ubuntu-latest
needs: [nydusify-build, nydus-build, fsck-erofs-build]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Login ghcr registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: /usr/local/bin
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: /usr/local/bin
- name: Download fsck.erofs
uses: actions/download-artifact@v4
with:
name: fsck-erofs-artifact
path: /usr/local/bin
- name: Convert and check RAFS v6 batch images
run: |
sudo chmod +x /usr/local/bin/nydus*
sudo chmod +x /usr/local/bin/fsck.erofs
sudo docker run -d --restart=always -p 5000:5000 registry
sudo mkdir convert-native-v6-batch
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
echo "converting $I:latest to $I:nydus-nightly-v6-batch"
# for pre-built images
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target ${{ env.REGISTRY }}/${{ env.ORGANIZATION }}/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64
# use local registry for speed
sudo DOCKER_CONFIG=$HOME/.docker nydusify convert \
--source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch \
--fs-version 6 \
--batch-size 0x100000 \
--platform linux/amd64,linux/arm64 \
--output-json convert-native-v6-batch/${I}.json
sudo rm -rf ./tmp
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch
sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
uses: actions/upload-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
convert-metric:
runs-on: ubuntu-latest
needs: [convert-zran, convert-native-v5, convert-native-v6, convert-native-v6-batch]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download Zran Metric
uses: actions/download-artifact@v4
with:
name: convert-zran-metric
path: convert-zran
- name: Download V5 Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v5-metric
path: convert-native-v5
- name: Download V6 Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-metric
path: convert-native-v6
- name: Download V6 Batch Metric
uses: actions/download-artifact@v4
with:
name: convert-native-v6-batch-metric
path: convert-native-v6-batch
- name: Summary
run: |
echo "## Image Size(MB)" > $GITHUB_STEP_SUMMARY
echo "> Compare the size of OCI image and Nydus image."
echo "|image name|oci/nydus-zran|oci/nydus-v5|oci/nydus-v6|oci/nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:--------:|:------------:|:----------:|:----------:|:-------------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-zran/${I}.json) / 1048576")")
zranTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-zran/${I}.json) / 1048576")")
v5SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v5/${I}.json) / 1048576")")
v5TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v5/${I}.json) / 1048576")")
v6SourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6/${I}.json) / 1048576")")
v6TargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6/${I}.json) / 1048576")")
batchSourceImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
batchTargetImageSize=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' convert-native-v6-batch/${I}.json) / 1048576")")
echo "|${I}:latest|${zranSourceImageSize}/${zranTargetImageSize}|${v5SourceImageSize}/${v5TargetImageSize}|${v6SourceImageSize}/${v6TargetImageSize}|${batchSourceImageSize}/${batchTargetImageSize}|" >> $GITHUB_STEP_SUMMARY
done
echo "## Conversion Time(ms)" >> $GITHUB_STEP_SUMMARY
echo "> Time elapsed to convert OCI image to Nydus image."
echo "|image name|nydus-zran|nydus-v5|nydus-v6|nydus-batch|" >> $GITHUB_STEP_SUMMARY
echo "|:---:|:--:|:-------:|:-------:|:-------:|" >> $GITHUB_STEP_SUMMARY
for I in $(cat ${{ env.IMAGE_LIST_PATH }}); do
zranConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-zran/${I}.json) / 1000000")")
v5ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v5/${I}.json) / 1000000")")
v6ConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6/${I}.json) / 1000000")")
batchConversionElapsed=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' convert-native-v6-batch/${I}.json) / 1000000")")
echo "|${I}:latest|${zranConversionElapsed}|${v5ConversionElapsed}|${v6ConversionElapsed}|${batchConversionElapsed}|" >> $GITHUB_STEP_SUMMARY
done
- uses: geekyeggo/delete-artifact@v2
with:
name: '*'
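The Summary step above reads the metrics written by nydusify's --output-json: sizes are raw bytes and ConversionElapsed is in nanoseconds, so they are scaled to MB and ms with bc. A small standalone sketch of that conversion (sample values only, not real measurements):
json='{"SourceImageSize": 209715200, "TargetImageSize": 104857600, "ConversionElapsed": 32500000000}'
src=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.SourceImageSize' <<< "$json") / 1048576")")
dst=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.TargetImageSize' <<< "$json") / 1048576")")
ms=$(printf "%0.2f" "$(bc <<< "scale=2; $(jq -r '.ConversionElapsed' <<< "$json") / 1000000")")
echo "oci ${src}MB -> nydus ${dst}MB, converted in ${ms}ms"
# -> oci 200.00MB -> nydus 100.00MB, converted in 32500.00ms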


@ -1,112 +0,0 @@
name: Nydus Integration Test
on:
schedule:
# Do conversion every day at 00:03 clock UTC
- cron: "3 0 * * *"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch: [amd64]
fs_version: [5, 6]
steps:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.18
- name: Setup pytest
run: |
sudo apt install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3
sudo python3 -m pip install --upgrade pip
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
- name: containerd runc and crictl
run: |
sudo wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz
sudo tar zxvf ./crictl-v1.17.0-linux-amd64.tar.gz -C /usr/local/bin
sudo wget https://github.com/containerd/containerd/releases/download/v1.4.3/containerd-1.4.3-linux-amd64.tar.gz
mkdir containerd
sudo tar -zxf ./containerd-1.4.3-linux-amd64.tar.gz -C ./containerd
sudo mv ./containerd/bin/* /usr/bin/
sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64 -O /usr/bin/runc
sudo chmod +x /usr/bin/runc
- name: Set up ossutils
run: |
sudo wget https://gosspublic.alicdn.com/ossutil/1.7.13/ossutil64 -O /usr/bin/ossutil64
sudo chmod +x /usr/bin/ossutil64
- uses: actions/checkout@v3
- name: Cache cargo
uses: Swatinem/rust-cache@v1
with:
target-dir: |
./target
cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- name: Build nydus-rs
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.1 cross
rustup component add rustfmt clippy
make -e RUST_TARGET=$RUST_TARGET -e CARGO=cross static-release
make release -C contrib/nydus-backend-proxy/
sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
pwd
ls -lh target/$RUST_TARGET/release
- name: Set up anchor file
env:
OSS_AK_ID: ${{ secrets.OSS_TEST_AK_ID }}
OSS_AK_SEC: ${{ secrets.OSS_TEST_AK_SECRET }}
FS_VERSION: ${{ matrix.fs_version }}
run: |
sudo mkdir -p /home/runner/nydus-test-workspace
sudo mkdir -p /home/runner/nydus-test-workspace/proxy_blobs
sudo cat > /home/runner/work/image-service/image-service/contrib/nydus-test/anchor_conf.json << EOF
{
"workspace": "/home/runner/nydus-test-workspace",
"nydus_project": "/home/runner/work/image-service/image-service",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "localhost:5000",
"registry_namespace": "",
"registry_auth": "YOURAUTH==",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/home/runner/nydus-test-workspace/proxy_blobs"
},
"oss": {
"endpoint": "oss-cn-beijing.aliyuncs.com",
"ak_id": "$OSS_AK_ID",
"ak_secret": "$OSS_AK_SEC",
"bucket": "nydus-ci"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd",
"ossutil_bin": "/usr/bin/ossutil64"
},
"fs_version": "$FS_VERSION",
"logging_file": "stderr",
"target": "musl"
}
EOF
- name: run test_api
run: |
cd /home/runner/work/image-service/image-service/contrib/nydus-test
sudo mkdir -p /blobdir
sudo python3 nydus_test_config.py --dist fs_structure.yaml
sudo pytest -vs --durations=0 functional-test/test_api.py \
functional-test/test_nydus.py \
functional-test/test_layered_image.py

.github/workflows/miri.yml vendored Normal file

@ -0,0 +1,45 @@
name: Miri Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 clock UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
nydus-unit-test-with-miri:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Install Miri
run: |
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
- name: Unit Test with Miri
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
grep -C 2 'Undefined Behavior' miri-ut.log
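Roughly the same check can be run outside CI. A sketch, assuming a nightly toolchain and the repository's miri-ut-nextest make target referenced above:
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
sudo -E RUSTUP=$(which rustup) make miri-ut-nextest 2>&1 | tee miri-ut.log
# show context around any report; plain grep exits non-zero when nothing matches
grep -C 2 'Undefined Behavior' miri-ut.log || true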


@ -1,4 +1,4 @@
- name: release
+ name: Release
on:
push:
@ -19,28 +19,60 @@ jobs:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v4
- name: Cache cargo
- uses: Swatinem/rust-cache@v1
+ uses: Swatinem/rust-cache@v2
with:
- target-dir: |
- ./target
cache-on-failure: true
- key: ${{ runner.os }}-cargo-${{ matrix.arch }}
+ shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- - name: Build nydus-rs
+ - uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build nydus-rs Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name : Build Nydus-rs RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: | run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu") declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]} RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --version 0.2.4 cross
rustup component add rustfmt clippy
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo mv target/$RUST_TARGET/release/nydus-image . sudo mv target/$RUST_TARGET/release/nydus-image .
sudo mv target/$RUST_TARGET/release/nydusctl . sudo mv target/$RUST_TARGET/release/nydusctl .
sudo cp -r misc/configs . sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/ sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v2 uses: actions/upload-artifact@v4
with: with:
name: nydus-artifacts-linux-${{ matrix.arch }} name: nydus-artifacts-linux-${{ matrix.arch }}
path: | path: |
@ -50,27 +82,33 @@ jobs:
configs configs
nydus-macos: nydus-macos:
runs-on: macos-11 runs-on: macos-13
strategy: strategy:
matrix: matrix:
arch: [amd64, arm64] arch: [amd64, arm64]
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v4
- name: Cache cargo - name: Cache cargo
uses: Swatinem/rust-cache@v1 uses: Swatinem/rust-cache@v2
with: with:
target-dir: |
./target
cache-on-failure: true cache-on-failure: true
key: ${{ runner.os }}-cargo-${{ matrix.arch }} shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
- name: build - name: build
run: | run: |
rustup component add rustfmt clippy if [[ "${{matrix.arch}}" == "amd64" ]]; then
make -e INSTALL_DIR_PREFIX=. install RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo cp -r misc/configs . sudo cp -r misc/configs .
sudo chown -R $(id -un):$(id -gn) . ~/.cargo/ sudo chown -R $(id -un):$(id -gn) . ~/.cargo/
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v2 uses: actions/upload-artifact@v4
with: with:
name: nydus-artifacts-darwin-${{ matrix.arch }} name: nydus-artifacts-darwin-${{ matrix.arch }}
path: | path: |
@ -87,31 +125,22 @@ jobs:
env: env:
DOCKER: false DOCKER: false
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v4
- uses: actions/setup-go@v2 - name: Setup Golang
uses: actions/setup-go@v5
with: with:
go-version: '1.18' go-version-file: 'go.work'
- name: cache go mod cache-dependency-path: "**/*.sum"
uses: actions/cache@v2
with:
path: /go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/contrib/nydusify/go.sum', '**/contrib/ctr-remote/go.sum', '**/contrib/docker-nydus-graphdriver/go.sum', '**/contrib/nydus-overlayfs/go.sum') }}
restore-keys: |
${{ runner.os }}-go
- name: build contrib go components - name: build contrib go components
run: | run: |
make -e GOARCH=${{ matrix.arch }} contrib-release make -e GOARCH=${{ matrix.arch }} contrib-release
sudo mv contrib/ctr-remote/bin/ctr-remote .
sudo mv contrib/docker-nydus-graphdriver/bin/nydus_graphdriver .
sudo mv contrib/nydusify/cmd/nydusify . sudo mv contrib/nydusify/cmd/nydusify .
sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs . sudo mv contrib/nydus-overlayfs/bin/nydus-overlayfs .
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v2 uses: actions/upload-artifact@v4
with: with:
name: nydus-artifacts-linux-${{ matrix.arch }} name: nydus-artifacts-linux-${{ matrix.arch }}-contrib
path: | path: |
ctr-remote
nydus_graphdriver
nydusify nydusify
nydus-overlayfs nydus-overlayfs
containerd-nydus-grpc containerd-nydus-grpc
@ -125,9 +154,10 @@ jobs:
needs: [nydus-linux, contrib-linux]
steps:
- name: download artifacts
- uses: actions/download-artifact@v2
+ uses: actions/download-artifact@v4
with:
- name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
+ pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
+ merge-multiple: true
path: nydus-static
- name: prepare release tarball
run: |
@ -141,9 +171,9 @@
sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts
- uses: actions/upload-artifact@v2
+ uses: actions/upload-artifact@v4
with:
- name: nydus-release-tarball
+ name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: |
${{ env.tarball }}
${{ env.tarball_shasum }}
@ -153,12 +183,12 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
strategy: strategy:
matrix: matrix:
arch: [amd64] arch: [amd64, arm64]
os: [darwin] os: [darwin]
needs: [nydus-macos] needs: [nydus-macos]
steps: steps:
- name: download artifacts - name: download artifacts
uses: actions/download-artifact@v2 uses: actions/download-artifact@v4
with: with:
name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }} name: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}
path: nydus-static path: nydus-static
@ -174,9 +204,9 @@ jobs:
sha256sum $tarball > $shasum sha256sum $tarball > $shasum
echo "tarball_shasum=${shasum}" >> $GITHUB_ENV echo "tarball_shasum=${shasum}" >> $GITHUB_ENV
- name: store-artifacts - name: store-artifacts
uses: actions/upload-artifact@v2 uses: actions/upload-artifact@v4
with: with:
name: nydus-release-tarball name: nydus-release-tarball-${{ matrix.os }}-${{ matrix.arch }}
path: | path: |
${{ env.tarball }} ${{ env.tarball }}
${{ env.tarball_shasum }} ${{ env.tarball_shasum }}
@ -186,15 +216,15 @@ jobs:
needs: [prepare-tarball-linux, prepare-tarball-darwin] needs: [prepare-tarball-linux, prepare-tarball-darwin]
steps: steps:
- name: download artifacts - name: download artifacts
uses: actions/download-artifact@v2 uses: actions/download-artifact@v4
with: with:
name: nydus-release-tarball pattern: nydus-release-tarball-*
merge-multiple: true
path: nydus-tarball path: nydus-tarball
- name: prepare release env - name: prepare release env
run: | run: |
echo "tarballs<<EOF" >> $GITHUB_ENV echo "tarballs<<EOF" >> $GITHUB_ENV
cnt=0 for I in $(ls nydus-tarball);do echo "nydus-tarball/${I}" >> $GITHUB_ENV; done
for I in $(ls nydus-tarball);do cnt=$((cnt+1)); echo "nydus-tarball/${I}" >> $GITHUB_ENV; done
echo "EOF" >> $GITHUB_ENV echo "EOF" >> $GITHUB_ENV
tag=$(echo $GITHUB_REF | cut -d/ -f3-) tag=$(echo $GITHUB_REF | cut -d/ -f3-)
echo "tag=${tag}" >> $GITHUB_ENV echo "tag=${tag}" >> $GITHUB_ENV
@ -205,7 +235,91 @@ jobs:
with: with:
name: "Nydus Image Service ${{ env.tag }}" name: "Nydus Image Service ${{ env.tag }}"
body: | body: |
Mirror (update in 10 min): https://registry.npmmirror.com/binary.html?path=nydus/${{ env.tag }}/ Binaries download mirror (sync within a few hours): https://registry.npmmirror.com/binary.html?path=nydus/${{ env.tag }}/
generate_release_notes: true generate_release_notes: true
files: | files: |
${{ env.tarballs }} ${{ env.tarballs }}
goreleaser:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
arch: [amd64, arm64]
os: [linux]
needs: [nydus-linux, contrib-linux]
permissions:
contents: write
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
submodules: recursive
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare context
run: |
chmod +x nydus-static/*
export GOARCH=${{ matrix.arch }}
echo "GOARCH: $GOARCH"
sh ./goreleaser.sh
- name: Check GoReleaser config
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
with:
version: latest
args: check
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
id: run-goreleaser
with:
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate subject
id: hash
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
if test "$hashes" = ""; then # goreleaser < v1.13.0
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
hashes=$(cat $checksum_file | base64 -w0)
fi
echo "hashes=$hashes" >> $GITHUB_OUTPUT
- name: Set tag output
id: tag
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
provenance:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
needs: [goreleaser]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
draft-release: true
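For users consuming a release, each uploaded tarball is accompanied by the sha256 checksum produced above, so a download can be verified before unpacking. A sketch with illustrative file names (the exact checksum file name on the release page may differ):
sha256sum -c nydus-static-v2.3.0-linux-amd64.tgz.sha256sum
tar xzf nydus-static-v2.3.0-linux-amd64.tgz
ls nydus-static/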

.github/workflows/smoke.yml vendored Normal file

@ -0,0 +1,386 @@
name: Smoke Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 clock UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
contrib-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
make -e DOCKER=false GOARCH=${{ matrix.arch }} contrib-release
- name: Upload Nydusify
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
contrib-lint:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- path: contrib/nydusify
- path: contrib/nydus-overlayfs
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache: false
- name: Lint
uses: golangci/golangci-lint-action@v6
with:
version: v1.64
working-directory: ${{ matrix.path }}
args: --timeout=10m --verbose
nydus-build:
runs-on: ubuntu-latest
strategy:
matrix:
arch: [amd64, arm64, ppc64le, riscv64]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: Read Rust toolchain version
id: set_toolchain_version
run: |
RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
shell: bash
- name: Set up Docker Buildx
if: matrix.arch == 'riscv64'
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
if: matrix.arch == 'riscv64'
uses: docker/build-push-action@v6
with:
context: .
file: ./.github/workflows/Dockerfile.cross
push: false
load: true
tags: rust-cross-compile-riscv64:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
- name: Build Nydus Non-RISC-V
if: matrix.arch != 'riscv64'
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
cargo install --locked --version 0.2.5 cross
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
- name: Build Nydus RISC-V
if: matrix.arch == 'riscv64'
run: |
RUST_TARGET=riscv64gc-unknown-linux-gnu
docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
sudo mv target/$RUST_TARGET/release/nydusd .
sudo mv target/$RUST_TARGET/release/nydus-image .
- name: Upload Nydus Binaries
if: matrix.arch == 'amd64'
uses: actions/upload-artifact@v4
with:
name: nydus-artifact
path: |
nydus-image
nydusd
nydusd-build-macos:
runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
steps:
- uses: actions/checkout@v4
- name: Cache cargo
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
- name: build
run: |
if [[ "${{matrix.arch}}" == "amd64" ]]; then
RUST_TARGET="x86_64-apple-darwin"
else
RUST_TARGET="aarch64-apple-darwin"
fi
cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
nydus-integration-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Docker Cache
uses: jpribyl/action-docker-layer-caching@v0.1.0
continue-on-error: true
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: |
target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Older Binaries
id: prepare-binaries
run: |
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
versions=(v0.1.0 ${NYDUS_STABLE_VERSION})
version_archs=(v0.1.0-x86_64 ${NYDUS_STABLE_VERSION}-linux-amd64)
for i in ${!versions[@]}; do
version=${versions[$i]}
version_arch=${version_archs[$i]}
wget -q https://github.com/dragonflyoss/nydus/releases/download/$version/nydus-static-$version_arch.tgz
sudo mkdir nydus-$version /usr/bin/nydus-$version
sudo tar xzf nydus-static-$version_arch.tgz -C nydus-$version
sudo cp -r nydus-$version/nydus-static/* /usr/bin/nydus-$version/
done
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Free Disk Space
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: true
swap-storage: true
- name: Integration Test
run: |
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
sudo install -D -m 755 contrib/nydusify/cmd/nydusify /usr/bin/nydus-latest
sudo install -D -m 755 target/release/nydusd target/release/nydus-image /usr/bin/nydus-latest
sudo bash misc/prepare.sh
export NYDUS_STABLE_VERSION=$(curl https://api.github.com/repos/Dragonflyoss/nydus/releases/latest | jq -r '.tag_name')
export NYDUS_STABLE_VERSION_EXPORT="${NYDUS_STABLE_VERSION//./_}"
versions=(v0.1.0 ${NYDUS_STABLE_VERSION} latest)
version_exports=(v0_1_0 ${NYDUS_STABLE_VERSION_EXPORT} latest)
for i in ${!version_exports[@]}; do
version=${versions[$i]}
version_export=${version_exports[$i]}
export NYDUS_BUILDER_$version_export=/usr/bin/nydus-$version/nydus-image
export NYDUS_NYDUSD_$version_export=/usr/bin/nydus-$version/nydusd
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8
sudo -E make smoke-only
nydus-unit-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Unit Test
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
contrib-unit-test-coverage:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: Unit Test
run: |
make -e DOCKER=false contrib-test
- name: Upload contrib coverage file
uses: actions/upload-artifact@v4
with:
name: contrib-test-coverage-artifact
path: |
contrib/nydusify/coverage.txt
nydus-unit-test-coverage:
runs-on: ubuntu-latest
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Generate code coverage
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
- name: Upload nydus coverage file
uses: actions/upload-artifact@v4
with:
name: nydus-test-coverage-artifact
path: |
codecov.json
upload-coverage-to-codecov:
runs-on: ubuntu-latest
needs: [contrib-unit-test-coverage, nydus-unit-test-coverage]
steps:
- uses: actions/checkout@v4
- name: Download nydus coverage file
uses: actions/download-artifact@v4
with:
name: nydus-test-coverage-artifact
- name: Download contrib coverage file
uses: actions/download-artifact@v4
with:
name: contrib-test-coverage-artifact
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
files: ./codecov.json,./coverage.txt
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
fail_ci_if_error: true
nydus-cargo-deny:
name: cargo-deny
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: EmbarkStudios/cargo-deny-action@v2
performance-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
strategy:
matrix:
include:
- mode: fs-version-5
- mode: fs-version-6
- mode: zran
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh
- name: Performance Test
run: |
export PERFORMANCE_TEST_MODE=${{ matrix.mode }}
sudo -E make smoke-performance
takeover-test:
runs-on: ubuntu-latest
needs: [contrib-build, nydus-build]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download Nydus
uses: actions/download-artifact@v4
with:
name: nydus-artifact
path: target/release
- name: Download Nydusify
uses: actions/download-artifact@v4
with:
name: nydusify-artifact
path: contrib/nydusify/cmd
- name: Prepare Nydus Container Environment
run: |
sudo bash misc/prepare.sh takeover_test
- name: Takeover Test
run: |
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover
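One non-obvious detail in the Integration Test step above is how older binaries are wired in: the release tag is flattened (dots become underscores) and used as an env-var suffix that the smoke suite looks up. A sketch for a hypothetical stable tag v2.2.4 (paths and tag are illustrative):
export NYDUS_BUILDER_v0_1_0=/usr/bin/nydus-v0.1.0/nydus-image
export NYDUS_BUILDER_v2_2_4=/usr/bin/nydus-v2.2.4/nydus-image
export NYDUS_NYDUSD_v2_2_4=/usr/bin/nydus-v2.2.4/nydusd
export NYDUS_NYDUSIFY_latest=/usr/bin/nydus-latest/nydusify
sudo -E make smoke-only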

.github/workflows/stale.yaml vendored Normal file

@ -0,0 +1,31 @@
name: Close stale issues and PRs
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
id: stale
with:
delete-branch: true
days-before-close: 7
days-before-stale: 60
days-before-pr-close: 7
days-before-pr-stale: 60
stale-issue-label: "stale"
exempt-issue-labels: bug,wip
exempt-pr-labels: bug,wip
exempt-all-milestones: true
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'

.gitignore vendored

@ -1,7 +1,14 @@
**/target* **/target*
**/*.rs.bk **/*.rs.bk
/.vscode **/.vscode
.idea .idea
.cargo .cargo
**/.pyc **/.pyc
__pycache__ __pycache__
.DS_Store
go.work.sum
dist/
nydus-static/
.goreleaser.yml
metadata.db
tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a

ADOPTERS.md Normal file

@ -0,0 +1,16 @@
## CNCF Dragonfly Nydus Adopters
A non-exhaustive list of Nydus adopters is provided below.
Please kindly share your experience about Nydus with us and help us to improve Nydus ❤️.
**_[Alibaba Cloud](https://www.alibabacloud.com)_** - Aliyun serverless image pull time drops from 20 seconds to 0.8 seconds.
**_[Ant Group](https://www.antgroup.com)_** - Serving large-scale clusters with millions of container creations each day.
**_[ByteDance](https://www.bytedance.com)_** - Serving container image acceleration in Technical Infrastructure of ByteDance.
**_[KuaiShou](https://www.kuaishou.com)_** - Starting to deploy millions of containers with Dragonfly and Nydus.
**_[Yue Miao](https://www.laiyuemiao.com)_** - Microservice startup time has been greatly improved, and network consumption has been reduced.
**_[CoreWeave](https://coreweave.com/)_** - Dramatically reduced the pull time of container images that embed machine learning models.

Cargo.lock generated
File diff suppressed because it is too large.


@ -1,77 +1,130 @@
[package] [package]
name = "nydus-rs" name = "nydus-rs"
version = "2.1.0-rc.3.1" # will be overridden by real git tag during cargo build
version = "0.0.0-git"
description = "Nydus Image Service" description = "Nydus Image Service"
authors = ["The Nydus Developers"] authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause" license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/" homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service" repository = "https://github.com/dragonflyoss/nydus"
edition = "2018" exclude = ["contrib/", "smoke/", "tests/"]
edition = "2021"
resolver = "2" resolver = "2"
build = "build.rs"
[profile.release] [profile.release]
panic = "abort" panic = "abort"
[[bin]]
name = "nydusctl"
path = "src/bin/nydusctl/main.rs"
[[bin]] [[bin]]
name = "nydusd" name = "nydusd"
path = "src/bin/nydusd/main.rs" path = "src/bin/nydusd/main.rs"
[[bin]]
name = "nydus-image"
path = "src/bin/nydus-image/main.rs"
[lib] [lib]
name = "nydus" name = "nydus"
path = "src/lib.rs" path = "src/lib.rs"
[dependencies] [dependencies]
rlimit = "0.8.3" anyhow = "1"
log = "0.4.8" clap = { version = "4.0.18", features = ["derive", "cargo"] }
flexi_logger = { version = "0.25", features = ["compress"] }
fuse-backend-rs = "^0.12.0"
hex = "0.4.3"
hyper = "0.14.11"
hyperlocal = "0.8.0"
lazy_static = "1"
libc = "0.2" libc = "0.2"
vmm-sys-util = "0.10.0" log = "0.4.8"
clap = "2.33" log-panics = { version = "2.1.0", features = ["with-backtrace"] }
# pin regex to fix RUSTSEC-2022-0013 mio = { version = "0.8", features = ["os-poll", "os-ext"] }
regex = "1.5.5" nix = "0.24.0"
rlimit = "0.9.0"
rusqlite = { version = "0.30.0", features = ["bundled"] }
serde = { version = "1.0.110", features = ["serde_derive", "rc"] } serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.51" serde_json = "1.0.51"
sha2 = "0.10.2" tar = "0.4.40"
time = { version = "0.3.14", features = ["serde-human-readable"] } tokio = { version = "1.35.1", features = ["macros"] }
lazy_static = "1.4.0"
xattr = "0.2.2"
nix = "0.24.0"
anyhow = "1.0.35"
base64 = "0.13.0"
rust-fsm = "0.6.0"
vm-memory = { version = "0.9.0", features = ["backend-mmap"], optional = true }
openssl = { version = "0.10.40", features = ["vendored"] }
# pin openssl-src to bring in fix for https://rustsec.org/advisories/RUSTSEC-2022-0032
openssl-src = { version = "111.22" }
hyperlocal = "0.8.0"
tokio = { version = "1.18.2", features = ["macros"] }
hyper = "0.14.11"
# pin rand_core to bring in fix for https://rustsec.org/advisories/RUSTSEC-2021-0023
rand_core = "0.6.2"
tar = "0.4.38"
mio = { version = "0.8", features = ["os-poll", "os-ext"] }
fuse-backend-rs = { version = "0.9" } # Build static linked openssl library
vhost = { version = "0.4.0", features = ["vhost-user-slave"], optional = true } openssl = { version = '0.10.72', features = ["vendored"] }
vhost-user-backend = { version = "0.5.1", optional = true }
virtio-bindings = { version = "0.1", features = ["virtio-v5_0_0"], optional = true }
virtio-queue = { version = "0.4.0", optional = true }
nydus-api = { version = "0.1.0", path = "api" } nydus-api = { version = "0.4.0", path = "api", features = [
nydus-app = { version = "0.3.0", path = "app" } "error-backtrace",
nydus-error = { version = "0.2.1", path = "error" } "handler",
nydus-rafs = { version = "0.1.0", path = "rafs", features = ["backend-registry", "backend-oss"] } ] }
nydus-storage = { version = "0.5.0", path = "storage" } nydus-builder = { version = "0.2.0", path = "builder" }
nydus-utils = { version = "0.3.0", path = "utils" } nydus-rafs = { version = "0.4.0", path = "rafs" }
nydus-blobfs = { version = "0.1.0", path = "blobfs", features = ["virtiofs"], optional = true } nydus-service = { version = "0.4.0", path = "service", features = [
"block-device",
] }
nydus-storage = { version = "0.7.0", path = "storage", features = [
"prefetch-rate-limit",
] }
nydus-utils = { version = "0.5.0", path = "utils" }
vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
vhost-user-backend = { version = "0.15.0", optional = true }
virtio-bindings = { version = "0.1", features = [
"virtio-v5_0_0",
], optional = true }
virtio-queue = { version = "0.12.0", optional = true }
vm-memory = { version = "0.14.1", features = ["backend-mmap","backend-atomic"], optional = true }
vmm-sys-util = { version = "0.12.1", optional = true }
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dev-dependencies] [dev-dependencies]
sendfd = "0.3.3" xattr = "1.0.1"
env_logger = "0.8.2" vmm-sys-util = "0.12.1"
rand = "0.8.5"
[features] [features]
default = ["fuse-backend-rs/fusedev"] default = [
virtiofs = ["fuse-backend-rs/vhost-user-fs", "vm-memory", "vhost", "vhost-user-backend", "virtio-queue", "virtio-bindings"] "fuse-backend-rs/fusedev",
"backend-registry",
"backend-oss",
"backend-s3",
"backend-http-proxy",
"backend-localdisk",
"dedup",
]
virtiofs = [
"nydus-service/virtiofs",
"vhost",
"vhost-user-backend",
"virtio-bindings",
"virtio-queue",
"vm-memory",
"vmm-sys-util",
]
block-nbd = ["nydus-service/block-nbd"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = [
"nydus-storage/backend-localdisk",
"nydus-storage/backend-localdisk-gpt",
]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]
dedup = ["nydus-storage/dedup"]
[workspace] [workspace]
members = ["api", "app", "error", "rafs", "storage", "utils", "blobfs"] members = [
"api",
"builder",
"clib",
"rafs",
"storage",
"service",
"upgrade",
"utils",
]
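The reworked [features] table above gates each storage backend behind its own feature, so a leaner daemon can be built by trimming the defaults. A sketch only; the project's own builds go through make release / make static-release rather than invoking cargo directly:
# build only the FUSE daemon with the registry backend enabled
cargo build --release --no-default-features \
  --features "fuse-backend-rs/fusedev,backend-registry"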

Cross.toml Normal file

@ -0,0 +1,2 @@
[build]
pre-build = ["apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y cmake"]
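This pre-build hook runs inside the cross build container before compilation, which is how the static release targets get cmake without maintaining a custom builder image. A sketch of how it is picked up, assuming cross 0.2.5 as pinned in the workflows above:
cargo install --locked --version 0.2.5 cross
# Cross.toml in the repo root is read automatically; the pre-build hook installs cmake first
cross build --release --target x86_64-unknown-linux-musl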

MAINTAINERS.md Normal file

@ -0,0 +1,15 @@
# Maintainers
<!-- markdownlint-disable -->
| GitHub ID | Name | Email | Company |
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
<!-- markdownlint-restore -->

Makefile

@@ -1,4 +1,4 @@
-all: build
+all: release
all-build: build contrib-build
@@ -15,9 +15,10 @@ INSTALL_DIR_PREFIX ?= "/usr/local/bin"
DOCKER ?= "true"
CARGO ?= $(shell which cargo)
+RUSTUP ?= $(shell which rustup)
CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry
SUDO = $(shell which sudo)
CARGO_COMMON ?=
EXCLUDE_PACKAGES =
UNAME_M := $(shell uname -m)
@@ -43,8 +44,6 @@ endif
endif
RUST_TARGET_STATIC ?= $(STATIC_TARGET)
-CTR-REMOTE_PATH = contrib/ctr-remote
-DOCKER-GRAPHDRIVER_PATH = contrib/docker-nydus-graphdriver
NYDUSIFY_PATH = contrib/nydusify
NYDUS-OVERLAYFS_PATH = contrib/nydus-overlayfs
@@ -52,12 +51,6 @@ current_dir := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
env_go_path := $(shell go env GOPATH 2> /dev/null)
go_path := $(if $(env_go_path),$(env_go_path),"$(HOME)/go")
-# Set the env DIND_CACHE_DIR to specify a cache directory for
-# docker-in-docker container, used to cache data for docker pull,
-# then mitigate the impact of docker hub rate limit, for example:
-# env DIND_CACHE_DIR=/path/to/host/var-lib-docker make docker-nydusify-smoke
-dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,)
# Functions
# Func: build golang target in docker
@@ -67,13 +60,13 @@ dind_cache_mount := $(if $(DIND_CACHE_DIR),-v $(DIND_CACHE_DIR):/var/lib/docker,
define build_golang
echo "Building target $@ by invoking: $(2)"
if [ $(DOCKER) = "true" ]; then \
-docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.18 $(2) ;\
+docker run --rm -v ${go_path}:/go -v ${current_dir}:/nydus-rs --workdir /nydus-rs/$(1) golang:1.21 $(2) ;\
else \
$(2) -C $(1); \
fi
endef
-.PHONY: .release_version .format .musl_target \
+.PHONY: .release_version .format .musl_target .clean_libz_sys \
all all-build all-release all-static-release build release static-release
.release_version:
@@ -85,15 +78,20 @@ endef
.musl_target:
$(eval CARGO_BUILD_FLAGS += --target ${RUST_TARGET_STATIC})
+# Workaround to clean up stale cache for libz-sys
+.clean_libz_sys:
+@${CARGO} clean --target ${RUST_TARGET_STATIC} -p libz-sys
+@${CARGO} clean --target ${RUST_TARGET_STATIC} --release -p libz-sys
# Targets that are exposed to developers and users.
build: .format
${CARGO} build $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# Cargo will skip checking if it is already checked
-${CARGO} clippy $(CARGO_COMMON) --workspace $(EXCLUDE_PACKAGES) --bins --tests -- -Dwarnings
+${CARGO} clippy --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --bins --tests -- -Dwarnings --allow clippy::unnecessary_cast --allow clippy::needless_borrow
release: .format .release_version build
-static-release: .musl_target .format .release_version build
+static-release: .clean_libz_sys .musl_target .format .release_version build
clean:
${CARGO} clean
@@ -104,59 +102,57 @@ install: release
@sudo install -m 755 target/release/nydus-image $(INSTALL_DIR_PREFIX)/nydus-image
@sudo install -m 755 target/release/nydusctl $(INSTALL_DIR_PREFIX)/nydusctl
-ut:
-TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) -- --skip integration --nocapture --test-threads=8
-smoke: ut
-$(SUDO) TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) $(CARGO) test --test '*' $(CARGO_COMMON) -- --nocapture --test-threads=8
+# unit test
+ut: .release_version
+TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} test --no-fail-fast --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
+# you need install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
+ut-nextest: .release_version
+TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
+# install miri first from https://github.com/rust-lang/miri/
+miri-ut-nextest: .release_version
+MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
-docker-nydus-smoke:
-docker build -t nydus-smoke --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/nydus-smoke
-docker run --rm --privileged ${CARGO_BUILD_GEARS} \
-    -e TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) \
-    -v ~/.cargo:/root/.cargo \
-    -v $(TEST_WORKDIR_PREFIX) \
-    -v ${current_dir}:/nydus-rs \
-    nydus-smoke
-# TODO: Nydusify smoke has to be time consuming for a while since it relies on musl nydusd and nydus-image.
-# So musl compilation must be involved.
-# And docker-in-docker deployment involves image building?
-docker-nydusify-smoke: docker-static
-$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
-docker build -t nydusify-smoke misc/nydusify-smoke
-docker run --rm --privileged \
-    -e BACKEND_TYPE=$(BACKEND_TYPE) \
-    -e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-    -v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestSmoke
-docker-nydusify-image-test: docker-static
-$(call build_golang,$(NYDUSIFY_PATH),make build-smoke)
-docker build -t nydusify-smoke misc/nydusify-smoke
-docker run --rm --privileged \
-    -e BACKEND_TYPE=$(BACKEND_TYPE) \
-    -e BACKEND_CONFIG=$(BACKEND_CONFIG) \
-    -v $(current_dir):/nydus-rs $(dind_cache_mount) nydusify-smoke TestDockerHubImage
-# Run integration smoke test in docker-in-docker container. It requires some special settings,
-# refer to `misc/example/README.md` for details.
-docker-smoke: docker-nydus-smoke docker-nydusify-smoke
+# install test dependencies
+pre-coverage:
+${CARGO} +stable install cargo-llvm-cov --locked
+${RUSTUP} component add llvm-tools-preview
+# print unit test coverage to console
+coverage: pre-coverage
+TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
+# write unit teset coverage to codecov.json, used for Github CI
+coverage-codecov:
+TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
+smoke-only:
+make -C smoke test
+smoke-performance:
+make -C smoke test-performance
+smoke-benchmark:
+make -C smoke test-benchmark
+smoke-takeover:
+make -C smoke test-takeover
+smoke: release smoke-only
-contrib-build: nydusify ctr-remote nydus-overlayfs docker-nydus-graphdriver
-contrib-release: nydusify-release ctr-remote-release \
-    nydus-overlayfs-release docker-nydus-graphdriver-release
-contrib-test: nydusify-test ctr-remote-test \
-    nydus-overlayfs-test docker-nydus-graphdriver-test
-contrib-clean: nydusify-clean ctr-remote-clean \
-    nydus-overlayfs-clean docker-nydus-graphdriver-clean
+contrib-build: nydusify nydus-overlayfs
+contrib-release: nydusify-release nydus-overlayfs-release
+contrib-test: nydusify-test nydus-overlayfs-test
+contrib-lint: nydusify-lint nydus-overlayfs-lint
+contrib-clean: nydusify-clean nydus-overlayfs-clean
contrib-install:
@sudo mkdir -m 755 -p $(INSTALL_DIR_PREFIX)
-@sudo install -m 755 contrib/ctr-remote/bin/ctr-remote $(INSTALL_DIR_PREFIX)/ctr-remote
-@sudo install -m 755 contrib/docker-nydus-graphdriver/bin/nydus-graphdriver $(INSTALL_DIR_PREFIX)/nydus-overlayfs
@sudo install -m 755 contrib/nydus-overlayfs/bin/nydus-overlayfs $(INSTALL_DIR_PREFIX)/nydus-overlayfs
@sudo install -m 755 contrib/nydusify/cmd/nydusify $(INSTALL_DIR_PREFIX)/nydusify
@@ -172,17 +168,8 @@ nydusify-test:
nydusify-clean:
$(call build_golang,${NYDUSIFY_PATH},make clean)
-ctr-remote:
-$(call build_golang,${CTR-REMOTE_PATH},make)
-ctr-remote-release:
-$(call build_golang,${CTR-REMOTE_PATH},make release)
-ctr-remote-test:
-$(call build_golang,${CTR-REMOTE_PATH},make test)
-ctr-remote-clean:
-$(call build_golang,${CTR-REMOTE_PATH},make clean)
+nydusify-lint:
+$(call build_golang,${NYDUSIFY_PATH},make lint)
nydus-overlayfs:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make)
@@ -196,29 +183,9 @@ nydus-overlayfs-test:
nydus-overlayfs-clean:
$(call build_golang,${NYDUS-OVERLAYFS_PATH},make clean)
-docker-nydus-graphdriver:
-$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make)
-docker-nydus-graphdriver-release:
-$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make release)
-docker-nydus-graphdriver-test:
-$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make test)
-docker-nydus-graphdriver-clean:
-$(call build_golang,${DOCKER-GRAPHDRIVER_PATH},make clean)
+nydus-overlayfs-lint:
+$(call build_golang,${NYDUS-OVERLAYFS_PATH},make lint)
docker-static:
docker build -t nydus-rs-static --build-arg RUST_TARGET=${RUST_TARGET_STATIC} misc/musl-static
docker run --rm ${CARGO_BUILD_GEARS} -e RUST_TARGET=${RUST_TARGET_STATIC} --workdir /nydus-rs -v ${current_dir}:/nydus-rs nydus-rs-static
-docker-example: all-static-release
-cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydusd misc/example
-cp ${current_dir}/target/${RUST_TARGET_STATIC}/release/nydus-image misc/example
-cp contrib/nydusify/cmd/nydusify misc/example
-docker build -t nydus-rs-example misc/example
-@cid=$(shell docker run --rm -t -d --privileged $(dind_cache_mount) nydus-rs-example)
-@docker exec $$cid /run.sh
-@EXIT_CODE=$$?
-@docker rm -f $$cid
-@exit $$EXIT_CODE
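For orientation, the most commonly used targets from the Makefile above can be driven as follows (a sketch only; as the comments in the diff note, `cargo-nextest` and, for the miri target, a nightly toolchain must be installed separately):

```shell
make build        # debug build
make release      # optimized build; also the default target (all: release)
make ut-nextest   # unit tests via cargo nextest
make coverage     # unit-test coverage on the console (pre-coverage installs cargo-llvm-cov)
make smoke        # release build plus the integration suite under smoke/ (make -C smoke test)
```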

README.md

@@ -1,70 +1,82 @@
+[**[⬇️ Download]**](https://github.com/dragonflyoss/nydus/releases)
+[**[📖 Website]**](https://nydus.dev/)
+[**[☸ Quick Start (Kubernetes)**]](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md)
+[**[🤓 Quick Start (nerdctl)**]](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md)
+[**[❓ FAQs & Troubleshooting]**](https://github.com/dragonflyoss/nydus/wiki/FAQ)
# Nydus: Dragonfly Container Image Service
<p><img src="misc/logo.svg" width="170"></p>
-![CI](https://github.com/dragonflyoss/image-service/actions/workflows/ci.yml/badge.svg?event=schedule)
-![Image Conversion](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml/badge.svg?event=schedule)
-![Release Test Daily](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml/badge.svg?event=schedule)
+[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/nydus?style=flat)](https://github.com/dragonflyoss/nydus/releases)
+[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)
+[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
+[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus)
+[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
+[![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule)
+[![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule)
+[![Release Test Daily](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/release.yml?query=event%3Aschedule)
+[![Benchmark](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/benchmark.yml?query=event%3Aschedule)
+[![Coverage](https://codecov.io/gh/dragonflyoss/nydus/branch/master/graph/badge.svg)](https://codecov.io/gh/dragonflyoss/nydus)
-The nydus project implements a content-addressable filesystem on top of a RAFS format that improves the current OCI image specification, in terms of container launching speed, image space, and network bandwidth efficiency, as well as data integrity.
-The following benchmarking result shows the performance improvement compared with the OCI image for the container cold startup elapsed time on containerd. As the OCI image size increases, the container startup time of using Nydus image remains very short.
+## Introduction
+Nydus implements a content-addressable file system on the RAFS format, which enhances the current OCI image specification by improving container launch speed, image space and network bandwidth efficiency, and data integrity.
+The following Benchmarking results demonstrate that Nydus images significantly outperform OCI images in terms of container cold startup elapsed time on Containerd, particularly as the OCI image size increases.
![Container Cold Startup](./misc/perf.jpg)
-Nydus' key features include:
-- Container images can be downloaded on demand in chunks for lazy pulling to boost container startup
-- Chunk-based content-addressable data de-duplication to minimize storage, transmission and memory footprints
-- Merged filesystem tree in order to remove all intermediate layers as an option
-- in-kernel EROFS or FUSE filesystem together with overlayfs to provide full POSIX compatibility
-- E2E image data integrity check. So security issues like "Supply Chain Attach" can be avoided and detected at runtime
-- Compatible with the OCI artifacts spec and distribution spec, so nydus image can be stored in a regular container registry
-- Native [eStargz](https://github.com/containerd/stargz-snapshotter) image support with remote snapshotter plugin `nydus-snapshotter` for containerd runtime.
-- Various container image storage backends are supported. For example, Registry, NAS, Aliyun/OSS.
-- Integrated with CNCF incubating project Dragonfly to distribute container images in P2P fashion and mitigate the pressure on container registries
-- Capable to prefetch data block before user IO hits the block thus to reduce read latency
-- Record files access pattern during runtime gathering access trace/log, by which user abnormal behaviors are easily caught
-- Access trace based prefetch table
-- User I/O amplification to reduce the amount of small requests to storage backend.
-Currently Nydus includes following tools:
-| Tool | Description |
-| ---- | ----------- |
-| [nydusd](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusd.md) | Nydus user-space daemon, it processes all fscache/FUSE messages from the kernel and parses Nydus images to fullfil those requests |
-| [nydus-image](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Convert a single layer of OCI format container image into a nydus format container image generating meta part file and data part file respectively |
-| [nydusify](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusify.md) | It pulls OCI image down and unpack it, invokes `nydus-image create` to convert image and then pushes the converted image back to registry and data storage |
-| [nydusctl](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`), query daemon's working status/metrics and configure it |
-| [ctr-remote](https://github.com/dragonflyoss/image-service/tree/master/contrib/ctr-remote) | An enhanced `containerd` CLI tool enable nydus support with `containerd` ctr |
-| [nydus-docker-graphdriver](https://github.com/dragonflyoss/image-service/tree/master/contrib/docker-nydus-graphdriver) | Works as a `docker` remote graph driver to control how images and containers are stored and managed |
-| [nydus-overlayfs](https://github.com/dragonflyoss/image-service/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount with tweaking mount options a bit. So nydus prerequisites can be passed to vm-based runtime |
-| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve local directory as a blob backend for nydusd |
-Currently Nydus is supporting the following platforms in container ecosystem:
+## Principles
+***Provide Fast, Secure And Easy Access to Data Distribution***
+- **Performance**: Second-level container startup speed, millisecond-level function computation code package loading speed.
+- **Low Cost**: Written in memory-safed language `Rust`, numerous optimizations help improve memory, CPU, and network consumption.
+- **Flexible**: Supports container runtimes such as [runC](https://github.com/opencontainers/runc) and [Kata](https://github.com/kata-containers), and provides [Confidential Containers](https://github.com/confidential-containers) and vulnerability scanning capabilities
+- **Security**: End to end data integrity check, Supply Chain Attack can be detected and avoided at runtime.
+## Key features
+- **On-demand Load**: Container images/packages are downloaded on-demand in chunk unit to boost startup.
+- **Chunk Deduplication**: Chunk level data de-duplication cross-layer or cross-image to reduce storage, transport, and memory cost.
+- **Compatible with Ecosystem**: Storage backend support with Registry, OSS, NAS, Shared Disk, and [P2P service](https://d7y.io/). Compatible with the [OCI images](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-zran.md), and provide native [eStargz images](https://github.com/containerd/stargz-snapshotter) support.
+- **Data Analyzability**: Record accesses, data layout optimization, prefetch, IO amplification, abnormal behavior detection.
+- **POSIX Compatibility**: In-Kernel EROFS or FUSE filesystems together with overlayfs provide full POSIX compatibility
+- **I/O optimization**: Use merged filesystem tree, data prefetching and User I/O amplification to reduce read latency and improve user I/O performance.
+## Ecosystem
+### Nydus tools
+| Tool | Description |
+| ---- | ----------- |
+| [nydusd](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusd.md) | Nydus user-space daemon, it processes all fscache/FUSE messages from the kernel and parses Nydus images to fullfil those requests |
+| [nydus-image](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Convert a single layer of OCI format container image into a nydus format container image generating meta part file and data part file respectively |
+| [nydusify](https://github.com/dragonflyoss/nydus/blob/master/docs/nydusify.md) | It pulls OCI image down and unpack it, invokes `nydus-image create` to convert image and then pushes the converted image back to registry and data storage |
+| [nydusctl](https://github.com/dragonflyoss/nydus/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`), query daemon's working status/metrics and configure it |
+| [nydus-docker-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Works as a `docker` remote graph driver to control how images and containers are stored and managed |
+| [nydus-overlayfs](https://github.com/dragonflyoss/nydus/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount with tweaking mount options a bit. So nydus prerequisites can be passed to vm-based runtime |
+| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve local directory as a blob backend for nydusd |
+### Supported platforms
| Type | Platform | Description | Status |
| ---- | -------- | ----------- | ------ |
-| Storage | Registry/OSS/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, Github GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage service | ✅ |
+| Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, Github GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage service | ✅ |
| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support acceleration image conversion based on kinds of accelerator like Nydus and eStargz etc | ✅ |
-| Distribution | [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
+| Distribution | [Dragonfly](https://github.com/dragonflyoss/dragonfly) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
-| Build | [Buildkit](https://github.com/moby/buildkit/pull/2581) | Provides the ability to build and export Nydus images directly from Dockerfile | 🚧 |
+| Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from Dockerfile | |
-| Runtime | Kubernetes | Run Nydus image using CRI interface | ✅ |
-| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Run Nydus image in containerd with nydus-snapshotter | ✅ |
-| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus image with CRI-O or Podman | 🚧 |
-| Runtime | [Docker](https://github.com/dragonflyoss/image-service/tree/master/contrib/docker-nydus-graphdriver) | Run Nydus image in Docker container with graphdriver plugin | ✅ |
-| Runtime | [Nerdctl](https://github.com/containerd/nerdctl) | Run Nydus image with `nerdctl --snapshotter nydus run ...` | ✅ |
+| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run (requires nydus snapshotter) Nydus image | ✅ |
+| Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
+| Runtime | [Kubernetes](https://github.com/containerd/nydus-snapshotter/blob/main/docs/run_nydus_in_kubernetes.md) | Run Nydus image using CRI interface | |
+| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Nydus Snapshotter, a containerd remote plugin to run Nydus image | ✅ |
+| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus image with CRI-O or Podman | 🚧 |
| Runtime | [KataContainers](https://github.com/kata-containers/kata-containers/blob/main/docs/design/kata-nydus-design.md) | Run Nydus image in KataContainers as a native solution | ✅ |
| Runtime | [EROFS](https://www.kernel.org/doc/html/latest/filesystems/erofs.html) | Run Nydus image directly in-kernel EROFS for even greater performance improvement | ✅ |
-To try nydus image service:
-1. Convert an original OCI image to nydus image and store it somewhere like Docker/Registry, NAS or Aliyun/OSS. This can be directly done by `nydusify`. Normal users don't have to get involved with `nydus-image`.
-2. Get `nydus-snapshotter`(`containerd-nydus-grpc`) installed locally and configured properly. Or install `nydus-docker-graphdriver` plugin.
-3. Operate container in legacy approaches. For example, `docker`, `nerdctl`, `crictl` and `ctr`.
-## Build Binary
+## Build
+### Build Binary
```shell
# build debug binary
make
@@ -74,30 +86,36 @@ make release
make docker-static
```
-## Quick Start with Kubernetes and Containerd
+### Build Nydus Image
+Convert OCIv1 image to Nydus image: [Nydusify](./docs/nydusify.md), [Acceld](https://github.com/goharbor/acceleration-service) or [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md#build-nydus-image-using-nerdctl-image-convert).
+Build Nydus image from Dockerfile directly: [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md).
+Build Nydus layer from various sources: [Nydus Image Builder](./docs/nydus-image.md).
+#### Image prefetch optimization
+To further reduce container startup time, a nydus image with a prefetch list can be built using the NRI plugin (containerd >=1.7): [Container Image Optimizer](https://github.com/containerd/nydus-snapshotter/blob/main/docs/optimize_nydus_image.md)
+## Run
+### Quick Start
For more details on how to lazily start a container with `nydus-snapshotter` and nydus image on Kubernetes nodes or locally use `nerdctl` rather than CRI, please refer to [Nydus Setup](./docs/containerd-env-setup.md)
-## Build Nydus Image
-Build Nydus image from directory source: [Nydus Image Builder](./docs/nydus-image.md).
-Convert OCI image to Nydus image: [Nydusify](./docs/nydusify.md).
-## Nydus Snapshotter
+### Run Nydus Snapshotter
Nydus-snapshotter is a non-core sub-project of containerd.
Check out its code and tutorial from [Nydus-snapshotter repository](https://github.com/containerd/nydus-snapshotter).
It works as a `containerd` remote snapshotter to help setup container rootfs with nydus images, which handles nydus image format when necessary. When running without nydus images, it is identical to the containerd's builtin overlayfs snapshotter.
-## Run Nydusd Daemon
+### Run Nydusd Daemon
-Normally, users do not need to start `nydusd` by hand. It is started by `nydus-snapshotter` or `nydus-docker-graphdriver` when a container rootfs is prepared.
+Normally, users do not need to start `nydusd` by hand. It is started by `nydus-snapshotter` when a container rootfs is prepared.
Run Nydusd Daemon to serve Nydus image: [Nydusd](./docs/nydusd.md).
-## Run Nydus with in-kernel EROFS filesystem
+### Run Nydus with in-kernel EROFS filesystem
In-kernel EROFS has been fully compatible with RAFS v6 image format since Linux 5.16. In other words, uncompressed RAFS v6 images can be mounted over block devices since then.
@@ -105,56 +123,52 @@ Since [Linux 5.19](https://lwn.net/Articles/896140), EROFS has added a new file-
Guide to running Nydus with fscache: [Nydus-fscache](./docs/nydus-fscache.md)
-## Build Images via Harbor
-Nydus cooperates with Harbor community to develop [acceleration-service](https://github.com/goharbor/acceleration-service) which provides a general service for Harbor to support image acceleration based on kinds of accelerators like Nydus, eStargz, etc.
-## Docker graph driver support
-Docker graph driver is also accompanied, it helps to start container from nydus image. For more particular instructions, please refer to
-- [Nydus Graph Driver](./contrib/docker-nydus-graphdriver/README.md)
-## Learn Concepts and Commands
-Browse the documentation to learn more. Here are some topics you may be interested in:
-- [A Nydus Tutorial for Beginners](./docs/tutorial.md)
-- [Nydus Design Doc](./docs/nydus-design.md)
-- Our talk on Open Infra Summit 2020: [Toward Next Generation Container Image](https://drive.google.com/file/d/1LRfLUkNxShxxWU7SKjc_50U0N9ZnGIdV/view)
-- [EROFS, What Are We Doing Now For Containers?](https://static.sched.com/hosted_files/kccncosschn21/fd/EROFS_What_Are_We_Doing_Now_For_Containers.pdf)
-- [The Evolution of the Nydus Image Acceleration](https://d7y.io/blog/2022/06/06/evolution-of-nydus/) \([Video](https://youtu.be/yr6CB1JN1xg)\)
-## Run with macos
-- Nydus can also run with macfuse(a.k.a osxfuse).For more details please read [nydus with macos](./docs/nydus_with_macos.md).
-## Run eStargz image (with lazy pulling)
+### Run Nydus with Dragonfly P2P system
+Nydus is deeply integrated with [Dragonfly](https://d7y.io/) P2P system, which can greatly reduce the network latency and the single point pressure of the registry server. Benchmarking results in the production environment demonstrate that using Dragonfly can reduce network latency by more than 80%, to understand the performance results and integration steps, please refer to the [nydus integration](https://d7y.io/docs/setup/integration/nydus).
+If you want to deploy Dragonfly and Nydus at the same time through Helm, please refer to the **[Quick Start](https://github.com/dragonflyoss/helm-charts/blob/main/INSTALL.md)**.
+### Run OCI image directly with Nydus
+Nydus is able to generate a tiny artifact called a `nydus zran` from an existing OCI image in the short time. This artifact can be used to accelerate the container boot time without the need for a full image conversion. For more information, please see the [documentation](./docs/nydus-zran.md).
+### Run with Docker(Moby)
+Nydus provides a variety of methods to support running on docker(Moby), please refer to [Nydus Setup for Docker(Moby) Environment](./docs/docker-env-setup.md)
+### Run with macOS
+Nydus can also run with macfuse(a.k.a osxfuse). For more details please read [nydus with macOS](./docs/nydus_with_macos.md).
+### Run eStargz image (with lazy pulling)
The containerd remote snapshotter plugin [nydus-snapshotter](https://github.com/containerd/nydus-snapshotter) can be used to run nydus images, or to run [eStargz](https://github.com/containerd/stargz-snapshotter) images directly by appending `--enable-stargz` command line option.
In the future, `zstd::chunked` can work in this way as well.
+### Run Nydus Service
+Using the key features of nydus as native in your project without preparing and invoking `nydusd` deliberately, [nydus-service](./service/README.md) helps to reuse the core services of nyuds.
+## Documentation
+Please visit [**Wiki**](https://github.com/dragonflyoss/nydus/wiki), or [**docs**](./docs)
+There is also a very nice [Devin](https://devin.ai/) generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus).
## Community
Nydus aims to form a **vendor-neutral opensource** image distribution solution to all communities.
Questions, bug reports, technical discussion, feature requests and contribution are always welcomed!
We're very pleased to hear your use cases any time.
-Feel free to reach/join us via Slack and/or Dingtalk
-- Slack
-Join our Slack [workspace](https://join.slack.com/t/nydusimageservice/shared_invite/zt-pz4qvl4y-WIh4itPNILGhPS8JqdFm_w)
-- Dingtalk
-Join nydus-devel group by clicking [URL](https://h5.dingtalk.com/circle/healthCheckin.html?dtaction=os&corpId=dingbbd4fb77fb7c4f7f85db999db6125bc4&1fd25e0=3e15bd0&cbdbhh=qwertyuiop) on your phone.
-You can also search our talking group by number _34971767_ and QR code
+Feel free to reach us via Slack or Dingtalk.
+- **Slack:** [Nydus Workspace](https://join.slack.com/t/nydusimageservice/shared_invite/zt-pz4qvl4y-WIh4itPNILGhPS8JqdFm_w)
+- **Twitter:** [@dragonfly_oss](https://twitter.com/dragonfly_oss)
+- **Dingtalk:** [34971767](https://qr.dingtalk.com/action/joingroup?code=v1,k1,ioWGzuDZEIO10Bf+/ohz4RcQqAkW0MtOwoG1nbbMxQg=&_dt_no_comment=1&origin=11)
<img src="./misc/dingtalk.jpg" width="250" height="300"/>
-Nydus bi-weekly technical community meeting is also regularly available, currrently held on Wednesdays
-at 06:00 UTC (14:00 Beijing, Shanghai) starting from Aug 10, 2022.
-For more details, please see our [HackMD](https://hackmd.io/@Nydus/Bk8u2X0p9) page.
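The "Build Nydus Image" section above points at Nydusify as the simplest conversion path. As a rough sketch of that workflow (image references are placeholders; consult the linked Nydusify docs for the authoritative flags), a typical conversion looks like:

```shell
# pull an OCI image, convert it to Nydus format, and push the result back to a registry
nydusify convert \
  --source registry.example.com/library/ubuntu:latest \
  --target registry.example.com/library/ubuntu:latest-nydus
```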


@@ -1,25 +1,31 @@
[package]
name = "nydus-api"
-version = "0.1.1"
+version = "0.4.0"
description = "APIs for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
-repository = "https://github.com/dragonflyoss/image-service"
+repository = "https://github.com/dragonflyoss/nydus"
-edition = "2018"
+edition = "2021"
[dependencies]
-dbs-uhttp = { version = "0.3.0" }
-http = "0.2.1"
-lazy_static = "1.4.0"
libc = "0.2"
log = "0.4.8"
-mio = { version = "0.8", features = ["os-poll", "os-ext"]}
-serde = { version = "1.0.110", features = ["rc"] }
-serde_derive = "1.0.110"
serde_json = "1.0.53"
-url = "2.1.1"
-vmm-sys-util = "0.10"
-nydus-error = { version = "0.2", path = "../error" }
-nydus-utils = { version = "0.3", path = "../utils" }
+toml = "0.5"
+thiserror = "1.0.30"
+backtrace = { version = "0.3", optional = true }
+dbs-uhttp = { version = "0.3.0", optional = true }
+http = { version = "0.2.1", optional = true }
+lazy_static = { version = "1.4.0", optional = true }
+mio = { version = "0.8", features = ["os-poll", "os-ext"], optional = true }
+serde = { version = "1.0.110", features = ["rc", "serde_derive"] }
+url = { version = "2.1.1", optional = true }
+[dev-dependencies]
+vmm-sys-util = { version = "0.12.1" }
+[features]
+error-backtrace = ["backtrace"]
+handler = ["dbs-uhttp", "http", "lazy_static", "mio", "url"]
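The `handler` and `error-backtrace` features introduced above are optional, so the HTTP server and backtrace support can be switched on per build. A minimal sketch (crate and feature names are taken from this manifest; the invocation itself is only illustrative):

```shell
# build just the nydus-api crate from the workspace root with both optional features enabled
cargo build -p nydus-api --features handler,error-backtrace
```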


@@ -348,10 +348,8 @@ components:
description: usually to be the metadata source
type: string
prefetch_files:
-description: files that need to be prefetched
-type: array
-items:
-type: string
+description: local file path which recorded files/directories to be prefetched and separated by newlines
+type: string
config:
description: inline request, use to configure fs backend.
type: string
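With this schema change the mount request no longer carries an inline array of paths; `prefetch_files` now names a local file whose newline-separated lines are the files/directories to prefetch. A hypothetical example of preparing such a list (the path and entries are made up purely for illustration):

```shell
# one file or directory per line, as the schema above describes
cat > /tmp/nydus-prefetch-list.txt <<EOF
/usr/bin/bash
/etc/nginx
EOF
```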


@@ -44,7 +44,7 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
-/blob_objects:
+/blobs:
summary: Manage cached blob objects
####################################################################
get:
@@ -96,6 +96,21 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
operationId: deleteBlobFile
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/BlobId"
responses:
"204":
description: "Successfully deleted the blob file!"
"500":
description: "Can't delete the blob file!"
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorMsg"
################################################################
components:
schemas:
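The new `deleteBlobFile` operation above is served by nydusd's v2 `/blobs` endpoint, and the corresponding Rust handler later in this diff accepts the id as a `blob_id` query parameter. A rough sketch of invoking it over the daemon's unix API socket (the socket path is an assumption and depends on how nydusd was started, e.g. via its `--apisock` option; `<blob-id>` is a placeholder):

```shell
curl --unix-socket /var/run/nydus-api.sock \
     -X DELETE "http://localhost/api/v2/blobs?blob_id=<blob-id>"
```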

api/src/config.rs (new file)

File diff suppressed because it is too large

api/src/error.rs (new file)

@@ -0,0 +1,252 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fmt::Debug;
/// Display error messages with line number, file path and optional backtrace.
pub fn make_error(
err: std::io::Error,
_raw: impl Debug,
_file: &str,
_line: u32,
) -> std::io::Error {
#[cfg(feature = "error-backtrace")]
{
if let Ok(val) = std::env::var("RUST_BACKTRACE") {
if val.trim() != "0" {
error!("Stack:\n{:?}", backtrace::Backtrace::new());
error!("Error:\n\t{:?}\n\tat {}:{}", _raw, _file, _line);
return err;
}
}
error!(
"Error:\n\t{:?}\n\tat {}:{}\n\tnote: enable `RUST_BACKTRACE=1` env to display a backtrace",
_raw, _file, _line
);
}
err
}
/// Define error macro like `x!()` or `x!(err)`.
/// Note: The `x!()` macro will convert any origin error (Os, Simple, Custom) to Custom error.
macro_rules! define_error_macro {
($fn:ident, $err:expr) => {
#[macro_export]
macro_rules! $fn {
() => {
std::io::Error::new($err.kind(), format!("{}: {}:{}", $err, file!(), line!()))
};
($raw:expr) => {
$crate::error::make_error($err, &$raw, file!(), line!())
};
}
};
}
/// Define error macro for libc error codes
macro_rules! define_libc_error_macro {
($fn:ident, $code:ident) => {
define_error_macro!($fn, std::io::Error::from_raw_os_error(libc::$code));
};
}
// TODO: Add format string support
// Add more libc error macro here if necessary
define_libc_error_macro!(einval, EINVAL);
define_libc_error_macro!(enoent, ENOENT);
define_libc_error_macro!(ebadf, EBADF);
define_libc_error_macro!(eacces, EACCES);
define_libc_error_macro!(enotdir, ENOTDIR);
define_libc_error_macro!(eisdir, EISDIR);
define_libc_error_macro!(ealready, EALREADY);
define_libc_error_macro!(enosys, ENOSYS);
define_libc_error_macro!(epipe, EPIPE);
define_libc_error_macro!(eio, EIO);
/// Return EINVAL error with formatted error message.
#[macro_export]
macro_rules! bail_einval {
($($arg:tt)*) => {{
return Err(einval!(format!($($arg)*)))
}}
}
/// Return EIO error with formatted error message.
#[macro_export]
macro_rules! bail_eio {
($($arg:tt)*) => {{
return Err(eio!(format!($($arg)*)))
}}
}
// Add more custom error macro here if necessary
define_error_macro!(last_error, std::io::Error::last_os_error());
define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
#[cfg(test)]
mod tests {
use std::io::{Error, ErrorKind};
fn check_size(size: usize) -> std::io::Result<()> {
if size > 0x1000 {
return Err(einval!());
}
Ok(())
}
#[test]
fn test_einval() {
assert_eq!(
check_size(0x2000).unwrap_err().kind(),
std::io::Error::from_raw_os_error(libc::EINVAL).kind()
);
}
#[test]
fn test_make_error() {
let original_error = Error::new(ErrorKind::Other, "test error");
let debug_info = "debug information";
let file = "test.rs";
let line = 42;
let result_error = super::make_error(original_error, debug_info, file, line);
assert_eq!(result_error.kind(), ErrorKind::Other);
}
#[test]
fn test_libc_error_macros() {
// Test einval macro
let err = einval!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro
let err = enoent!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test ebadf macro
let err = ebadf!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
// Test eacces macro
let err = eacces!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
// Test enotdir macro
let err = enotdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
// Test eisdir macro
let err = eisdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
// Test ealready macro
let err = ealready!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
// Test enosys macro
let err = enosys!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
// Test epipe macro
let err = epipe!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
// Test eio macro
let err = eio!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_libc_error_macros_with_context() {
let test_msg = "test context";
// Test einval macro with context
let err = einval!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro with context
let err = enoent!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test eio macro with context
let err = eio!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_custom_error_macros() {
// Test last_error macro
let err = last_error!();
// We can't predict the exact error, but we can check it's a valid error
assert!(!err.to_string().is_empty());
// Test eother macro
let err = eother!();
assert_eq!(err.kind(), ErrorKind::Other);
// Test eother macro with context
let err = eother!("custom context");
assert_eq!(err.kind(), ErrorKind::Other);
}
fn test_bail_einval_function() -> std::io::Result<()> {
bail_einval!("test error message");
}
fn test_bail_eio_function() -> std::io::Result<()> {
bail_eio!("test error message");
}
#[test]
fn test_bail_macros() {
// Test bail_einval macro
let result = test_bail_einval_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio macro
let result = test_bail_eio_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
}
#[test]
fn test_bail_macros_with_formatting() {
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
if code == 1 {
bail_einval!("error code: {}", code);
} else if code == 2 {
bail_eio!("I/O error with code: {}", code);
}
Ok(())
}
// Test bail_einval with formatting
let result = test_bail_with_format(1);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio with formatting
let result = test_bail_with_format(2);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test success case
let result = test_bail_with_format(3);
assert!(result.is_ok());
}
}

File diff suppressed because it is too large


@@ -3,12 +3,14 @@
//
// SPDX-License-Identifier: Apache-2.0
-use crate::http::{
-    error_response, extract_query_part, parse_body, success_response, translate_status_code,
-    ApiError, ApiRequest, ApiResponse, ApiResponsePayload, EndpointHandler, HttpError, HttpResult,
-};
use dbs_uhttp::{Method, Request, Response};
+use crate::http::{ApiError, ApiRequest, ApiResponse, ApiResponsePayload, HttpError};
+use crate::http_handler::{
+    error_response, extract_query_part, parse_body, success_response, translate_status_code,
+    EndpointHandler, HttpResult,
+};
// Convert an ApiResponse to a HTTP response.
//
// API server has successfully processed the request, but can't fulfill that. Therefore,


@@ -7,9 +7,10 @@
use dbs_uhttp::{Method, Request, Response};
-use crate::http::{
+use crate::http::{ApiError, ApiRequest, ApiResponse, ApiResponsePayload, HttpError};
+use crate::http_handler::{
    error_response, extract_query_part, parse_body, success_response, translate_status_code,
-    ApiError, ApiRequest, ApiResponse, ApiResponsePayload, EndpointHandler, HttpError, HttpResult,
+    EndpointHandler, HttpResult,
};
/// HTTP URI prefix for API v1.
@@ -139,7 +140,7 @@ impl EndpointHandler for MetricsFsFilesHandler {
(Method::Get, None) => {
    let id = extract_query_part(req, "id");
    let latest_read_files = extract_query_part(req, "latest")
-        .map_or(false, |b| b.parse::<bool>().unwrap_or(false));
+        .is_some_and(|b| b.parse::<bool>().unwrap_or(false));
    let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files));
    Ok(convert_to_response(r, HttpError::FsFilesMetrics))
}


@@ -6,12 +6,15 @@
//! Nydus API v2.
+use crate::BlobCacheEntry;
use dbs_uhttp::{Method, Request, Response};
use crate::http::{
-    error_response, extract_query_part, parse_body, success_response, translate_status_code,
-    ApiError, ApiRequest, ApiResponse, ApiResponsePayload, BlobCacheObjectId, EndpointHandler,
-    HttpError, HttpResult,
+    ApiError, ApiRequest, ApiResponse, ApiResponsePayload, BlobCacheObjectId, HttpError,
+};
+use crate::http_handler::{
+    error_response, extract_query_part, parse_body, success_response, translate_status_code,
+    EndpointHandler, HttpResult,
};
/// HTTP URI prefix for API v2.
@@ -83,7 +86,10 @@ impl EndpointHandler for BlobObjectListHandlerV2 {
        Err(HttpError::BadRequest)
    }
    (Method::Put, Some(body)) => {
-        let conf = parse_body(body)?;
+        let mut conf: Box<BlobCacheEntry> = parse_body(body)?;
+        if !conf.prepare_configuration_info() {
+            return Err(HttpError::BadRequest);
+        }
        let r = kicker(ApiRequest::CreateBlobObject(conf));
        Ok(convert_to_response(r, HttpError::CreateBlobObject))
    }
@@ -94,6 +100,10 @@
        let r = kicker(ApiRequest::DeleteBlobObject(param));
        return Ok(convert_to_response(r, HttpError::DeleteBlobObject));
    }
+    if let Some(blob_id) = extract_query_part(req, "blob_id") {
+        let r = kicker(ApiRequest::DeleteBlobFile(blob_id));
+        return Ok(convert_to_response(r, HttpError::DeleteBlobFile));
+    }
    Err(HttpError::BadRequest)
}
_ => Err(HttpError::BadRequest),

api/src/http_handler.rs (new file)

@@ -0,0 +1,404 @@
use std::collections::HashMap;
use std::io::{Error, ErrorKind, Result};
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;
use std::sync::mpsc::{Receiver, Sender};
use std::sync::Arc;
use std::time::SystemTime;
use std::{fs, thread};
use dbs_uhttp::{Body, HttpServer, MediaType, Request, Response, ServerError, StatusCode, Version};
use http::uri::Uri;
use mio::unix::SourceFd;
use mio::{Events, Interest, Poll, Token, Waker};
use serde::Deserialize;
use url::Url;
use crate::http::{
ApiError, ApiRequest, ApiResponse, DaemonErrorKind, ErrorMessage, HttpError, MetricsError,
MetricsErrorKind,
};
use crate::http_endpoint_common::{
EventsHandler, ExitHandler, MetricsBackendHandler, MetricsBlobcacheHandler, MountHandler,
SendFuseFdHandler, StartHandler, TakeoverFuseFdHandler,
};
use crate::http_endpoint_v1::{
FsBackendInfo, InfoHandler, MetricsFsAccessPatternHandler, MetricsFsFilesHandler,
MetricsFsGlobalHandler, MetricsFsInflightHandler, HTTP_ROOT_V1,
};
use crate::http_endpoint_v2::{BlobObjectListHandlerV2, InfoV2Handler, HTTP_ROOT_V2};
const EXIT_TOKEN: Token = Token(usize::MAX);
const REQUEST_TOKEN: Token = Token(1);
/// Specialized version of [`std::result::Result`] for value returned by [`EndpointHandler`].
pub type HttpResult = std::result::Result<Response, HttpError>;
/// Get query parameter with `key` from the HTTP request.
pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
// Splicing req.uri with "http:" prefix might look weird, but since it depends on
// crate `Url` to generate query_pairs HashMap, which is working on top of Url not Uri.
// Better that we can add query part support to Micro-http in the future. But
// right now, below way makes it easy to obtain query parts from uri.
let http_prefix = format!("http:{}", req.uri().get_abs_path());
let url = Url::parse(&http_prefix)
.inspect_err(|e| {
error!("api: can't parse request {:?}", e);
})
.ok()?;
for (k, v) in url.query_pairs() {
if k == key {
trace!("api: got query param {}={}", k, v);
return Some(v.into_owned());
}
}
None
}
/// Parse HTTP request body.
pub(crate) fn parse_body<'a, F: Deserialize<'a>>(b: &'a Body) -> std::result::Result<F, HttpError> {
serde_json::from_slice::<F>(b.raw()).map_err(HttpError::ParseBody)
}
/// Translate ApiError message to HTTP status code.
pub(crate) fn translate_status_code(e: &ApiError) -> StatusCode {
match e {
ApiError::DaemonAbnormal(kind) | ApiError::MountFilesystem(kind) => match kind {
DaemonErrorKind::NotReady => StatusCode::ServiceUnavailable,
DaemonErrorKind::Unsupported => StatusCode::NotImplemented,
DaemonErrorKind::UnexpectedEvent(_) => StatusCode::BadRequest,
_ => StatusCode::InternalServerError,
},
ApiError::Metrics(MetricsErrorKind::Stats(MetricsError::NoCounter)) => StatusCode::NotFound,
_ => StatusCode::InternalServerError,
}
}
/// Generate a successful HTTP response message.
pub(crate) fn success_response(body: Option<String>) -> Response {
if let Some(body) = body {
let mut r = Response::new(Version::Http11, StatusCode::OK);
r.set_body(Body::new(body));
r
} else {
Response::new(Version::Http11, StatusCode::NoContent)
}
}
/// Generate a HTTP error response message with status code and error message.
pub(crate) fn error_response(error: HttpError, status: StatusCode) -> Response {
let mut response = Response::new(Version::Http11, status);
let err_msg = ErrorMessage {
code: "UNDEFINED".to_string(),
message: format!("{:?}", error),
};
response.set_body(Body::new(err_msg));
response
}
/// Trait for HTTP endpoints to handle HTTP requests.
pub trait EndpointHandler: Sync + Send {
/// Handles an HTTP request.
///
/// The main responsibilities of the handlers includes:
/// - parse and validate incoming request message
/// - send the request to subscriber
/// - wait response from the subscriber
/// - generate HTTP result
fn handle_request(
&self,
req: &Request,
kicker: &dyn Fn(ApiRequest) -> ApiResponse,
) -> HttpResult;
}
/// Struct to route HTTP requests to corresponding registered endpoint handlers.
pub struct HttpRoutes {
/// routes is a hash table mapping endpoint URIs to their endpoint handlers.
pub routes: HashMap<String, Box<dyn EndpointHandler + Sync + Send>>,
}
macro_rules! endpoint_v1 {
($path:expr) => {
format!("{}{}", HTTP_ROOT_V1, $path)
};
}
macro_rules! endpoint_v2 {
($path:expr) => {
format!("{}{}", HTTP_ROOT_V2, $path)
};
}
lazy_static! {
/// HTTP_ROUTES contain all the nydusd HTTP routes.
pub static ref HTTP_ROUTES: HttpRoutes = {
let mut r = HttpRoutes {
routes: HashMap::new(),
};
// Common
r.routes.insert(endpoint_v1!("/daemon/events"), Box::new(EventsHandler{}));
r.routes.insert(endpoint_v1!("/daemon/exit"), Box::new(ExitHandler{}));
r.routes.insert(endpoint_v1!("/daemon/start"), Box::new(StartHandler{}));
r.routes.insert(endpoint_v1!("/daemon/fuse/sendfd"), Box::new(SendFuseFdHandler{}));
r.routes.insert(endpoint_v1!("/daemon/fuse/takeover"), Box::new(TakeoverFuseFdHandler{}));
r.routes.insert(endpoint_v1!("/mount"), Box::new(MountHandler{}));
r.routes.insert(endpoint_v1!("/metrics/backend"), Box::new(MetricsBackendHandler{}));
r.routes.insert(endpoint_v1!("/metrics/blobcache"), Box::new(MetricsBlobcacheHandler{}));
// Nydus API, v1
r.routes.insert(endpoint_v1!("/daemon"), Box::new(InfoHandler{}));
r.routes.insert(endpoint_v1!("/daemon/backend"), Box::new(FsBackendInfo{}));
r.routes.insert(endpoint_v1!("/metrics"), Box::new(MetricsFsGlobalHandler{}));
r.routes.insert(endpoint_v1!("/metrics/files"), Box::new(MetricsFsFilesHandler{}));
r.routes.insert(endpoint_v1!("/metrics/inflight"), Box::new(MetricsFsInflightHandler{}));
r.routes.insert(endpoint_v1!("/metrics/pattern"), Box::new(MetricsFsAccessPatternHandler{}));
// Nydus API, v2
r.routes.insert(endpoint_v2!("/daemon"), Box::new(InfoV2Handler{}));
r.routes.insert(endpoint_v2!("/blobs"), Box::new(BlobObjectListHandlerV2{}));
r
};
}
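As an editorial aside, the route table above is what the daemon ultimately exposes over its unix-domain API socket. A minimal sketch of querying one of the v1 routes with curl (the socket path is an assumption that depends on how nydusd was started; `localhost` is just the placeholder host curl requires):

```shell
# query daemon status via the v1 API registered in HTTP_ROUTES
curl --unix-socket /var/run/nydus-api.sock http://localhost/api/v1/daemon
```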
fn kick_api_server(
to_api: &Sender<Option<ApiRequest>>,
from_api: &Receiver<ApiResponse>,
request: ApiRequest,
) -> ApiResponse {
to_api.send(Some(request)).map_err(ApiError::RequestSend)?;
from_api.recv().map_err(ApiError::ResponseRecv)?
}
// Example:
// <-- GET /
// --> GET / 200 835ms 746b
fn trace_api_begin(request: &dbs_uhttp::Request) {
debug!("<--- {:?} {:?}", request.method(), request.uri());
}
fn trace_api_end(response: &dbs_uhttp::Response, method: dbs_uhttp::Method, recv_time: SystemTime) {
let elapse = SystemTime::now().duration_since(recv_time);
debug!(
"---> {:?} Status Code: {:?}, Elapse: {:?}, Body Size: {:?}",
method,
response.status(),
elapse,
response.content_length()
);
}
fn exit_api_server(to_api: &Sender<Option<ApiRequest>>) {
if to_api.send(None).is_err() {
error!("failed to send stop request api server");
}
}
fn handle_http_request(
request: &Request,
to_api: &Sender<Option<ApiRequest>>,
from_api: &Receiver<ApiResponse>,
) -> Response {
let begin_time = SystemTime::now();
trace_api_begin(request);
// Micro http should ensure that req path is legal.
let uri_parsed = request.uri().get_abs_path().parse::<Uri>();
let mut response = match uri_parsed {
Ok(uri) => match HTTP_ROUTES.routes.get(uri.path()) {
Some(route) => route
.handle_request(request, &|r| kick_api_server(to_api, from_api, r))
.unwrap_or_else(|err| error_response(err, StatusCode::BadRequest)),
None => error_response(HttpError::NoRoute, StatusCode::NotFound),
},
Err(e) => {
error!("Failed parse URI, {}", e);
error_response(HttpError::BadRequest, StatusCode::BadRequest)
}
};
response.set_server("Nydus API");
response.set_content_type(MediaType::ApplicationJson);
trace_api_end(&response, request.method(), begin_time);
response
}
/// Start an HTTP server to serve API requests.
///
/// The HTTP server parses incoming HTTP requests and forwards each one to the nydus API server
/// as a concrete request to operate nydus or fetch its working status.
/// The HTTP server sends requests over the `to_api` channel and waits for responses on the `from_api` channel.
pub fn start_http_thread(
path: &str,
to_api: Sender<Option<ApiRequest>>,
from_api: Receiver<ApiResponse>,
) -> Result<(thread::JoinHandle<Result<()>>, Arc<Waker>)> {
// Try to remove the existing unix domain socket
let _ = fs::remove_file(path);
let socket_path = PathBuf::from(path);
let mut poll = Poll::new()?;
let waker = Arc::new(Waker::new(poll.registry(), EXIT_TOKEN)?);
let waker2 = waker.clone();
let mut server = HttpServer::new(socket_path).map_err(|e| {
if let ServerError::IOError(e) = e {
e
} else {
Error::new(ErrorKind::Other, format!("{:?}", e))
}
})?;
poll.registry().register(
&mut SourceFd(&server.epoll().as_raw_fd()),
REQUEST_TOKEN,
Interest::READABLE,
)?;
let thread = thread::Builder::new()
.name("nydus-http-server".to_string())
.spawn(move || {
// Must start the server successfully or just die by panic
server.start_server().unwrap();
info!("http server started");
let mut events = Events::with_capacity(100);
let mut do_exit = false;
loop {
match poll.poll(&mut events, None) {
Err(e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
Err(e) => {
error!("http server poll events failed, {}", e);
exit_api_server(&to_api);
return Err(e);
}
Ok(_) => {}
}
for event in &events {
match event.token() {
EXIT_TOKEN => do_exit = true,
REQUEST_TOKEN => match server.requests() {
Ok(request_vec) => {
for server_request in request_vec {
let reply = server_request.process(|request| {
handle_http_request(request, &to_api, &from_api)
});
// Ignore error when sending response
server.respond(reply).unwrap_or_else(|e| {
error!("HTTP server error on response: {}", e)
});
}
}
Err(e) => {
error!("HTTP server error on retrieving incoming request: {}", e);
}
},
_ => unreachable!("unknown poll token."),
}
}
if do_exit {
exit_api_server(&to_api);
break;
}
}
info!("http-server thread exits");
// Keep the Waker alive to match the lifetime of the poll loop above
drop(waker2);
Ok(())
})?;
Ok((thread, waker))
}
#[cfg(test)]
mod tests {
use super::*;
use std::sync::mpsc::channel;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_http_api_routes_v1() {
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/events"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/start"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/exit"));
assert!(HTTP_ROUTES
.routes
.contains_key("/api/v1/daemon/fuse/sendfd"));
assert!(HTTP_ROUTES
.routes
.contains_key("/api/v1/daemon/fuse/takeover"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/mount"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/files"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/pattern"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/backend"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/blobcache"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/inflight"));
}
#[test]
fn test_http_api_routes_v2() {
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/daemon"));
assert!(HTTP_ROUTES.routes.contains_key("/api/v2/blobs"));
}
#[test]
fn test_kick_api_server() {
let (to_api, from_route) = channel();
let (to_route, from_api) = channel();
let request = ApiRequest::GetDaemonInfo;
let thread = thread::spawn(move || match kick_api_server(&to_api, &from_api, request) {
Err(reply) => matches!(reply, ApiError::ResponsePayloadType),
Ok(_) => panic!("unexpected reply message"),
});
let req2 = from_route.recv().unwrap();
matches!(req2.as_ref().unwrap(), ApiRequest::GetDaemonInfo);
let reply: ApiResponse = Err(ApiError::ResponsePayloadType);
to_route.send(reply).unwrap();
thread.join().unwrap();
let (to_api, from_route) = channel();
let (to_route, from_api) = channel();
drop(to_route);
let request = ApiRequest::GetDaemonInfo;
assert!(kick_api_server(&to_api, &from_api, request).is_err());
drop(from_route);
let request = ApiRequest::GetDaemonInfo;
assert!(kick_api_server(&to_api, &from_api, request).is_err());
}
#[test]
fn test_extract_query_part() {
let req = Request::try_from(
b"GET http://localhost/api/v1/daemon?arg1=test HTTP/1.0\r\n\r\n",
None,
)
.unwrap();
let arg1 = extract_query_part(&req, "arg1").unwrap();
assert_eq!(arg1, "test");
assert!(extract_query_part(&req, "arg2").is_none());
}
#[test]
fn test_start_http_thread() {
let tmpdir = TempFile::new().unwrap();
let path = tmpdir.as_path().to_str().unwrap();
let (to_api, from_route) = channel();
let (_to_route, from_api) = channel();
let (thread, waker) = start_http_thread(path, to_api, from_api).unwrap();
waker.wake().unwrap();
let msg = from_route.recv().unwrap();
assert!(msg.is_none());
let _ = thread.join().unwrap();
}
}
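For orientation, here is a minimal sketch of how a caller could wire `start_http_thread` to an API-server loop over the two channels described above. The socket path and the blanket `ApiError::ResponsePayloadType` reply are illustrative assumptions, not taken from this code, and the imports assume the crate is built with the `handler` feature so these items are re-exported.

```rust
use std::sync::mpsc::channel;
use std::thread;

// Assumed re-exports from nydus-api when the `handler` feature is enabled.
use nydus_api::{start_http_thread, ApiError, ApiResponse};

fn serve_api_sketch() -> std::io::Result<()> {
    // HTTP thread -> API server: Option<ApiRequest>, where None means "shut down".
    let (to_api, from_route) = channel();
    // API server -> HTTP thread: one ApiResponse per forwarded request.
    let (to_route, from_api) = channel();

    // Illustrative socket path; nydusd takes the real one from its command line.
    let (http_thread, waker) = start_http_thread("/tmp/nydusd-api.sock", to_api, from_api)?;

    // Stand-in API server: answer every request with a fixed error reply.
    let api_thread = thread::spawn(move || {
        while let Ok(Some(_request)) = from_route.recv() {
            let reply: ApiResponse = Err(ApiError::ResponsePayloadType);
            if to_route.send(reply).is_err() {
                break;
            }
        }
        // Receiving None, or a closed channel, ends the loop.
    });

    // Ask the HTTP thread to exit, then join both threads.
    waker.wake().unwrap();
    http_thread.join().unwrap()?;
    api_thread.join().unwrap();
    Ok(())
}
```

The `Option<ApiRequest>` envelope doubles as the shutdown signal, which is why `exit_api_server` above simply sends `None` when the poll loop is asked to exit.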

View File

@ -7,16 +7,41 @@
 //! The `nydus-api` crate defines API and related data structures for Nydus Image Service.
 //! All data structures used by the API are encoded in JSON format.
-#[macro_use]
+#[cfg_attr(feature = "handler", macro_use)]
 extern crate log;
 #[macro_use]
-extern crate serde_derive;
+extern crate serde;
+#[cfg(feature = "handler")]
 #[macro_use]
 extern crate lazy_static;
-#[macro_use]
-extern crate nydus_error;
+pub mod config;
+pub use config::*;
+#[macro_use]
+pub mod error;
 pub mod http;
+pub use self::http::*;
+#[cfg(feature = "handler")]
 pub(crate) mod http_endpoint_common;
+#[cfg(feature = "handler")]
 pub(crate) mod http_endpoint_v1;
+#[cfg(feature = "handler")]
 pub(crate) mod http_endpoint_v2;
+#[cfg(feature = "handler")]
+pub(crate) mod http_handler;
+#[cfg(feature = "handler")]
+pub use http_handler::{
+    extract_query_part, start_http_thread, EndpointHandler, HttpResult, HttpRoutes, HTTP_ROUTES,
+};
+/// Application build and version information.
+#[derive(Serialize, Clone)]
+pub struct BuildTimeInfo {
+    pub package_ver: String,
+    pub git_commit: String,
+    pub build_time: String,
+    pub profile: String,
+    pub rustc: String,
+}

View File

@ -1,14 +0,0 @@
# Changelog
## [Unreleased]
### Added
### Fixed
### Deprecated
## [v0.1.0]
### Added
- Initial release

View File

@ -1 +0,0 @@
* @bergwolf @imeoer @jiangliu

View File

@ -1,24 +0,0 @@
[package]
name = "nydus-app"
version = "0.3.0"
authors = ["The Nydus Developers"]
description = "Application framework for Nydus Image Service"
readme = "README.md"
repository = "https://github.com/dragonflyoss/image-service"
license = "Apache-2.0 OR BSD-3-Clause"
edition = "2018"
build = "build.rs"
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dependencies]
regex = "1.5.5"
flexi_logger = { version = "0.23", features = ["compress"] }
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive"] }
log-panics = { version = "2.1.0", features = ["with-backtrace"] }
nydus-error = { version = "0.2", path = "../error" }

View File

@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -1,57 +0,0 @@
# nydus-app
The `nydus-app` crate is a collection of utilities that help create applications for the [`Nydus Image Service`](https://github.com/dragonflyoss/image-service) project. It provides:
- `struct BuildTimeInfo`: application build and version information.
- `fn dump_program_info()`: dump program build and version information.
- `fn setup_logging()`: setup logging infrastructure for application.
## Support
**Platforms**:
- x86_64
- aarch64
**Operating Systems**:
- Linux
## Usage
Add `nydus-app` as a dependency in `Cargo.toml`
```toml
[dependencies]
nydus-app = "*"
```
Then add `extern crate nydus_app;` to your crate root if needed.
## Examples
- Setup application infrastructure.
```rust
#[macro_use(crate_authors, crate_version)]
extern crate clap;
use clap::App;
use std::io::Result;
use nydus_app::{BuildTimeInfo, setup_logging};
fn main() -> Result<()> {
    let (bti_string, build_info) = BuildTimeInfo::dump(crate_version!());
    let cmd = App::new("")
        .version(bti_string.as_str())
        .author(crate_authors!())
        .get_matches();
    // Parse the log level after the matches are available; fall back to "info".
    let level = cmd.value_of("log-level").unwrap_or("info").parse().unwrap();
    setup_logging(None, level)?;
print!("{}", build_info);
Ok(())
}
```
## License
This code is licensed under [Apache-2.0](LICENSE).

View File

@ -1,33 +0,0 @@
[package]
name = "nydus-blobfs"
version = "0.1.0"
description = "Blob object file system for Nydus Image Service"
authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/image-service"
edition = "2018"
[dependencies]
fuse-backend-rs = { version = "0.9" }
libc = "0.2"
log = "0.4.8"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
vm-memory = { version = "0.9" }
nydus-error = { version = "0.2", path = "../error" }
nydus-rafs = { version = "0.1", path = "../rafs" }
nydus-storage = { version = "0.5", path = "../storage", features = ["backend-localfs"] }
[dev-dependencies]
nydus-app = { version = "0.3", path = "../app" }
[features]
virtiofs = [ "fuse-backend-rs/virtiofs", "nydus-rafs/virtio-fs" ]
backend-oss = ["nydus-rafs/backend-oss"]
backend-registry = ["nydus-rafs/backend-registry"]
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "x86_64-apple-darwin"]

View File

@ -1,506 +0,0 @@
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//! Fuse blob passthrough file system, mirroring an existing FS hierarchy.
//!
//! This file system mirrors the existing file system hierarchy of the system, starting at the
//! root file system. This is implemented by just "passing through" all requests to the
//! corresponding underlying file system.
//!
//! The code is derived from the
//! [CrosVM](https://chromium.googlesource.com/chromiumos/platform/crosvm/) project,
//! with heavy modification/enhancements from Alibaba Cloud OS team.
#[macro_use]
extern crate log;
use fuse_backend_rs::{
api::{filesystem::*, BackendFileSystem, VFS_MAX_INO},
passthrough::Config as PassthroughConfig,
passthrough::PassthroughFs,
};
use nydus_error::{einval, eother};
use nydus_rafs::{
fs::{Rafs, RafsConfig},
RafsIoRead,
};
use serde::Deserialize;
use std::any::Any;
#[cfg(feature = "virtiofs")]
use std::ffi::CStr;
use std::ffi::CString;
use std::fs::create_dir_all;
#[cfg(feature = "virtiofs")]
use std::fs::File;
use std::io;
#[cfg(feature = "virtiofs")]
use std::mem::MaybeUninit;
#[cfg(feature = "virtiofs")]
use std::os::unix::ffi::OsStrExt;
#[cfg(feature = "virtiofs")]
use std::os::unix::io::{AsRawFd, FromRawFd};
use std::path::Path;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use std::thread;
#[cfg(feature = "virtiofs")]
use nydus_storage::device::BlobPrefetchRequest;
use vm_memory::ByteValued;
mod sync_io;
#[cfg(feature = "virtiofs")]
const EMPTY_CSTR: &[u8] = b"\0";
type Inode = u64;
type Handle = u64;
#[repr(C, packed)]
#[derive(Clone, Copy, Debug, Default)]
struct LinuxDirent64 {
d_ino: libc::ino64_t,
d_off: libc::off64_t,
d_reclen: libc::c_ushort,
d_ty: libc::c_uchar,
}
unsafe impl ByteValued for LinuxDirent64 {}
/// Options that configure on-demand blob loading for blobfs.
#[derive(Clone, Default, Deserialize)]
pub struct BlobOndemandConfig {
/// The rafs config used to set up rafs device for the purpose of
/// `on demand read`.
pub rafs_conf: RafsConfig,
/// The path of the bootstrap of a container image (for rafs in
/// kernel).
///
/// The default is an empty string.
#[serde(default)]
pub bootstrap_path: String,
/// The path of blob cache directory.
#[serde(default)]
pub blob_cache_dir: String,
}
impl FromStr for BlobOndemandConfig {
type Err = io::Error;
fn from_str(s: &str) -> io::Result<BlobOndemandConfig> {
serde_json::from_str(s).map_err(|e| einval!(e))
}
}
/// Options that configure the behavior of the blobfs fuse file system.
#[derive(Default, Debug, Clone, PartialEq)]
pub struct Config {
/// Blobfs config is embedded with passthrough config
pub ps_config: PassthroughConfig,
/// This provides on demand config of blob management.
pub blob_ondemand_cfg: String,
}
#[allow(dead_code)]
struct RafsHandle {
rafs: Arc<Mutex<Option<Rafs>>>,
handle: Arc<Mutex<Option<thread::JoinHandle<Option<Rafs>>>>>,
}
#[allow(dead_code)]
struct BootstrapArgs {
rafs_handle: RafsHandle,
blob_cache_dir: String,
}
// Safe to Send/Sync because the underlying data structures are readonly
unsafe impl Sync for BootstrapArgs {}
unsafe impl Send for BootstrapArgs {}
#[cfg(feature = "virtiofs")]
impl BootstrapArgs {
fn get_rafs_handle(&self) -> io::Result<()> {
let mut c = self.rafs_handle.rafs.lock().unwrap();
match (*self.rafs_handle.handle.lock().unwrap()).take() {
Some(handle) => {
let rafs = handle.join().unwrap().ok_or_else(|| {
error!("blobfs: get rafs failed.");
einval!("create rafs failed in thread.")
})?;
debug!("blobfs: async create Rafs finish!");
*c = Some(rafs);
Ok(())
}
None => Err(einval!("create rafs failed in thread.")),
}
}
fn fetch_range_sync(&self, prefetches: &[BlobPrefetchRequest]) -> io::Result<()> {
let c = self.rafs_handle.rafs.lock().unwrap();
match &*c {
Some(rafs) => rafs.fetch_range_synchronous(prefetches),
None => Err(einval!("create rafs failed in thread.")),
}
}
}
/// A file system that simply "passes through" all requests it receives to the underlying file
/// system.
///
/// To keep the implementation simple it serves the contents of its root directory. Users
/// that wish to serve only a specific directory should set up the environment so that that
/// directory ends up as the root of the file system process. One way to accomplish this is via a
/// combination of mount namespaces and the pivot_root system call.
pub struct BlobFs {
pfs: PassthroughFs,
#[allow(dead_code)]
bootstrap_args: BootstrapArgs,
}
impl BlobFs {
fn ensure_path_exist(path: &Path) -> io::Result<()> {
if path.as_os_str().is_empty() {
return Err(einval!("path is empty"));
}
if !path.exists() {
create_dir_all(path).map_err(|e| {
error!(
"create dir error. directory is {:?}. {}:{}",
path,
file!(),
line!()
);
e
})?;
}
Ok(())
}
/// Create a Blob file system instance.
pub fn new(cfg: Config) -> io::Result<BlobFs> {
trace!("BlobFs config is: {:?}", cfg);
let bootstrap_args = Self::load_bootstrap(&cfg)?;
let pfs = PassthroughFs::new(cfg.ps_config)?;
Ok(BlobFs {
pfs,
bootstrap_args,
})
}
fn load_bootstrap(cfg: &Config) -> io::Result<BootstrapArgs> {
let blob_ondemand_conf = BlobOndemandConfig::from_str(&cfg.blob_ondemand_cfg)?;
// check if blob cache dir exists.
let path = Path::new(blob_ondemand_conf.blob_cache_dir.as_str());
Self::ensure_path_exist(path).map_err(|e| {
error!("blob_cache_dir not exist");
e
})?;
let path = Path::new(blob_ondemand_conf.bootstrap_path.as_str());
if !path.exists() || blob_ondemand_conf.bootstrap_path == String::default() {
return Err(einval!("no valid bootstrap"));
}
let mut rafs_conf = blob_ondemand_conf.rafs_conf.clone();
// we must use direct mode to get mmap'd bootstrap.
rafs_conf.mode = "direct".to_string();
let mut bootstrap =
<dyn RafsIoRead>::from_file(path.to_str().unwrap()).map_err(|e| eother!(e))?;
trace!("blobfs: async create Rafs start!");
let rafs_join_handle = std::thread::spawn(move || {
let mut rafs = match Rafs::new(rafs_conf, "blobfs", &mut bootstrap) {
Ok(rafs) => rafs,
Err(e) => {
error!("blobfs: new rafs failed {:?}.", e);
return None;
}
};
match rafs.import(bootstrap, None) {
Ok(_) => {}
Err(e) => {
error!("blobfs: new rafs failed {:?}.", e);
return None;
}
}
Some(rafs)
});
let rafs_handle = RafsHandle {
rafs: Arc::new(Mutex::new(None)),
handle: Arc::new(Mutex::new(Some(rafs_join_handle))),
};
Ok(BootstrapArgs {
rafs_handle,
blob_cache_dir: blob_ondemand_conf.blob_cache_dir,
})
}
#[cfg(feature = "virtiofs")]
fn stat(f: &File) -> io::Result<libc::stat64> {
// Safe because this is a constant value and a valid C string.
let pathname = unsafe { CStr::from_bytes_with_nul_unchecked(EMPTY_CSTR) };
let mut st = MaybeUninit::<libc::stat64>::zeroed();
// Safe because the kernel will only write data in `st` and we check the return value.
let res = unsafe {
libc::fstatat64(
f.as_raw_fd(),
pathname.as_ptr(),
st.as_mut_ptr(),
libc::AT_EMPTY_PATH | libc::AT_SYMLINK_NOFOLLOW,
)
};
if res >= 0 {
// Safe because the kernel guarantees that the struct is now fully initialized.
Ok(unsafe { st.assume_init() })
} else {
Err(io::Error::last_os_error())
}
}
/// Initialize the PassthroughFs
pub fn import(&self) -> io::Result<()> {
self.pfs.import()
}
#[cfg(feature = "virtiofs")]
fn open_file(dfd: i32, pathname: &Path, flags: i32, mode: u32) -> io::Result<File> {
let pathname = CString::new(pathname.as_os_str().as_bytes())
.map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
let fd = if flags & libc::O_CREAT == libc::O_CREAT {
unsafe { libc::openat(dfd, pathname.as_ptr(), flags, mode) }
} else {
unsafe { libc::openat(dfd, pathname.as_ptr(), flags) }
};
if fd < 0 {
return Err(io::Error::last_os_error());
}
// Safe because we just opened this fd.
Ok(unsafe { File::from_raw_fd(fd) })
}
}
impl BackendFileSystem for BlobFs {
fn mount(&self) -> io::Result<(Entry, u64)> {
let ctx = &Context::default();
let entry = self.lookup(ctx, ROOT_ID, &CString::new(".").unwrap())?;
Ok((entry, VFS_MAX_INO))
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test2)]
mod tests {
use super::*;
use fuse_backend_rs::abi::virtio_fs;
use fuse_backend_rs::transport::FsCacheReqHandler;
use nydus_app::setup_logging;
use std::os::unix::prelude::RawFd;
struct DummyCacheReq {}
impl FsCacheReqHandler for DummyCacheReq {
fn map(
&mut self,
_foffset: u64,
_moffset: u64,
_len: u64,
_flags: u64,
_fd: RawFd,
) -> io::Result<()> {
Ok(())
}
fn unmap(&mut self, _requests: Vec<virtio_fs::RemovemappingOne>) -> io::Result<()> {
Ok(())
}
}
// #[test]
// #[cfg(feature = "virtiofs")]
// fn test_blobfs_new() {
// setup_logging(None, log::LevelFilter::Trace, 0).unwrap();
// let config = r#"
// {
// "device": {
// "backend": {
// "type": "localfs",
// "config": {
// "dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/test4k"
// }
// },
// "cache": {
// "type": "blobcache",
// "compressed": false,
// "config": {
// "work_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
// }
// }
// },
// "mode": "direct",
// "digest_validate": true,
// "enable_xattr": false,
// "fs_prefetch": {
// "enable": false,
// "threads_count": 10,
// "merging_size": 131072,
// "bandwidth_rate": 10485760
// }
// }"#;
// // let rafs_conf = RafsConfig::from_str(config).unwrap();
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// // blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache1".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// // bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-foo".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_err());
// let fs_cfg = Config {
// root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
// .to_string(),
// bootstrap_path: "test4k/bootstrap-link".to_string(),
// blob_cache_dir: "blobcache".to_string(),
// do_import: false,
// no_open: true,
// rafs_conf: config.to_string(),
// ..Default::default()
// };
// assert!(BlobFs::new(fs_cfg).is_ok());
// }
#[test]
fn test_blobfs_setupmapping() {
setup_logging(None, log::LevelFilter::Trace, 0).unwrap();
let config = r#"
{
"rafs_conf": {
"device": {
"backend": {
"type": "localfs",
"config": {
"blob_file": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/nydus-rs/myblob1/v6/blob-btrfs"
}
},
"cache": {
"type": "blobcache",
"compressed": false,
"config": {
"work_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
}
}
},
"mode": "direct",
"digest_validate": false,
"enable_xattr": false,
"fs_prefetch": {
"enable": false,
"threads_count": 10,
"merging_size": 131072,
"bandwidth_rate": 10485760
}
},
"bootstrap_path": "nydus-rs/myblob1/v6/bootstrap-btrfs",
"blob_cache_dir": "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1/blobcache"
}"#;
// let rafs_conf = RafsConfig::from_str(config).unwrap();
let ps_config = PassthroughConfig {
root_dir: "/home/b.liu/1_source/3_ali/virtiofs/qemu-my/build-kangaroo/share_dir1"
.to_string(),
do_import: false,
no_open: true,
..Default::default()
};
let fs_cfg = Config {
ps_config,
blob_ondemand_cfg: config.to_string(),
};
let fs = BlobFs::new(fs_cfg).unwrap();
fs.import().unwrap();
fs.mount().unwrap();
let ctx = &Context::default();
// read bootstrap first, should return err as it's not in blobcache dir.
// let bootstrap = CString::new("foo").unwrap();
// let entry = fs.lookup(ctx, ROOT_ID, &bootstrap).unwrap();
// let mut req = DummyCacheReq {};
// fs.setupmapping(ctx, entry.inode, 0, 0, 4096, 0, 0, &mut req)
// .unwrap();
// FIXME: use a real blob id under test4k.
let blob_cache_dir = CString::new("blobcache").unwrap();
let parent_entry = fs.lookup(ctx, ROOT_ID, &blob_cache_dir).unwrap();
let blob_id = CString::new("80da976ee69d68af6bb9170395f71b4ef1e235e815e2").unwrap();
let entry = fs.lookup(ctx, parent_entry.inode, &blob_id).unwrap();
let foffset = 0;
let len = 1 << 21;
let mut req = DummyCacheReq {};
fs.setupmapping(ctx, entry.inode, 0, foffset, len, 0, 0, &mut req)
.unwrap();
// FIXME: release fs
fs.destroy();
}
}

View File

@ -28,7 +28,17 @@ fn get_git_commit_hash() -> String {
             return commit.to_string();
         }
     }
-    "Unknown".to_string()
+    "unknown".to_string()
+}
+
+fn get_git_commit_version() -> String {
+    let tag = Command::new("git").args(["describe", "--tags"]).output();
+    if let Ok(tag) = tag {
+        if let Some(tag) = String::from_utf8_lossy(&tag.stdout).lines().next() {
+            return tag.to_string();
+        }
+    }
+    "unknown".to_string()
 }
 
 fn main() {
@ -43,10 +53,12 @@ fn main() {
         .format(&time::format_description::well_known::Iso8601::DEFAULT)
         .unwrap();
     let git_commit_hash = get_git_commit_hash();
+    let git_commit_version = get_git_commit_version();
 
     println!("cargo:rerun-if-changed=../git/HEAD");
     println!("cargo:rustc-env=RUSTC_VERSION={}", rustc_ver);
     println!("cargo:rustc-env=PROFILE={}", profile);
     println!("cargo:rustc-env=BUILT_TIME_UTC={}", build_time);
     println!("cargo:rustc-env=GIT_COMMIT_HASH={}", git_commit_hash);
+    println!("cargo:rustc-env=GIT_COMMIT_VERSION={}", git_commit_version);
 }
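As a side note, here is a hedged sketch of how a crate built with this script could read the injected values back at compile time. The struct and constructor are illustrative stand-ins, not the `BuildTimeInfo` API added in `api/src/lib.rs`.

```rust
// Illustrative only: mirrors the environment variables emitted by the build
// script above via `cargo:rustc-env`, read back with the compile-time `env!` macro.
// It compiles only in a crate whose build script actually sets these variables.
struct BuildInfoSketch {
    package_ver: String,
    git_commit: String,
    git_version: String,
    build_time: String,
    profile: String,
    rustc: String,
}

impl BuildInfoSketch {
    fn from_build_env() -> Self {
        Self {
            package_ver: env!("CARGO_PKG_VERSION").to_string(),
            git_commit: env!("GIT_COMMIT_HASH").to_string(),
            git_version: env!("GIT_COMMIT_VERSION").to_string(),
            build_time: env!("BUILT_TIME_UTC").to_string(),
            profile: env!("PROFILE").to_string(),
            rustc: env!("RUSTC_VERSION").to_string(),
        }
    }
}
```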

builder/Cargo.toml Normal file (35 lines)
View File

@ -0,0 +1,35 @@
[package]
name = "nydus-builder"
version = "0.2.0"
description = "Nydus Image Builder"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[dependencies]
anyhow = "1.0.35"
base64 = "0.21"
hex = "0.4.3"
indexmap = "2"
libc = "0.2"
log = "0.4"
nix = "0.24"
serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53"
sha2 = "0.10.2"
tar = "0.4.40"
vmm-sys-util = "0.12.1"
xattr = "1.0.1"
parse-size = "1.1.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.5.0", path = "../utils" }
gix-attributes = "0.25.0"
[package.metadata.docs.rs]
all-features = true
targets = ["x86_64-unknown-linux-gnu", "aarch64-unknown-linux-gnu", "aarch64-apple-darwin"]

builder/src/attributes.rs Normal file (189 lines)
View File

@ -0,0 +1,189 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::{fs, path};
use anyhow::Result;
use gix_attributes::parse;
use gix_attributes::parse::Kind;
const KEY_TYPE: &str = "type";
const KEY_CRCS: &str = "crcs";
const VAL_EXTERNAL: &str = "external";
pub struct Parser {}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Item {
pub pattern: PathBuf,
pub attributes: HashMap<String, String>,
}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Attributes {
pub items: HashMap<PathBuf, HashMap<String, String>>,
pub crcs: HashMap<PathBuf, Vec<u32>>,
}
impl Attributes {
/// Parse nydus attributes from a file.
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
let content = fs::read(path)?;
let _items = parse(&content);
let mut items = HashMap::new();
let mut crcs = HashMap::new();
for _item in _items {
let _item = _item?;
if let Kind::Pattern(pattern) = _item.0 {
let mut path = PathBuf::from(pattern.text.to_string());
if !path.is_absolute() {
path = path::Path::new("/").join(path);
}
let mut current_path = path.clone();
let mut attributes = HashMap::new();
let mut _type = String::new();
let mut _crcs = vec![];
for line in _item.1 {
let line = line?;
let name = line.name.as_str();
let state = line.state.as_bstr().unwrap_or_default();
if name == KEY_TYPE {
_type = state.to_string();
}
if name == KEY_CRCS {
_crcs = state
.to_string()
.split(',')
.map(|s| {
let trimmed = s.trim();
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
stripped
} else {
trimmed
};
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
})
.collect::<Result<Vec<u32>, _>>()?;
}
attributes.insert(name.to_string(), state.to_string());
}
crcs.insert(path.clone(), _crcs);
items.insert(path, attributes);
// process parent directory
while let Some(parent) = current_path.parent() {
if parent == Path::new("/") {
break;
}
let mut attributes = HashMap::new();
if !items.contains_key(parent) {
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
items.insert(parent.to_path_buf(), attributes);
}
current_path = parent.to_path_buf();
}
}
}
Ok(Attributes { items, crcs })
}
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
}
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
if let Some(attributes) = self.items.get(path.as_ref()) {
return self.check_external(attributes);
}
false
}
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
self.items
.iter()
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
}
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
if let Some(attributes) = self.items.get(path.as_ref()) {
return attributes.get(key.as_ref()).map(|s| s.to_string());
}
None
}
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
self.items.get(path.as_ref())
}
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
self.crcs.get(path.as_ref())
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashMap, fs, path::PathBuf};
use super::{Attributes, Item};
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_attribute_parse() {
let file = TempFile::new().unwrap();
fs::write(
file.as_path(),
"/foo type=external crcs=0x1234,0x5678
/bar type=external crcs=0x1234,0x5678
/models/foo/bar type=external",
)
.unwrap();
let attributes = Attributes::from(file.as_path()).unwrap();
let _attributes_base: HashMap<String, String> =
[("type".to_string(), "external".to_string())]
.iter()
.cloned()
.collect();
let _attributes: HashMap<String, String> = [
("type".to_string(), "external".to_string()),
("crcs".to_string(), "0x1234,0x5678".to_string()),
]
.iter()
.cloned()
.collect();
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
Item {
pattern: PathBuf::from("/foo"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/bar"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/models"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo/bar"),
attributes: _attributes_base.clone(),
},
]
.into_iter()
.map(|item| (item.pattern, item.attributes))
.collect();
assert_eq!(attributes.items, items_map);
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
}
}

View File

@ -0,0 +1,283 @@
// Copyright (C) 2023 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate Chunkdict RAFS bootstrap.
//! -------------------------------------------------------------------------------------------------
//! Bug 1: Inconsistent Chunk Size Leading to Blob Size Less Than 4K(v6_block_size)
//! Description: The size of chunks is not consistent, which results in the possibility that a blob,
//! composed of a group of these chunks, may be less than 4K(v6_block_size) in size.
//! This inconsistency leads to a failure in passing the size check.
//! -------------------------------------------------------------------------------------------------
//! Bug 2: Incorrect Chunk Number Calculation Due to Premature Check Logic
//! Description: The current logic for calculating the chunk number is based on the formula size/chunk size.
//! However, this approach is flawed as it precedes the actual check which accounts for chunk statistics.
//! Consequently, this leads to inaccurate counting of chunk numbers.
use super::core::node::{ChunkSource, NodeInfo};
use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree};
use crate::core::node::Node;
use crate::NodeChunk;
use crate::OsString;
use anyhow::{Ok, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress::Algorithm;
use nydus_utils::digest::RafsDigest;
use std::mem::size_of;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ChunkdictChunkInfo {
pub image_reference: String,
pub version: String,
pub chunk_blob_id: String,
pub chunk_digest: String,
pub chunk_crc32: u32,
pub chunk_compressed_size: u32,
pub chunk_uncompressed_size: u32,
pub chunk_compressed_offset: u64,
pub chunk_uncompressed_offset: u64,
}
pub struct ChunkdictBlobInfo {
pub blob_id: String,
pub blob_compressed_size: u64,
pub blob_uncompressed_size: u64,
pub blob_compressor: String,
pub blob_meta_ci_compressed_size: u64,
pub blob_meta_ci_uncompressed_size: u64,
pub blob_meta_ci_offset: u64,
}
/// Struct to generate chunkdict RAFS bootstrap.
pub struct Generator {}
impl Generator {
// Generate chunkdict RAFS bootstrap.
pub fn generate(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
chunkdict_chunks_origin: Vec<ChunkdictChunkInfo>,
chunkdict_blobs: Vec<ChunkdictBlobInfo>,
) -> Result<BuildOutput> {
// Validate and remove chunks whose belonged blob sizes are smaller than a block.
let mut chunkdict_chunks = chunkdict_chunks_origin.to_vec();
Self::validate_and_remove_chunks(ctx, &mut chunkdict_chunks);
// Build root tree.
let mut tree = Self::build_root_tree(ctx)?;
// Build child tree.
let child = Self::build_child_tree(ctx, blob_mgr, &chunkdict_chunks, &chunkdict_blobs)?;
let result = vec![child];
tree.children = result;
Self::validate_tree(&tree)?;
// Build bootstrap.
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
/// Validate tree.
fn validate_tree(tree: &Tree) -> Result<()> {
let pre = &mut |t: &Tree| -> Result<()> {
let node = t.borrow_mut_node();
debug!("chunkdict tree: ");
debug!("inode: {}", node);
for chunk in &node.chunks {
debug!("\t chunk: {}", chunk);
}
Ok(())
};
tree.walk_dfs_pre(pre)?;
debug!("chunkdict tree is valid.");
Ok(())
}
/// Validates and removes chunks with a total uncompressed size smaller than the block size limit.
fn validate_and_remove_chunks(ctx: &mut BuildContext, chunkdict: &mut Vec<ChunkdictChunkInfo>) {
let mut chunk_sizes = std::collections::HashMap::new();
// Accumulate the uncompressed size for each chunk_blob_id.
for chunk in chunkdict.iter() {
*chunk_sizes.entry(chunk.chunk_blob_id.clone()).or_insert(0) +=
chunk.chunk_uncompressed_size as u64;
}
// Find all chunk_blob_ids with a total uncompressed size smaller than v6_block_size.
let small_chunks: Vec<String> = chunk_sizes
.into_iter()
.filter(|&(_, size)| size < ctx.v6_block_size())
.inspect(|(id, _)| {
eprintln!(
"Warning: Blob with id '{}' is smaller than {} bytes.",
id,
ctx.v6_block_size()
)
})
.map(|(id, _)| id)
.collect();
// Retain only chunks whose chunk_blob_id has a total uncompressed size >= v6_block_size.
chunkdict.retain(|chunk| !small_chunks.contains(&chunk.chunk_blob_id));
}
/// Build the root tree.
pub fn build_root_tree(ctx: &mut BuildContext) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(1);
inode.set_uid(1000);
inode.set_gid(1000);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFDIR as u32);
inode.set_nlink(3);
inode.set_name_size("/".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 0,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/"),
target: PathBuf::from("/"),
target_vec: vec![OsString::from("/")],
symlink: None,
xattrs: RafsXAttrs::default(),
v6_force_extended_inode: true,
};
let root_node = Node::new(inode, node_info, 0);
let tree = Tree::new(root_node);
Ok(tree)
}
/// Build the child tree.
fn build_child_tree(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<Tree> {
let mut inode = InodeWrapper::new(ctx.fs_version);
inode.set_ino(2);
inode.set_uid(0);
inode.set_gid(0);
inode.set_projid(0);
inode.set_mode(0o660 | libc::S_IFREG as u32);
inode.set_nlink(1);
inode.set_name_size("chunkdict".len());
inode.set_rdev(0);
inode.set_blocks(256);
let node_info = NodeInfo {
explicit_uidgid: true,
src_dev: 0,
src_ino: 1,
rdev: 0,
source: PathBuf::from("/"),
path: PathBuf::from("/chunkdict"),
target: PathBuf::from("/chunkdict"),
target_vec: vec![OsString::from("/"), OsString::from("/chunkdict")],
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: true,
};
let mut node = Node::new(inode, node_info, 0);
// Insert chunks.
Self::insert_chunks(ctx, blob_mgr, &mut node, chunkdict_chunks, chunkdict_blobs)?;
let node_size: u64 = node
.chunks
.iter()
.map(|chunk| chunk.inner.uncompressed_size() as u64)
.sum();
node.inode.set_size(node_size);
// Update child count.
node.inode.set_child_count(node.chunks.len() as u32);
let child = Tree::new(node);
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
Ok(child)
}
/// Insert chunks.
fn insert_chunks(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
node: &mut Node,
chunkdict_chunks: &[ChunkdictChunkInfo],
chunkdict_blobs: &[ChunkdictBlobInfo],
) -> Result<()> {
for (index, chunk_info) in chunkdict_chunks.iter().enumerate() {
let chunk_size: u32 = chunk_info.chunk_compressed_size;
let file_offset = index as u64 * chunk_size as u64;
let mut chunk = ChunkWrapper::new(ctx.fs_version);
// Update blob context.
let (blob_index, blob_ctx) =
blob_mgr.get_or_cerate_blob_for_chunkdict(ctx, &chunk_info.chunk_blob_id)?;
let chunk_uncompressed_size = chunk_info.chunk_uncompressed_size;
let pre_d_offset = blob_ctx.current_uncompressed_offset;
blob_ctx.uncompressed_blob_size = pre_d_offset + chunk_uncompressed_size as u64;
blob_ctx.current_uncompressed_offset += chunk_uncompressed_size as u64;
blob_ctx.blob_meta_header.set_ci_uncompressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
blob_ctx.blob_meta_header.set_ci_compressed_size(
blob_ctx.blob_meta_header.ci_uncompressed_size()
+ size_of::<BlobChunkInfoV1Ondisk>() as u64,
);
let chunkdict_blob_info = chunkdict_blobs
.iter()
.find(|blob| blob.blob_id == chunk_info.chunk_blob_id)
.unwrap();
blob_ctx.blob_compressor =
Algorithm::from_str(chunkdict_blob_info.blob_compressor.as_str())?;
blob_ctx
.blob_meta_header
.set_ci_uncompressed_size(chunkdict_blob_info.blob_meta_ci_uncompressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_size(chunkdict_blob_info.blob_meta_ci_compressed_size);
blob_ctx
.blob_meta_header
.set_ci_compressed_offset(chunkdict_blob_info.blob_meta_ci_offset);
blob_ctx.blob_meta_header.set_ci_compressor(Algorithm::Zstd);
// Update chunk context.
let chunk_index = blob_ctx.alloc_chunk_index()?;
chunk.set_blob_index(blob_index);
chunk.set_index(chunk_index);
chunk.set_file_offset(file_offset);
chunk.set_compressed_size(chunk_info.chunk_compressed_size);
chunk.set_compressed_offset(chunk_info.chunk_compressed_offset);
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
chunk.set_crc32(chunk_info.chunk_crc32);
node.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk.clone()),
});
}
Ok(())
}
}

builder/src/compact.rs Normal file (1362 lines)

File diff suppressed because it is too large.

builder/src/core/blob.rs Normal file (364 lines)
View File

@ -0,0 +1,364 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::borrow::Cow;
use std::slice;
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{toc, BlobMetaChunkArray};
use nydus_utils::digest::{self, DigestHasher, RafsDigest};
use nydus_utils::{compress, crypt};
use sha2::digest::Digest;
use super::layout::BlobLayout;
use super::node::Node;
use crate::core::context::Artifact;
use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature};
const VALID_BLOB_ID_LENGTH: usize = 64;
/// Generator for RAFS data blob.
pub(crate) struct Blob {}
impl Blob {
/// Dump blob file and generate chunks
pub(crate) fn dump(
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
match ctx.conversion_type {
ConversionType::DirectoryToRafs => {
let mut chunk_data_buf = vec![0u8; RAFS_MAX_CHUNK_SIZE as usize];
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&ctx.prefetch)?;
for (idx, node) in inodes.iter().enumerate() {
let mut node = node.borrow_mut();
let size = node
.dump_node_data(ctx, blob_mgr, blob_writer, &mut chunk_data_buf)
.context("failed to dump blob chunks")?;
if idx < prefetch_entries {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
blob_ctx.blob_prefetch_size += size;
}
}
}
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToRafs
| ConversionType::TargzToRafs
| ConversionType::EStargzToRafs => {
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToTarfs
| ConversionType::TarToRef
| ConversionType::TargzToRef
| ConversionType::EStargzToRef => {
// Use `sha256(tarball)` as `blob_id` for ref-type conversions.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if let Some(zran) = &ctx.blob_zran_generator {
let reader = zran.lock().unwrap().reader();
blob_ctx.compressed_blob_size = reader.get_data_size();
if blob_ctx.blob_id.is_empty() {
let hash = reader.get_data_digest();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if ctx.conversion_type == ConversionType::TarToTarfs {
blob_ctx.uncompressed_blob_size = blob_ctx.compressed_blob_size;
}
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
}
}
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::EStargzIndexToRef => {
Self::finalize_blob_data(ctx, blob_mgr, blob_writer)?;
}
ConversionType::TarToStargz
| ConversionType::DirectoryToTargz
| ConversionType::DirectoryToStargz
| ConversionType::TargzToStargz => {
unimplemented!()
}
}
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
blob_ctx.set_blob_prefetch_size(ctx);
}
Ok(())
}
pub fn finalize_blob_data(
ctx: &BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump buffered batch chunk data if exists.
if let Some(ref batch) = ctx.blob_batch_generator {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let mut batch = batch.lock().unwrap();
if !batch.chunk_data_buf_is_empty() {
let (_, compressed_size, _) = Node::write_chunk_data(
&ctx,
blob_ctx,
blob_writer,
batch.chunk_data_buf(),
)?;
batch.add_context(compressed_size);
batch.clear_chunk_data_buf();
}
}
}
if !ctx.blob_features.contains(BlobFeatures::SEPARATE)
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
{
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.external {
return Ok(());
}
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BLOB_RAW,
blob_ctx.compressed_blob_size,
)?;
if ctx.features.is_enabled(Feature::BlobToc) {
let blob_digest = RafsDigest {
data: blob_ctx.blob_hash.clone().finalize().into(),
};
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_RAW,
compress::Algorithm::None,
blob_digest,
blob_ctx.compressed_offset(),
blob_ctx.compressed_blob_size,
blob_ctx.uncompressed_blob_size,
)?;
}
}
}
// check blobs to make sure all blobs are valid.
if blob_mgr.external {
for (index, blob_ctx) in blob_mgr.get_blobs().iter().enumerate() {
if blob_ctx.blob_id.len() != VALID_BLOB_ID_LENGTH {
bail!(
"invalid blob id:{}, length:{}, index:{}",
blob_ctx.blob_id,
blob_ctx.blob_id.len(),
index
);
}
}
}
Ok(())
}
fn get_compression_algorithm_for_meta(ctx: &BuildContext) -> compress::Algorithm {
if ctx.conversion_type.is_to_ref() {
compress::Algorithm::Zstd
} else {
ctx.compressor
}
}
pub(crate) fn dump_meta_data(
ctx: &BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Dump blob meta for v6 when it has chunks or bootstrap is to be inlined.
if !blob_ctx.blob_meta_info_enabled || blob_ctx.uncompressed_blob_size == 0 {
return Ok(());
}
// Prepare blob meta information data.
let encrypt = ctx.cipher != crypt::Algorithm::None;
let cipher_obj = &blob_ctx.cipher_object;
let cipher_ctx = &blob_ctx.cipher_ctx;
let blob_meta_info = &blob_ctx.blob_meta_info;
let mut ci_data = blob_meta_info.as_byte_slice();
let mut inflate_buf = Vec::new();
let mut header = blob_ctx.blob_meta_header;
if let Some(ref zran) = ctx.blob_zran_generator {
let (inflate_data, inflate_count) = zran.lock().unwrap().to_vec()?;
header.set_ci_zran_count(inflate_count);
header.set_ci_zran_offset(ci_data.len() as u64);
header.set_ci_zran_size(inflate_data.len() as u64);
header.set_ci_zran(true);
header.set_separate_blob(true);
inflate_buf = [ci_data, &inflate_data].concat();
ci_data = &inflate_buf;
} else if let Some(ref batch) = ctx.blob_batch_generator {
let (inflate_data, inflate_count) = batch.lock().unwrap().to_vec()?;
header.set_ci_zran_count(inflate_count);
header.set_ci_zran_offset(ci_data.len() as u64);
header.set_ci_zran_size(inflate_data.len() as u64);
header.set_ci_batch(true);
inflate_buf = [ci_data, &inflate_data].concat();
ci_data = &inflate_buf;
} else if ctx.blob_tar_reader.is_some() {
header.set_separate_blob(true);
};
let mut compressor = Self::get_compression_algorithm_for_meta(ctx);
let (compressed_data, compressed) = compress::compress(ci_data, compressor)
.with_context(|| "failed to compress blob chunk info array".to_string())?;
if !compressed {
compressor = compress::Algorithm::None;
}
let encrypted_ci_data =
crypt::encrypt_with_context(&compressed_data, cipher_obj, cipher_ctx, encrypt)?;
let compressed_offset = blob_writer.pos()?;
let compressed_size = encrypted_ci_data.len() as u64;
let uncompressed_size = ci_data.len() as u64;
header.set_ci_compressor(compressor);
header.set_ci_entries(blob_meta_info.len() as u32);
header.set_ci_compressed_offset(compressed_offset);
header.set_ci_compressed_size(compressed_size as u64);
header.set_ci_uncompressed_size(uncompressed_size as u64);
header.set_aligned(true);
match blob_meta_info {
BlobMetaChunkArray::V1(_) => header.set_chunk_info_v2(false),
BlobMetaChunkArray::V2(_) => header.set_chunk_info_v2(true),
}
if ctx.features.is_enabled(Feature::BlobToc) && blob_ctx.chunk_count > 0 {
header.set_inlined_chunk_digest(true);
}
blob_ctx.blob_meta_header = header;
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.write_blob_meta(ci_data, &header)?;
}
let encrypted_header =
crypt::encrypt_with_context(header.as_bytes(), cipher_obj, cipher_ctx, encrypt)?;
let header_size = encrypted_header.len();
// Write blob meta data and header
match encrypted_ci_data {
Cow::Owned(v) => blob_ctx.write_data(blob_writer, &v)?,
Cow::Borrowed(v) => {
let buf = v.to_vec();
blob_ctx.write_data(blob_writer, &buf)?;
}
}
blob_ctx.write_data(blob_writer, &encrypted_header)?;
// Write tar header for `blob.meta`.
if ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc) {
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BLOB_META,
compressed_size + header_size as u64,
)?;
}
// Generate ToC entry for `blob.meta` and write chunk digest array.
if ctx.features.is_enabled(Feature::BlobToc) {
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
let ci_data = if ctx.blob_features.contains(BlobFeatures::BATCH)
|| ctx.blob_features.contains(BlobFeatures::ZRAN)
{
inflate_buf.as_slice()
} else {
blob_ctx.blob_meta_info.as_byte_slice()
};
hasher.digest_update(ci_data);
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_META,
compressor,
hasher.digest_finalize(),
compressed_offset,
compressed_size as u64,
uncompressed_size as u64,
)?;
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
hasher.digest_update(header.as_bytes());
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_META_HEADER,
compress::Algorithm::None,
hasher.digest_finalize(),
compressed_offset + compressed_size,
header_size as u64,
header_size as u64,
)?;
let buf = unsafe {
slice::from_raw_parts(
blob_ctx.blob_chunk_digest.as_ptr() as *const u8,
blob_ctx.blob_chunk_digest.len() * 32,
)
};
assert!(!buf.is_empty());
// The chunk digest array is almost incompressible, no need for compression.
let digest = RafsDigest::from_buf(buf, digest::Algorithm::Sha256);
let compressed_offset = blob_writer.pos()?;
let size = buf.len() as u64;
blob_writer.write_all(buf)?;
blob_ctx.write_tar_header(blob_writer, toc::TOC_ENTRY_BLOB_DIGEST, size)?;
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BLOB_DIGEST,
compress::Algorithm::None,
digest,
compressed_offset,
size,
size,
)?;
}
Ok(())
}
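// Editor's sketch (not part of the original source): the write order produced by
// dump_meta_data() above, for a blob with inline meta and the `blob-toc` feature enabled,
// is roughly:
//
//   [compressed + encrypted chunk-info array]
//   [encrypted blob meta header]
//   [tar header for toc::TOC_ENTRY_BLOB_META]
//   [chunk digest array]
//   [tar header for toc::TOC_ENTRY_BLOB_DIGEST]
//
// with matching ToC entries recorded for TOC_ENTRY_BLOB_META, TOC_ENTRY_BLOB_META_HEADER
// and TOC_ENTRY_BLOB_DIGEST. Exact sizes and offsets depend on the chosen compressor and cipher.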
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_default_compression_algorithm_for_meta_ci() {
let mut ctx = BuildContext::default();
//TarToRef
ctx = BuildContext {
conversion_type: ConversionType::TarToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//EStargzIndexToRef
ctx = BuildContext {
conversion_type: ConversionType::EStargzIndexToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//TargzToRef
ctx = BuildContext {
conversion_type: ConversionType::TargzToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
//TarToRef
ctx = BuildContext {
conversion_type: ConversionType::TarToRef,
..ctx
};
let compressor = Blob::get_compression_algorithm_for_meta(&ctx);
assert_eq!(compressor, compress::Algorithm::Zstd);
}
}


@ -0,0 +1,214 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::{Context, Error, Result};
use nydus_utils::digest::{self, RafsDigest};
use std::ops::Deref;
use nydus_rafs::metadata::layout::{RafsBlobTable, RAFS_V5_ROOT_INODE};
use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig, RafsSuperFlags};
use crate::{ArtifactStorage, BlobManager, BootstrapContext, BootstrapManager, BuildContext, Tree};
/// RAFS bootstrap/meta builder.
pub struct Bootstrap {
pub(crate) tree: Tree,
}
impl Bootstrap {
/// Create a new instance of [Bootstrap].
pub fn new(tree: Tree) -> Result<Self> {
Ok(Self { tree })
}
/// Build the final view of the RAFS filesystem meta from the hierarchy `tree`.
pub fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
) -> Result<()> {
// Special handling of the root inode
let mut root_node = self.tree.borrow_mut_node();
assert!(root_node.is_dir());
let index = bootstrap_ctx.generate_next_ino();
// 0 is reserved and 1 also matches RAFS_V5_ROOT_INODE.
assert_eq!(index, RAFS_V5_ROOT_INODE);
root_node.index = index;
root_node.inode.set_ino(index);
ctx.prefetch.insert(&self.tree.node, root_node.deref());
bootstrap_ctx.inode_map.insert(
(
root_node.layer_idx,
root_node.info.src_ino,
root_node.info.src_dev,
),
vec![self.tree.node.clone()],
);
drop(root_node);
Self::build_rafs(ctx, bootstrap_ctx, &mut self.tree)?;
if ctx.fs_version.is_v6() {
let root_offset = self.tree.node.borrow().v6_offset;
Self::v6_update_dirents(&self.tree, root_offset);
}
Ok(())
}
/// Dump the RAFS filesystem meta information to meta blob.
pub fn dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_storage: &mut Option<ArtifactStorage>,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsBlobTable,
) -> Result<()> {
match blob_table {
RafsBlobTable::V5(table) => self.v5_dump(ctx, bootstrap_ctx, table)?,
RafsBlobTable::V6(table) => self.v6_dump(ctx, bootstrap_ctx, table)?,
}
if let Some(ArtifactStorage::FileDir(p)) = bootstrap_storage {
let bootstrap_data = bootstrap_ctx.writer.as_bytes()?;
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
let name = digest.to_string();
bootstrap_ctx.writer.finalize(Some(name.clone()))?;
let mut path = p.0.join(name);
path.set_extension(&p.1);
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
Ok(())
} else {
bootstrap_ctx.writer.finalize(Some(String::default()))
}
}
/// Traverse the node tree, setting the inode index, ino, child_index, child_count, etc.
/// according to the RAFS metadata format, then store the nodes into the node collection.
fn build_rafs(
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
tree: &mut Tree,
) -> Result<()> {
let parent_node = tree.node.clone();
let mut parent_node = parent_node.borrow_mut();
let parent_ino = parent_node.inode.ino();
let block_size = ctx.v6_block_size();
// In case of multi-layer building, it's possible that the parent node is not a directory.
if parent_node.is_dir() {
parent_node
.inode
.set_child_count(tree.children.len() as u32);
if ctx.fs_version.is_v5() {
parent_node
.inode
.set_child_index(bootstrap_ctx.get_next_ino() as u32);
} else if ctx.fs_version.is_v6() {
// Layout directory entries for v6.
let d_size = parent_node.v6_dirent_size(ctx, tree)?;
parent_node.v6_set_dir_offset(bootstrap_ctx, d_size, block_size)?;
}
}
let mut dirs: Vec<&mut Tree> = Vec::new();
for child in tree.children.iter_mut() {
let child_node = child.node.clone();
let mut child_node = child_node.borrow_mut();
let index = bootstrap_ctx.generate_next_ino();
child_node.index = index;
if ctx.fs_version.is_v5() {
child_node.inode.set_parent(parent_ino);
}
// Handle hardlinks.
// All hardlinked nodes should share the same ino and nlink.
// We need to find the hardlink node index list in the layer where the node is located,
// because the real_ino may differ between layers.
let mut v6_hardlink_offset: Option<u64> = None;
let key = (
child_node.layer_idx,
child_node.info.src_ino,
child_node.info.src_dev,
);
if let Some(indexes) = bootstrap_ctx.inode_map.get_mut(&key) {
let nlink = indexes.len() as u32 + 1;
// Update nlink for previous hardlink inodes
for n in indexes.iter() {
n.borrow_mut().inode.set_nlink(nlink);
}
let (first_ino, first_offset) = {
let first_node = indexes[0].borrow_mut();
(first_node.inode.ino(), first_node.v6_offset)
};
// set offset for rafs v6 hardlinks
v6_hardlink_offset = Some(first_offset);
child_node.inode.set_nlink(nlink);
child_node.inode.set_ino(first_ino);
indexes.push(child.node.clone());
} else {
child_node.inode.set_ino(index);
child_node.inode.set_nlink(1);
// Store inode real ino
bootstrap_ctx
.inode_map
.insert(key, vec![child.node.clone()]);
}
// update bootstrap_ctx.offset for rafs v6 non-dir nodes.
if !child_node.is_dir() && ctx.fs_version.is_v6() {
child_node.v6_set_offset(bootstrap_ctx, v6_hardlink_offset, block_size)?;
}
ctx.prefetch.insert(&child.node, child_node.deref());
if child_node.is_dir() {
dirs.push(child);
}
}
// According to filesystem semantics, a parent directory should have nlink equal to
// the number of its child directories plus 2: one link for its entry in its parent,
// one for its own "." entry, and one ".." entry from each child directory.
if parent_node.is_dir() {
parent_node.inode.set_nlink((2 + dirs.len()) as u32);
}
for dir in dirs {
Self::build_rafs(ctx, bootstrap_ctx, dir)?;
}
Ok(())
}
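// Editor's note (not part of the original source): with the hardlink handling above,
// if three entries in one layer share the same (layer_idx, src_ino, src_dev) key,
// they all end up with the first entry's ino and v6 offset, and nlink == 3 on each of them.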
/// Load a parent RAFS bootstrap and return the `Tree` object representing the filesystem.
pub fn load_parent_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<Tree> {
let rs = if let Some(path) = bootstrap_mgr.f_parent_path.as_ref() {
RafsSuper::load_from_file(path, ctx.configuration.clone(), false).map(|(rs, _)| rs)?
} else {
return Err(Error::msg("bootstrap context's parent bootstrap is null"));
};
let config = RafsSuperConfig {
compressor: ctx.compressor,
digester: ctx.digester,
chunk_size: ctx.chunk_size,
batch_size: ctx.batch_size,
explicit_uidgid: ctx.explicit_uidgid,
version: ctx.fs_version,
is_tarfs_mode: rs.meta.flags.contains(RafsSuperFlags::TARTFS_MODE),
};
config.check_compatibility(&rs.meta)?;
// Reuse the lower layer blob table;
// we need to append the upper layer's blob entries to the table.
blob_mgr.extend_from_blob_table(ctx, rs.superblock.get_blob_infos())?;
// Build the node tree of the lower layer from a bootstrap file, and add chunks
// of the lower nodes to layered_chunk_dict for chunk deduplication in the next step.
Tree::from_bootstrap(&rs, &mut blob_mgr.layered_chunk_dict)
.context("failed to build tree from bootstrap")
}
}


@ -0,0 +1,280 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::{BTreeMap, HashMap};
use std::mem::size_of;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, Mutex};
use anyhow::{bail, Context, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::layout::v5::RafsV5ChunkInfo;
use nydus_rafs::metadata::{RafsSuper, RafsSuperConfig};
use nydus_storage::device::BlobInfo;
use nydus_utils::digest::{self, RafsDigest};
use crate::Tree;
#[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)]
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32, pub Option<u32>);
/// Trait to manage chunk cache for chunk deduplication.
pub trait ChunkDict: Sync + Send + 'static {
/// Add a chunk into the cache.
fn add_chunk(&mut self, chunk: Arc<ChunkWrapper>, digester: digest::Algorithm);
/// Get a cached chunk from the cache.
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&Arc<ChunkWrapper>>;
/// Get all `BlobInfo` objects referenced by cached chunks.
fn get_blobs(&self) -> Vec<Arc<BlobInfo>>;
/// Get the `BlobInfo` object with inner index `idx`.
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&Arc<BlobInfo>>;
/// Associate an external index with the inner index.
fn set_real_blob_idx(&self, inner_idx: u32, out_idx: u32);
/// Get the external index associated with an inner index.
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32>;
/// Get the digest algorithm used to generate chunk digest.
fn digester(&self) -> digest::Algorithm;
}
impl ChunkDict for () {
fn add_chunk(&mut self, _chunk: Arc<ChunkWrapper>, _digester: digest::Algorithm) {}
fn get_chunk(
&self,
_digest: &RafsDigest,
_uncompressed_size: u32,
) -> Option<&Arc<ChunkWrapper>> {
None
}
fn get_blobs(&self) -> Vec<Arc<BlobInfo>> {
Vec::new()
}
fn get_blob_by_inner_idx(&self, _idx: u32) -> Option<&Arc<BlobInfo>> {
None
}
fn set_real_blob_idx(&self, _inner_idx: u32, _out_idx: u32) {
panic!("()::set_real_blob_idx() should not be invoked");
}
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32> {
Some(inner_idx)
}
fn digester(&self) -> digest::Algorithm {
digest::Algorithm::Sha256
}
}
/// An implementation of [ChunkDict] based on [HashMap].
pub struct HashChunkDict {
m: HashMap<RafsDigest, (Arc<ChunkWrapper>, AtomicU32)>,
blobs: Vec<Arc<BlobInfo>>,
blob_idx_m: Mutex<BTreeMap<u32, u32>>,
digester: digest::Algorithm,
}
impl ChunkDict for HashChunkDict {
fn add_chunk(&mut self, chunk: Arc<ChunkWrapper>, digester: digest::Algorithm) {
if self.digester == digester {
if let Some(e) = self.m.get(chunk.id()) {
e.1.fetch_add(1, Ordering::AcqRel);
} else {
self.m
.insert(chunk.id().to_owned(), (chunk, AtomicU32::new(1)));
}
}
}
fn get_chunk(&self, digest: &RafsDigest, uncompressed_size: u32) -> Option<&Arc<ChunkWrapper>> {
if let Some((chunk, _)) = self.m.get(digest) {
if chunk.uncompressed_size() == 0 || chunk.uncompressed_size() == uncompressed_size {
return Some(chunk);
}
}
None
}
fn get_blobs(&self) -> Vec<Arc<BlobInfo>> {
self.blobs.clone()
}
fn get_blob_by_inner_idx(&self, idx: u32) -> Option<&Arc<BlobInfo>> {
self.blobs.get(idx as usize)
}
fn set_real_blob_idx(&self, inner_idx: u32, out_idx: u32) {
self.blob_idx_m.lock().unwrap().insert(inner_idx, out_idx);
}
fn get_real_blob_idx(&self, inner_idx: u32) -> Option<u32> {
self.blob_idx_m.lock().unwrap().get(&inner_idx).copied()
}
fn digester(&self) -> digest::Algorithm {
self.digester
}
}
impl HashChunkDict {
/// Create a new instance of [HashChunkDict].
pub fn new(digester: digest::Algorithm) -> Self {
HashChunkDict {
m: Default::default(),
blobs: vec![],
blob_idx_m: Mutex::new(Default::default()),
digester,
}
}
/// Get an immutable reference to the internal `HashMap`.
pub fn hashmap(&self) -> &HashMap<RafsDigest, (Arc<ChunkWrapper>, AtomicU32)> {
&self.m
}
/// Parse commandline argument for chunk dictionary and load chunks into the dictionary.
pub fn from_commandline_arg(
arg: &str,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Arc<dyn ChunkDict>> {
let file_path = parse_chunk_dict_arg(arg)?;
HashChunkDict::from_bootstrap_file(&file_path, config, rafs_config)
.map(|d| Arc::new(d) as Arc<dyn ChunkDict>)
}
/// Load chunks from the RAFS filesystem into the chunk dictionary.
pub fn from_bootstrap_file(
path: &Path,
config: Arc<ConfigV2>,
rafs_config: &RafsSuperConfig,
) -> Result<Self> {
let (rs, _) = RafsSuper::load_from_file(path, config, true)
.with_context(|| format!("failed to open bootstrap file {:?}", path))?;
let mut d = HashChunkDict {
m: HashMap::new(),
blobs: rs.superblock.get_blob_infos(),
blob_idx_m: Mutex::new(BTreeMap::new()),
digester: rafs_config.digester,
};
rafs_config.check_compatibility(&rs.meta)?;
if rs.meta.is_v5() || rs.meta.has_inlined_chunk_digest() {
Tree::from_bootstrap(&rs, &mut d).context("failed to build tree from bootstrap")?;
} else if rs.meta.is_v6() {
d.load_chunk_table(&rs)
.context("failed to load chunk table")?;
} else {
unimplemented!()
}
Ok(d)
}
fn load_chunk_table(&mut self, rs: &RafsSuper) -> Result<()> {
let size = rs.meta.chunk_table_size as usize;
if size == 0 || self.digester != rs.meta.get_digester() {
return Ok(());
}
let unit_size = size_of::<RafsV5ChunkInfo>();
if size % unit_size != 0 {
return Err(std::io::Error::from_raw_os_error(libc::EINVAL)).with_context(|| {
format!(
"load_chunk_table: invalid rafs v6 chunk table size {}",
size
)
});
}
for idx in 0..(size / unit_size) {
let chunk = rs.superblock.get_chunk_info(idx)?;
let chunk_info = Arc::new(ChunkWrapper::from_chunk_info(chunk));
self.add_chunk(chunk_info, self.digester);
}
Ok(())
}
}
/// Parse a chunk dictionary argument string.
///
/// # Argument
/// `arg` may be in the form of:
/// - type=path: type of external source and corresponding path
/// - path: type defaults to "bootstrap"
///
/// For example:
/// bootstrap=image.boot
/// image.boot
/// ~/image/image.boot
/// boltdb=/var/db/dict.db (not supported yet)
pub fn parse_chunk_dict_arg(arg: &str) -> Result<PathBuf> {
let (file_type, file_path) = match arg.find('=') {
None => ("bootstrap", arg),
Some(idx) => (&arg[0..idx], &arg[idx + 1..]),
};
debug!("parse chunk dict argument {}={}", file_type, file_path);
match file_type {
"bootstrap" => Ok(PathBuf::from(file_path)),
_ => bail!("invalid chunk dict type {}", file_type),
}
}
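// Editor's sketch (not part of the original source): how the two accepted argument
// forms resolve, assuming the snippet runs where `parse_chunk_dict_arg` is in scope:
//
//     assert_eq!(
//         parse_chunk_dict_arg("bootstrap=image.boot").unwrap(),
//         PathBuf::from("image.boot")
//     );
//     assert_eq!(parse_chunk_dict_arg("image.boot").unwrap(), PathBuf::from("image.boot"));
//     assert!(parse_chunk_dict_arg("boltdb=/var/db/dict.db").is_err()); // type not supported yet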
#[cfg(test)]
mod tests {
use super::*;
use nydus_rafs::metadata::RafsVersion;
use nydus_utils::{compress, digest};
use std::path::PathBuf;
#[test]
fn test_null_dict() {
let mut dict = Box::new(()) as Box<dyn ChunkDict>;
let chunk = Arc::new(ChunkWrapper::new(RafsVersion::V5));
dict.add_chunk(chunk.clone(), digest::Algorithm::Sha256);
assert!(dict.get_chunk(chunk.id(), 0).is_none());
assert_eq!(dict.get_blobs().len(), 0);
assert_eq!(dict.get_real_blob_idx(5).unwrap(), 5);
}
#[test]
fn test_chunk_dict() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path = PathBuf::from(root_dir);
source_path.push("../tests/texture/bootstrap/rafs-v5.boot");
let path = source_path.to_str().unwrap();
let rafs_config = RafsSuperConfig {
version: RafsVersion::V5,
compressor: compress::Algorithm::Lz4Block,
digester: digest::Algorithm::Blake3,
chunk_size: 0x100000,
batch_size: 0,
explicit_uidgid: true,
is_tarfs_mode: false,
};
let dict =
HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config)
.unwrap();
assert!(dict.get_chunk(&RafsDigest::default(), 0).is_none());
assert_eq!(dict.get_blobs().len(), 18);
dict.set_real_blob_idx(0, 10);
assert_eq!(dict.get_real_blob_idx(0), Some(10));
assert_eq!(dict.get_real_blob_idx(1), None);
}
}

builder/src/core/context.rs (new file, 1677 lines; diff suppressed because it is too large)

@ -0,0 +1,94 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashSet;
use std::convert::TryFrom;
use anyhow::{bail, Result};
const ERR_UNSUPPORTED_FEATURE: &str = "unsupported feature";
/// Feature flags to control behavior of RAFS filesystem builder.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub enum Feature {
/// Append a Table Of Content footer to RAFS v6 data blob, to help locate data sections.
BlobToc,
}
impl TryFrom<&str> for Feature {
type Error = anyhow::Error;
fn try_from(f: &str) -> Result<Self> {
match f {
"blob-toc" => Ok(Self::BlobToc),
_ => bail!(
"{} `{}`, please try upgrading to the latest nydus-image",
ERR_UNSUPPORTED_FEATURE,
f,
),
}
}
}
/// A set of enabled feature flags to control behavior of RAFS filesystem builder
#[derive(Clone, Debug)]
pub struct Features(HashSet<Feature>);
impl Default for Features {
fn default() -> Self {
Self::new()
}
}
impl Features {
/// Create a new instance of [Features].
pub fn new() -> Self {
Self(HashSet::new())
}
/// Check whether a feature is enabled or not.
pub fn is_enabled(&self, feature: Feature) -> bool {
self.0.contains(&feature)
}
}
impl TryFrom<&str> for Features {
type Error = anyhow::Error;
fn try_from(features: &str) -> Result<Self> {
let mut list = Features::new();
for feat in features.trim().split(',') {
if !feat.is_empty() {
let feature = Feature::try_from(feat.trim())?;
list.0.insert(feature);
}
}
Ok(list)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_feature() {
assert_eq!(Feature::try_from("blob-toc").unwrap(), Feature::BlobToc);
Feature::try_from("unknown-feature-bit").unwrap_err();
}
#[test]
fn test_features() {
let features = Features::try_from("blob-toc").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc,").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc, ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from("blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
let features = Features::try_from(" blob-toc ").unwrap();
assert!(features.is_enabled(Feature::BlobToc));
}
}


@ -0,0 +1,62 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use anyhow::Result;
use std::ops::Deref;
use super::node::Node;
use crate::{Overlay, Prefetch, TreeNode};
#[derive(Clone)]
pub struct BlobLayout {}
impl BlobLayout {
pub fn layout_blob_simple(prefetch: &Prefetch) -> Result<(Vec<TreeNode>, usize)> {
let (pre, non_pre) = prefetch.get_file_nodes();
let mut inodes: Vec<TreeNode> = pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let mut non_prefetch_inodes: Vec<TreeNode> = non_pre
.into_iter()
.filter(|x| Self::should_dump_node(x.borrow().deref()))
.collect();
let prefetch_entries = inodes.len();
inodes.append(&mut non_prefetch_inodes);
Ok((inodes, prefetch_entries))
}
#[inline]
fn should_dump_node(node: &Node) -> bool {
node.overlay == Overlay::UpperAddition || node.overlay == Overlay::UpperModification
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{core::node::NodeInfo, Tree};
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
#[test]
fn test_layout_blob_simple() {
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let mut node1 = Node::new(inode.clone(), NodeInfo::default(), 1);
node1.overlay = Overlay::UpperAddition;
let tree = Tree::new(node1);
let mut prefetch = Prefetch::default();
prefetch.insert(&tree.node, tree.node.borrow().deref());
let (inodes, prefetch_entries) = BlobLayout::layout_blob_simple(&prefetch).unwrap();
assert_eq!(inodes.len(), 1);
assert_eq!(prefetch_entries, 0);
}
}


@ -3,11 +3,14 @@
// SPDX-License-Identifier: Apache-2.0
pub(crate) mod blob;
pub(crate) mod blob_compact;
pub(crate) mod bootstrap;
pub(crate) mod chunk_dict;
pub(crate) mod context;
pub(crate) mod feature;
pub(crate) mod layout;
pub(crate) mod node;
pub(crate) mod overlay;
pub(crate) mod prefetch;
pub(crate) mod tree;
pub(crate) mod v5;
pub(crate) mod v6;

builder/src/core/node.rs (new file, 1275 lines; diff suppressed because it is too large)
builder/src/core/overlay.rs (new file, 361 lines)

@ -0,0 +1,361 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021-2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Execute file/directory whiteout rules when merging multiple RAFS filesystems
//! according to the OCI or Overlayfs specifications.
use std::ffi::{OsStr, OsString};
use std::fmt::{self, Display, Formatter};
use std::os::unix::ffi::OsStrExt;
use std::str::FromStr;
use anyhow::{anyhow, Error, Result};
use super::node::Node;
/// Prefix for OCI whiteout file.
pub const OCISPEC_WHITEOUT_PREFIX: &str = ".wh.";
/// Prefix for OCI whiteout opaque.
pub const OCISPEC_WHITEOUT_OPAQUE: &str = ".wh..wh..opq";
/// Extended attribute key for Overlayfs whiteout opaque.
pub const OVERLAYFS_WHITEOUT_OPAQUE: &str = "trusted.overlay.opaque";
/// RAFS filesystem overlay specifications.
///
/// When merging multiple RAFS filesystems into one, special rules are needed to white out
/// files/directories in lower/parent filesystems. The whiteout specification defined by the
/// OCI image specification and Linux Overlayfs are widely adopted, so both of them are supported
/// by RAFS filesystem.
///
/// # Overlayfs Whiteout
///
/// In order to support rm and rmdir without changing the lower filesystem, an overlay filesystem
/// needs to record in the upper filesystem that files have been removed. This is done using
/// whiteouts and opaque directories (non-directories are always opaque).
///
/// A whiteout is created as a character device with 0/0 device number. When a whiteout is found
/// in the upper level of a merged directory, any matching name in the lower level is ignored,
/// and the whiteout itself is also hidden.
///
/// A directory is made opaque by setting the xattr “trusted.overlay.opaque” to “y”. Where the upper
/// filesystem contains an opaque directory, any directory in the lower filesystem with the same
/// name is ignored.
///
/// # OCI Image Whiteout
/// - A whiteout file is an empty file with a special filename that signifies a path should be
/// deleted.
/// - A whiteout filename consists of the prefix .wh. plus the basename of the path to be deleted.
/// - As files prefixed with .wh. are special whiteout markers, it is not possible to create a
/// filesystem which has a file or directory with a name beginning with .wh..
/// - Once a whiteout is applied, the whiteout itself MUST also be hidden.
/// - Whiteout files MUST only apply to resources in lower/parent layers.
/// - Files that are present in the same layer as a whiteout file can only be hidden by whiteout
/// files in subsequent layers.
/// - In addition to expressing that a single entry should be removed from a lower layer, layers
/// may remove all of the children using an opaque whiteout entry.
/// - An opaque whiteout entry is a file with the name .wh..wh..opq indicating that all siblings
/// are hidden in the lower layer.
#[derive(Clone, Copy, PartialEq)]
pub enum WhiteoutSpec {
/// Overlay whiteout rules according to the OCI image specification.
///
/// https://github.com/opencontainers/image-spec/blob/master/layer.md#whiteouts
Oci,
/// Overlay whiteout rules according to the Linux Overlayfs specification.
///
/// "whiteouts and opaque directories" in https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
Overlayfs,
/// No whiteout, keep all content from lower/parent filesystems.
None,
}
impl fmt::Display for WhiteoutSpec {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
WhiteoutSpec::Oci => write!(f, "oci"),
WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
WhiteoutSpec::None => write!(f, "none"),
}
}
}
impl Default for WhiteoutSpec {
fn default() -> Self {
Self::Oci
}
}
impl FromStr for WhiteoutSpec {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s.to_lowercase().as_str() {
"oci" => Ok(Self::Oci),
"overlayfs" => Ok(Self::Overlayfs),
"none" => Ok(Self::None),
_ => Err(anyhow!("invalid whiteout spec")),
}
}
}
/// RAFS filesystem overlay operation types.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WhiteoutType {
OciOpaque,
OciRemoval,
OverlayFsOpaque,
OverlayFsRemoval,
}
impl WhiteoutType {
pub fn is_removal(&self) -> bool {
*self == WhiteoutType::OciRemoval || *self == WhiteoutType::OverlayFsRemoval
}
}
/// RAFS filesystem node overlay state.
#[allow(dead_code)]
#[derive(Clone, Debug, PartialEq)]
pub enum Overlay {
Lower,
UpperAddition,
UpperModification,
}
impl Overlay {
pub fn is_lower_layer(&self) -> bool {
self == &Overlay::Lower
}
}
impl Display for Overlay {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
match self {
Overlay::Lower => write!(f, "LOWER"),
Overlay::UpperAddition => write!(f, "ADDED"),
Overlay::UpperModification => write!(f, "MODIFIED"),
}
}
}
impl Node {
/// Check whether the inode is a special overlayfs whiteout file.
pub fn is_overlayfs_whiteout(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs {
return false;
}
self.inode.is_chrdev()
&& nydus_utils::compact::major_dev(self.info.rdev) == 0
&& nydus_utils::compact::minor_dev(self.info.rdev) == 0
}
/// Check whether the inode (directory) is an overlayfs whiteout opaque directory.
pub fn is_overlayfs_opaque(&self, spec: WhiteoutSpec) -> bool {
if spec != WhiteoutSpec::Overlayfs || !self.is_dir() {
return false;
}
// A directory is made opaque by setting the xattr "trusted.overlay.opaque" to "y".
if let Some(v) = self
.info
.xattrs
.get(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE))
{
if let Ok(v) = std::str::from_utf8(v.as_slice()) {
return v == "y";
}
}
false
}
/// Get whiteout type to process the inode.
pub fn whiteout_type(&self, spec: WhiteoutSpec) -> Option<WhiteoutType> {
if self.overlay == Overlay::Lower {
return None;
}
match spec {
WhiteoutSpec::Oci => {
if let Some(name) = self.name().to_str() {
if name == OCISPEC_WHITEOUT_OPAQUE {
return Some(WhiteoutType::OciOpaque);
} else if name.starts_with(OCISPEC_WHITEOUT_PREFIX) {
return Some(WhiteoutType::OciRemoval);
}
}
}
WhiteoutSpec::Overlayfs => {
if self.is_overlayfs_whiteout(spec) {
return Some(WhiteoutType::OverlayFsRemoval);
} else if self.is_overlayfs_opaque(spec) {
return Some(WhiteoutType::OverlayFsOpaque);
}
}
WhiteoutSpec::None => {
return None;
}
}
None
}
/// Get original filename from a whiteout filename.
pub fn origin_name(&self, t: WhiteoutType) -> Option<&OsStr> {
if let Some(name) = self.name().to_str() {
if t == WhiteoutType::OciRemoval {
// The whiteout filename is the basename of the path to be deleted, prefixed with ".wh.".
return Some(OsStr::from_bytes(
name[OCISPEC_WHITEOUT_PREFIX.len()..].as_bytes(),
));
} else if t == WhiteoutType::OverlayFsRemoval {
// the whiteout file has the same name as the file to be deleted.
return Some(name.as_ref());
}
}
None
}
}
#[cfg(test)]
mod tests {
use nydus_rafs::metadata::{inode::InodeWrapper, layout::v5::RafsV5Inode};
use crate::core::node::NodeInfo;
use super::*;
#[test]
fn test_white_spec_from_str() {
let spec = WhiteoutSpec::default();
assert!(matches!(spec, WhiteoutSpec::Oci));
assert!(WhiteoutSpec::from_str("oci").is_ok());
assert!(WhiteoutSpec::from_str("overlayfs").is_ok());
assert!(WhiteoutSpec::from_str("none").is_ok());
assert!(WhiteoutSpec::from_str("foo").is_err());
}
#[test]
fn test_white_type_removal_check() {
let t1 = WhiteoutType::OciOpaque;
let t2 = WhiteoutType::OciRemoval;
let t3 = WhiteoutType::OverlayFsOpaque;
let t4 = WhiteoutType::OverlayFsRemoval;
assert!(!t1.is_removal());
assert!(t2.is_removal());
assert!(!t3.is_removal());
assert!(t4.is_removal());
}
#[test]
fn test_overlay_low_layer_check() {
let t1 = Overlay::Lower;
let t2 = Overlay::UpperAddition;
let t3 = Overlay::UpperModification;
assert!(t1.is_lower_layer());
assert!(!t2.is_lower_layer());
assert!(!t3.is_lower_layer());
}
#[test]
fn test_node() {
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, NodeInfo::default(), 0);
assert!(!node.is_overlayfs_whiteout(WhiteoutSpec::None));
assert!(node.is_overlayfs_whiteout(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsRemoval
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info: NodeInfo = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
assert_eq!(
node.whiteout_type(WhiteoutSpec::Overlayfs).unwrap(),
WhiteoutType::OverlayFsOpaque
);
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "y".into())
.is_ok());
inode.set_mode(libc::S_IFCHR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let mut inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
assert!(info
.xattrs
.add(OVERLAYFS_WHITEOUT_OPAQUE.into(), "n".into())
.is_ok());
inode.set_mode(libc::S_IFDIR as u32);
let node = Node::new(inode, info, 0);
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::None));
assert!(!node.is_overlayfs_opaque(WhiteoutSpec::Overlayfs));
let inode = InodeWrapper::V5(RafsV5Inode::default());
let info = NodeInfo::default();
let mut node = Node::new(inode, info, 0);
assert_eq!(node.whiteout_type(WhiteoutSpec::None), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Oci), None);
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
node.overlay = Overlay::Lower;
assert_eq!(node.whiteout_type(WhiteoutSpec::Overlayfs), None);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
let name = OCISPEC_WHITEOUT_PREFIX.to_string() + "foo";
info.target_vec.push(name.clone().into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciRemoval
);
assert_eq!(node.origin_name(WhiteoutType::OciRemoval).unwrap(), "foo");
assert_eq!(node.origin_name(WhiteoutType::OciOpaque), None);
assert_eq!(
node.origin_name(WhiteoutType::OverlayFsRemoval).unwrap(),
OsStr::new(&name)
);
let inode = InodeWrapper::V5(RafsV5Inode::default());
let mut info = NodeInfo::default();
info.target_vec.push(OCISPEC_WHITEOUT_OPAQUE.into());
let node = Node::new(inode, info, 0);
assert_eq!(
node.whiteout_type(WhiteoutSpec::Oci).unwrap(),
WhiteoutType::OciOpaque
);
}
}


@ -0,0 +1,391 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::path::PathBuf;
use std::str::FromStr;
use anyhow::{anyhow, Context, Error, Result};
use indexmap::IndexMap;
use nydus_rafs::metadata::layout::v5::RafsV5PrefetchTable;
use nydus_rafs::metadata::layout::v6::{calculate_nid, RafsV6PrefetchTable};
use super::node::Node;
use crate::core::tree::TreeNode;
/// Filesystem data prefetch policy.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum PrefetchPolicy {
None,
/// Prefetch will be issued from the Fs layer, which leverages inode/chunkinfo to prefetch data
/// from the blob no matter where it resides (OSS/Localfs). It tends to cache the prefetched
/// data into the blobcache (if one exists) and is more flexible. With this policy applied,
/// the image builder currently puts the prefetch files' data into a contiguous region within
/// the blob, which behaves very similarly to the `Blob` policy.
Fs,
/// Prefetch will be issued directly from backend/blob layer
Blob,
}
impl Default for PrefetchPolicy {
fn default() -> Self {
Self::None
}
}
impl FromStr for PrefetchPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
match s {
"none" => Ok(Self::None),
"fs" => Ok(Self::Fs),
"blob" => Ok(Self::Blob),
_ => Err(anyhow!("invalid prefetch policy")),
}
}
}
/// Gather prefetch patterns from STDIN line by line.
///
/// Input format:
/// printf "/relative/path/to/rootfs/1\n/relative/path/to/rootfs/2"
///
/// It does not guarantee that a specified path exists in the local filesystem, because the path
/// may only exist in parent images/layers.
fn get_patterns() -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let stdin = std::io::stdin();
let mut patterns = Vec::new();
loop {
let mut file = String::new();
let size = stdin
.read_line(&mut file)
.context("failed to read prefetch pattern")?;
if size == 0 {
return generate_patterns(patterns);
}
patterns.push(file);
}
}
fn generate_patterns(input: Vec<String>) -> Result<IndexMap<PathBuf, Option<TreeNode>>> {
let mut patterns = IndexMap::new();
for file in &input {
let file_trimmed: PathBuf = file.trim().into();
// Sanity check for the list format.
if !file_trimmed.is_absolute() {
warn!(
"Illegal file path {} specified, should be absolute path",
file
);
continue;
}
let mut current_path = file_trimmed.clone();
let mut skip = patterns.contains_key(&current_path);
while !skip && current_path.pop() {
if patterns.contains_key(&current_path) {
skip = true;
break;
}
}
if skip {
warn!(
"prefetch pattern {} is covered by previous pattern and thus omitted",
file
);
} else {
debug!(
"prefetch pattern: {}, trimmed file name {:?}",
file, file_trimmed
);
patterns.insert(file_trimmed, None);
}
}
Ok(patterns)
}
/// Manage filesystem data prefetch configuration and state for builder.
#[derive(Default, Clone)]
pub struct Prefetch {
pub policy: PrefetchPolicy,
pub disabled: bool,
// Patterns to generate the prefetch inode array, which will be put into the prefetch array
// in the RAFS bootstrap. It may reference directory or file inodes.
patterns: IndexMap<PathBuf, Option<TreeNode>>,
// File list to help optimize the layout of data blobs.
// Files from this list may be put at the head of the data blob for better prefetch performance.
// The index of the matched prefetch pattern is stored in the `usize`,
// which helps to sort the prefetch files in the final layout.
// It only stores regular files.
files_prefetch: Vec<(TreeNode, usize)>,
// It stores all files that are not in `files_prefetch`,
// including regular files, dirs, symlinks, etc.,
// in the same order as the BFS traversal of the file tree.
files_non_prefetch: Vec<TreeNode>,
}
impl Prefetch {
/// Create a new instance of [Prefetch].
pub fn new(policy: PrefetchPolicy) -> Result<Self> {
let patterns = if policy != PrefetchPolicy::None {
get_patterns().context("failed to get prefetch patterns")?
} else {
IndexMap::new()
};
Ok(Self {
policy,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10000),
files_non_prefetch: Vec::with_capacity(10000),
})
}
/// Insert a node into the prefetch vector if it matches the prefetch rules, recording the
/// index of the matched prefetch pattern; otherwise insert it into the non-prefetch vector.
pub fn insert(&mut self, obj: &TreeNode, node: &Node) {
// The newly created root inode of this RAFS has zero size.
if self.policy == PrefetchPolicy::None
|| self.disabled
|| (node.inode.is_reg() && node.inode.size() == 0)
{
self.files_non_prefetch.push(obj.clone());
return;
}
let mut path = node.target().clone();
let mut exact_match = true;
loop {
if let Some((idx, _, v)) = self.patterns.get_full_mut(&path) {
if exact_match {
*v = Some(obj.clone());
}
if node.is_reg() {
self.files_prefetch.push((obj.clone(), idx));
} else {
self.files_non_prefetch.push(obj.clone());
}
return;
}
// If no exact match, try to match parent dir until root.
if !path.pop() {
self.files_non_prefetch.push(obj.clone());
return;
}
exact_match = false;
}
}
/// Get the node vectors of files in the prefetch list and the non-prefetch list.
/// The order of prefetch files is the same as the order of prefetch patterns.
/// The order of non-prefetch files follows the BFS traversal order of the file tree.
pub fn get_file_nodes(&self) -> (Vec<TreeNode>, Vec<TreeNode>) {
let mut p_files = self.files_prefetch.clone();
p_files.sort_by_key(|k| k.1);
let p_files = p_files.into_iter().map(|(s, _)| s).collect();
(p_files, self.files_non_prefetch.clone())
}
/// Get the number of `valid` prefetch rules.
pub fn fs_prefetch_rule_count(&self) -> u32 {
if self.policy == PrefetchPolicy::Fs {
self.patterns.values().filter(|v| v.is_some()).count() as u32
} else {
0
}
}
/// Generate filesystem layer prefetch list for RAFS v5.
pub fn get_v5_prefetch_table(&mut self) -> Option<RafsV5PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV5PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
assert!(node.inode.ino() < u32::MAX as u64);
prefetch_table.add_entry(node.inode.ino() as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Generate filesystem layer prefetch list for RAFS v6.
pub fn get_v6_prefetch_table(&mut self, meta_addr: u64) -> Option<RafsV6PrefetchTable> {
if self.policy == PrefetchPolicy::Fs {
let mut prefetch_table = RafsV6PrefetchTable::new();
for i in self.patterns.values().filter_map(|v| v.clone()) {
let node = i.borrow_mut();
let ino = node.inode.ino();
debug_assert!(ino > 0);
let nid = calculate_nid(node.v6_offset, meta_addr);
// A 32-bit nid can address a 128GB bootstrap, which is large enough,
// so the cast below is safe.
assert!(nid < u32::MAX as u64);
trace!(
"v6 prefetch table: map node index {} to offset {} nid {} path {:?} name {:?}",
ino,
node.v6_offset,
nid,
node.path(),
node.name()
);
prefetch_table.add_entry(nid as u32);
}
Some(prefetch_table)
} else {
None
}
}
/// Disable filesystem data prefetch.
pub fn disable(&mut self) {
self.disabled = true;
}
/// Reset to initialization state.
pub fn clear(&mut self) {
self.disabled = false;
self.patterns.clear();
self.files_prefetch.clear();
self.files_non_prefetch.clear();
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::core::node::NodeInfo;
use nydus_rafs::metadata::{inode::InodeWrapper, RafsVersion};
use std::cell::RefCell;
#[test]
fn test_generate_pattern() {
let input = vec![
"/a/b".to_string(),
"/a/b/c".to_string(),
"/a/b/d".to_string(),
"/a/b/d/e".to_string(),
"/f".to_string(),
"/h/i".to_string(),
];
let patterns = generate_patterns(input).unwrap();
assert_eq!(patterns.len(), 3);
assert!(patterns.contains_key(&PathBuf::from("/a/b")));
assert!(patterns.contains_key(&PathBuf::from("/f")));
assert!(patterns.contains_key(&PathBuf::from("/h/i")));
assert!(!patterns.contains_key(&PathBuf::from("/")));
assert!(!patterns.contains_key(&PathBuf::from("/a")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/c")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d")));
assert!(!patterns.contains_key(&PathBuf::from("/a/b/d/e")));
assert!(!patterns.contains_key(&PathBuf::from("/k")));
}
#[test]
fn test_prefetch_policy() {
let policy = PrefetchPolicy::from_str("fs").unwrap();
assert_eq!(policy, PrefetchPolicy::Fs);
let policy = PrefetchPolicy::from_str("blob").unwrap();
assert_eq!(policy, PrefetchPolicy::Blob);
let policy = PrefetchPolicy::from_str("none").unwrap();
assert_eq!(policy, PrefetchPolicy::None);
PrefetchPolicy::from_str("").unwrap_err();
PrefetchPolicy::from_str("invalid").unwrap_err();
}
#[test]
fn test_prefetch() {
let input = vec![
"/a/b".to_string(),
"/f".to_string(),
"/h/i".to_string(),
"/k".to_string(),
];
let patterns = generate_patterns(input).unwrap();
let mut prefetch = Prefetch {
policy: PrefetchPolicy::Fs,
disabled: false,
patterns,
files_prefetch: Vec::with_capacity(10),
files_non_prefetch: Vec::with_capacity(10),
};
let mut inode = InodeWrapper::new(RafsVersion::V6);
inode.set_mode(0o755 | libc::S_IFREG as u32);
inode.set_size(1);
let info = NodeInfo::default();
let mut info1 = info.clone();
info1.target = PathBuf::from("/f");
let node1 = Node::new(inode.clone(), info1, 1);
let node1 = TreeNode::new(RefCell::from(node1));
prefetch.insert(&node1, &node1.borrow());
let inode2 = inode.clone();
let mut info2 = info.clone();
info2.target = PathBuf::from("/a/b");
let node2 = Node::new(inode2, info2, 1);
let node2 = TreeNode::new(RefCell::from(node2));
prefetch.insert(&node2, &node2.borrow());
let inode3 = inode.clone();
let mut info3 = info.clone();
info3.target = PathBuf::from("/h/i/j");
let node3 = Node::new(inode3, info3, 1);
let node3 = TreeNode::new(RefCell::from(node3));
prefetch.insert(&node3, &node3.borrow());
let inode4 = inode.clone();
let mut info4 = info.clone();
info4.target = PathBuf::from("/z");
let node4 = Node::new(inode4, info4, 1);
let node4 = TreeNode::new(RefCell::from(node4));
prefetch.insert(&node4, &node4.borrow());
let inode5 = inode.clone();
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_size(0);
let mut info5 = info;
info5.target = PathBuf::from("/a/b/d");
let node5 = Node::new(inode5, info5, 1);
let node5 = TreeNode::new(RefCell::from(node5));
prefetch.insert(&node5, &node5.borrow());
// node1, node2
assert_eq!(prefetch.fs_prefetch_rule_count(), 2);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 4);
assert_eq!(non_pre.len(), 1);
let pre_str: Vec<String> = pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(pre_str, vec!["/a/b", "/a/b/d", "/f", "/h/i/j"]);
let non_pre_str: Vec<String> = non_pre
.iter()
.map(|n| n.borrow().target().to_str().unwrap().to_owned())
.collect();
assert_eq!(non_pre_str, vec!["/z"]);
prefetch.clear();
assert_eq!(prefetch.fs_prefetch_rule_count(), 0);
let (pre, non_pre) = prefetch.get_file_nodes();
assert_eq!(pre.len(), 0);
assert_eq!(non_pre.len(), 0);
}
}

builder/src/core/tree.rs (new file, 533 lines)

@ -0,0 +1,533 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright 2023 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! An in-memory tree structure to maintain information for filesystem metadata.
//!
//! Steps to build the first layer for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Traverse the upper tree (FileSystemTree) to dump bootstrap and data blobs.
//!
//! Steps to build the second and following on layers for a Rafs image:
//! - Build the upper tree (FileSystemTree) from the source directory.
//! - Load the lower tree (MetadataTree) from a metadata blob.
//! - Merge the final tree (OverlayTree) by applying the upper tree (FileSystemTree) to the
//! lower tree (MetadataTree).
//! - Traverse the merged tree (OverlayTree) to dump bootstrap and data blobs.
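//
// Editor's sketch (not part of the original source): a simplified view of how these
// steps fit together; `build_tree_from_directory` and `dump` are hypothetical
// placeholders for the builder's real entry points:
//
//     let mut lower = Tree::from_bootstrap(&parent_super, &mut chunk_dict)?; // MetadataTree
//     let upper = build_tree_from_directory(&build_ctx)?;                    // FileSystemTree
//     lower.merge_overaly(&build_ctx, upper)?;                               // OverlayTree
//     lower.walk_bfs(true, &mut |t| dump(t))?;                               // dump bootstrap/blobs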
use std::cell::{RefCell, RefMut};
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
use anyhow::{bail, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::{bytes_to_os_str, RafsXAttrs};
use nydus_rafs::metadata::{Inode, RafsInodeExt, RafsSuper};
use nydus_utils::{lazy_drop, root_tracer, timing_tracer};
use super::node::{ChunkSource, Node, NodeChunk, NodeInfo};
use super::overlay::{Overlay, WhiteoutType};
use crate::core::overlay::OVERLAYFS_WHITEOUT_OPAQUE;
use crate::{BuildContext, ChunkDict};
/// Type alias for tree internal node.
pub type TreeNode = Rc<RefCell<Node>>;
/// An in-memory tree structure to maintain information and topology of filesystem nodes.
#[derive(Clone)]
pub struct Tree {
/// Filesystem node.
pub node: TreeNode,
/// Cached base name.
name: Vec<u8>,
/// Children tree nodes.
pub children: Vec<Tree>,
}
impl Tree {
/// Create a new instance of `Tree` from a filesystem node.
pub fn new(node: Node) -> Self {
let name = node.name().as_bytes().to_vec();
Tree {
node: Rc::new(RefCell::new(node)),
name,
children: Vec::new(),
}
}
/// Load a `Tree` from a bootstrap file, optionally caching chunk information.
pub fn from_bootstrap<T: ChunkDict>(rs: &RafsSuper, chunk_dict: &mut T) -> Result<Self> {
let tree_builder = MetadataTreeBuilder::new(rs);
let root_ino = rs.superblock.root_ino();
let root_inode = rs.get_extended_inode(root_ino, true)?;
let root_node = MetadataTreeBuilder::parse_node(rs, root_inode, PathBuf::from("/"))?;
let mut tree = Tree::new(root_node);
tree.children = timing_tracer!(
{ tree_builder.load_children(root_ino, Option::<PathBuf>::None, chunk_dict, true,) },
"load_tree_from_bootstrap"
)?;
Ok(tree)
}
/// Get name of the tree node.
pub fn name(&self) -> &[u8] {
&self.name
}
/// Set `Node` associated with the tree node.
pub fn set_node(&mut self, node: Node) {
self.node.replace(node);
}
/// Get mutably borrowed value to access the associated `Node` object.
pub fn borrow_mut_node(&self) -> RefMut<'_, Node> {
self.node.as_ref().borrow_mut()
}
/// Walk all nodes in DFS mode.
pub fn walk_dfs<F1, F2>(&self, pre: &mut F1, post: &mut F2) -> Result<()>
where
F1: FnMut(&Tree) -> Result<()>,
F2: FnMut(&Tree) -> Result<()>,
{
pre(self)?;
for child in &self.children {
child.walk_dfs(pre, post)?;
}
post(self)?;
Ok(())
}
/// Walk all nodes in pre DFS mode.
pub fn walk_dfs_pre<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(cb, &mut |_t| Ok(()))
}
/// Walk all nodes in post DFS mode.
pub fn walk_dfs_post<F>(&self, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
self.walk_dfs(&mut |_t| Ok(()), cb)
}
/// Walk the tree in BFS mode.
pub fn walk_bfs<F>(&self, handle_self: bool, cb: &mut F) -> Result<()>
where
F: FnMut(&Tree) -> Result<()>,
{
if handle_self {
cb(self)?;
}
let mut dirs = Vec::with_capacity(32);
for child in &self.children {
cb(child)?;
if child.borrow_mut_node().is_dir() {
dirs.push(child);
}
}
for dir in dirs {
dir.walk_bfs(false, cb)?;
}
Ok(())
}
/// Insert a new child node into the tree.
pub fn insert_child(&mut self, child: Tree) {
if let Err(idx) = self
.children
.binary_search_by_key(&&child.name, |n| &n.name)
{
self.children.insert(idx, child);
}
}
/// Get index of child node with specified `name`.
pub fn get_child_idx(&self, name: &[u8]) -> Option<usize> {
self.children.binary_search_by_key(&name, |n| &n.name).ok()
}
/// Get the tree node corresponding to the path.
pub fn get_node(&self, path: &Path) -> Option<&Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
for name in &target_vec[1..] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &tree.children[idx],
None => return None,
}
}
Some(tree)
}
/// Get the mutable tree node corresponding to the path.
pub fn get_node_mut(&mut self, path: &Path) -> Option<&mut Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
let last_idx = target_vec.len() - 1;
for name in &target_vec[1..last_idx] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &mut tree.children[idx],
None => return None,
}
}
if let Some(last_name) = target_vec.last() {
match tree.get_child_idx(last_name.as_bytes()) {
Some(idx) => Some(&mut tree.children[idx]),
None => None,
}
} else {
Some(tree)
}
}
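// Editor's sketch (not part of the original source): path lookup walks the child-name
// components below the root; e.g., assuming the tree contains "/etc/hosts":
//
//     if let Some(sub) = tree.get_node(Path::new("/etc/hosts")) {
//         assert_eq!(sub.name(), "hosts".as_bytes());
//     }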
/// Merge the upper layer tree into the lower layer tree, applying whiteout rules.
pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> {
assert_eq!(self.name, "/".as_bytes());
assert_eq!(upper.name, "/".as_bytes());
// Handle the root node.
upper.borrow_mut_node().overlay = Overlay::UpperModification;
self.node = upper.node.clone();
self.merge_children(ctx, &upper)?;
lazy_drop(upper);
Ok(())
}
fn merge_children(&mut self, ctx: &BuildContext, upper: &Tree) -> Result<()> {
// Handle whiteout nodes in the first round, and handle other nodes in the second round.
let mut modified = Vec::with_capacity(upper.children.len());
for u in upper.children.iter() {
let mut u_node = u.borrow_mut_node();
match u_node.whiteout_type(ctx.whiteout_spec) {
Some(WhiteoutType::OciRemoval) => {
if let Some(origin_name) = u_node.origin_name(WhiteoutType::OciRemoval) {
if let Some(idx) = self.get_child_idx(origin_name.as_bytes()) {
self.children.remove(idx);
}
}
}
Some(WhiteoutType::OciOpaque) => {
self.children.clear();
}
Some(WhiteoutType::OverlayFsRemoval) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children.remove(idx);
}
}
Some(WhiteoutType::OverlayFsOpaque) => {
if let Some(idx) = self.get_child_idx(&u.name) {
self.children[idx].children.clear();
}
u_node.remove_xattr(&OsString::from(OVERLAYFS_WHITEOUT_OPAQUE));
modified.push(u);
}
None => modified.push(u),
}
}
let mut dirs = Vec::new();
for u in modified {
let mut u_node = u.borrow_mut_node();
if let Some(idx) = self.get_child_idx(&u.name) {
u_node.overlay = Overlay::UpperModification;
self.children[idx].node = u.node.clone();
} else {
u_node.overlay = Overlay::UpperAddition;
self.insert_child(Tree {
node: u.node.clone(),
name: u.name.clone(),
children: vec![],
});
}
if u_node.is_dir() {
dirs.push(u);
}
}
for dir in dirs {
if let Some(idx) = self.get_child_idx(&dir.name) {
self.children[idx].merge_children(ctx, dir)?;
} else {
bail!("builder: can not find directory in merged tree");
}
}
Ok(())
}
}
pub struct MetadataTreeBuilder<'a> {
rs: &'a RafsSuper,
}
impl<'a> MetadataTreeBuilder<'a> {
fn new(rs: &'a RafsSuper) -> Self {
Self { rs }
}
/// Build the node tree by loading a bootstrap file.
fn load_children<T: ChunkDict, P: AsRef<Path>>(
&self,
ino: Inode,
parent: Option<P>,
chunk_dict: &mut T,
validate_digest: bool,
) -> Result<Vec<Tree>> {
let inode = self.rs.get_extended_inode(ino, validate_digest)?;
if !inode.is_dir() {
return Ok(Vec::new());
}
let parent_path = if let Some(parent) = parent {
parent.as_ref().join(inode.name())
} else {
PathBuf::from("/")
};
let blobs = self.rs.superblock.get_blob_infos();
let child_count = inode.get_child_count();
let mut children = Vec::with_capacity(child_count as usize);
for idx in 0..child_count {
let child = inode.get_child_by_index(idx)?;
let child_path = parent_path.join(child.name());
let child = Self::parse_node(self.rs, child.clone(), child_path)?;
if child.is_reg() {
for chunk in &child.chunks {
let blob_idx = chunk.inner.blob_index();
if let Some(blob) = blobs.get(blob_idx as usize) {
chunk_dict.add_chunk(chunk.inner.clone(), blob.digester());
}
}
}
let child = Tree::new(child);
children.push(child);
}
children.sort_unstable_by(|a, b| a.name.cmp(&b.name));
for child in children.iter_mut() {
let child_node = child.borrow_mut_node();
if child_node.is_dir() {
let child_ino = child_node.inode.ino();
drop(child_node);
child.children =
self.load_children(child_ino, Some(&parent_path), chunk_dict, validate_digest)?;
}
}
Ok(children)
}
/// Convert a `RafsInode` object to an in-memory `Node` object.
pub fn parse_node(rs: &RafsSuper, inode: Arc<dyn RafsInodeExt>, path: PathBuf) -> Result<Node> {
let chunks = if inode.is_reg() {
let chunk_count = inode.get_chunk_count();
let mut chunks = Vec::with_capacity(chunk_count as usize);
for i in 0..chunk_count {
let cki = inode.get_chunk_info(i)?;
chunks.push(NodeChunk {
source: ChunkSource::Parent,
inner: Arc::new(ChunkWrapper::from_chunk_info(cki)),
});
}
chunks
} else {
Vec::new()
};
let symlink = if inode.is_symlink() {
Some(inode.get_symlink()?)
} else {
None
};
let mut xattrs = RafsXAttrs::new();
for name in inode.get_xattrs()? {
let name = bytes_to_os_str(&name);
let value = inode.get_xattr(name)?;
xattrs.add(name.to_os_string(), value.unwrap_or_default())?;
}
// Nodes loaded from bootstrap will only be used as `Overlay::Lower`, so make `dev` invalid
// to avoid breaking hardlink detecting logic.
let src_dev = u64::MAX;
let rdev = inode.rdev() as u64;
let inode = InodeWrapper::from_inode_info(inode.clone());
let source = PathBuf::from("/");
let target = Node::generate_target(&path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: rs.meta.explicit_uidgid(),
src_ino: inode.ino(),
src_dev,
rdev,
path,
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
Ok(Node {
info: Arc::new(info),
index: 0,
layer_idx: 0,
overlay: Overlay::Lower,
inode,
chunks,
v6_offset: 0,
v6_dirents: Vec::new(),
v6_datalayout: 0,
v6_compact_inode: false,
v6_dirents_offset: 0,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::RAFS_DEFAULT_CHUNK_SIZE;
use vmm_sys_util::tempdir::TempDir;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_set_lock_node() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
assert_eq!(tree.name, tmpfile.as_path().file_name().unwrap().as_bytes());
let node1 = tree.borrow_mut_node();
drop(node1);
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
tree.set_node(node);
let node2 = tree.borrow_mut_node();
assert_eq!(node2.name(), tmpfile.as_path().file_name().unwrap());
}
#[test]
fn test_walk_tree() {
let tmpdir = TempDir::new().unwrap();
let tmpfile = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let mut tree = Tree::new(node);
let tmpfile2 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree2 = Tree::new(node);
tree.insert_child(tree2);
let tmpfile3 = TempFile::new_in(tmpdir.as_path()).unwrap();
let node = Node::from_fs_object(
RafsVersion::V6,
tmpdir.as_path().to_path_buf(),
tmpfile3.as_path().to_path_buf(),
Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true,
false,
)
.unwrap();
let tree3 = Tree::new(node);
tree.insert_child(tree3);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 3);
let mut count = 0;
tree.walk_bfs(false, &mut |_n| -> Result<()> {
count += 1;
Ok(())
})
.unwrap();
assert_eq!(count, 2);
let mut count = 0;
tree.walk_bfs(true, &mut |_n| -> Result<()> {
count += 1;
bail!("test")
})
.unwrap_err();
assert_eq!(count, 1);
let idx = tree
.get_child_idx(tmpfile2.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
let idx = tree
.get_child_idx(tmpfile3.as_path().file_name().unwrap().as_bytes())
.unwrap();
assert!(idx == 0 || idx == 1);
}
}

266
builder/src/core/v5.rs Normal file

@@ -0,0 +1,266 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::convert::TryFrom;
use std::mem::size_of;
use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::v5::{
RafsV5BlobTable, RafsV5ChunkInfo, RafsV5InodeTable, RafsV5InodeWrapper, RafsV5SuperBlock,
RafsV5XAttrsTable,
};
use nydus_rafs::metadata::{RafsStore, RafsVersion};
use nydus_rafs::RafsIoWrite;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{div_round_up, root_tracer, timing_tracer, try_round_up_4k};
use super::node::Node;
use crate::{Bootstrap, BootstrapContext, BuildContext, Tree};
// Filesystems may use different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable build, instead of reusing the value
// provided by the source filesystem, we use our own algorithm to calculate a stable `i_size` for
// directory entries.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we don't
// have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5 directory
// inodes.
const RAFS_V5_VIRTUAL_ENTRY_SIZE: u64 = 8;
impl Node {
/// Dump RAFS v5 inode metadata to meta blob.
pub fn dump_bootstrap_v5(
&self,
ctx: &mut BuildContext,
f_bootstrap: &mut dyn RafsIoWrite,
) -> Result<()> {
trace!("[{}]\t{}", self.overlay, self);
if let InodeWrapper::V5(raw_inode) = &self.inode {
// Dump inode info
let name = self.name();
let inode = RafsV5InodeWrapper {
name,
symlink: self.info.symlink.as_deref(),
inode: raw_inode,
};
inode
.store(f_bootstrap)
.context("failed to dump inode to bootstrap")?;
// Dump inode xattr
if !self.info.xattrs.is_empty() {
self.info
.xattrs
.store_v5(f_bootstrap)
.context("failed to dump xattr to bootstrap")?;
ctx.has_xattr = true;
}
// Dump chunk info
if self.is_reg() && self.inode.child_count() as usize != self.chunks.len() {
bail!("invalid chunk count {}: {}", self.chunks.len(), self);
}
for chunk in &self.chunks {
chunk
.inner
.store(f_bootstrap)
.context("failed to dump chunk info to bootstrap")?;
trace!("\t\tchunk: {} compressor {}", chunk, ctx.compressor,);
}
Ok(())
} else {
bail!("dump_bootstrap_v5() encounters non-v5-inode");
}
}
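// Illustrative sketch (added for clarity, not part of the original source): for a regular
// file with two data chunks, `dump_bootstrap_v5()` above emits records in this order:
//
//   RafsV5InodeWrapper   -> raw inode + file name (+ symlink target, if any)
//   RafsV5XAttrsTable    -> only when the inode carries xattrs
//   RafsV5ChunkInfo x 2  -> one record per data chunk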
// Filesystems may use different algorithms to calculate `i_size` for directory entries,
// which may break "repeatable build". To support repeatable build, instead of reusing the value
// provided by the source filesystem, we use our own algorithm to calculate a stable `i_size`
// for directory entries.
//
// Rafs v6 already has its own algorithm to calculate `i_size` for directory entries, but we
// don't have directory entries for Rafs v5. So let's generate a pseudo `i_size` for Rafs v5
// directory inodes.
pub fn v5_set_dir_size(&mut self, fs_version: RafsVersion, children: &[Tree]) {
if !self.is_dir() || !fs_version.is_v5() {
return;
}
let mut d_size = 0u64;
for child in children.iter() {
d_size += child.borrow_mut_node().inode.name_size() as u64 + RAFS_V5_VIRTUAL_ENTRY_SIZE;
}
if d_size == 0 {
self.inode.set_size(4096);
} else {
// Safe to unwrap() because we have u32 for child count.
self.inode.set_size(try_round_up_4k(d_size).unwrap());
}
self.v5_set_inode_blocks();
}
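// Worked example (illustrative, not part of the original source): a directory with two
// children named "bin" (3 bytes) and "lib64" (5 bytes) gets
//   d_size = (3 + 8) + (5 + 8) = 24,
// which `try_round_up_4k()` rounds up to 4096, so the pseudo `i_size` is 4096. An empty
// directory is assigned 4096 directly.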
/// Calculate and set `i_blocks` for inode.
///
/// In order to support repeatable build, we can't reuse `i_blocks` from source filesystems,
/// so let's calculate it ourselves for a stable `i_blocks`.
///
/// Normal filesystems include the space occupied by xattrs in the directory size,
/// so let's follow that behavior.
pub fn v5_set_inode_blocks(&mut self) {
// Set inode blocks for RAFS v5 inode, v6 will calculate it at runtime.
if let InodeWrapper::V5(_) = self.inode {
self.inode.set_blocks(div_round_up(
self.inode.size() + self.info.xattrs.aligned_size_v5() as u64,
512,
));
}
}
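// Worked example (illustrative, not part of the original source): with `i_size` = 4096 and
// a hypothetical 40 bytes of v5-aligned xattrs, `i_blocks` = div_round_up(4096 + 40, 512) = 9.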
}
impl Bootstrap {
/// Calculate inode digest for directory.
fn v5_digest_node(&self, ctx: &mut BuildContext, tree: &Tree) {
let mut node = tree.borrow_mut_node();
// We have set digest for non-directory inode in the previous dump_blob workflow.
if node.is_dir() {
let mut inode_hasher = RafsDigest::hasher(ctx.digester);
for child in tree.children.iter() {
let child = child.borrow_mut_node();
inode_hasher.digest_update(child.inode.digest().as_ref());
}
node.inode.set_digest(inode_hasher.digest_finalize());
}
}
/// Dump the RAFS v5 bootstrap (superblock, inode table, prefetch table, blob tables, inodes and chunks) to the bootstrap writer.
pub(crate) fn v5_dump(
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
blob_table: &RafsV5BlobTable,
) -> Result<()> {
// Set inode digest, use reverse iteration order to reduce repeated digest calculations.
self.tree.walk_dfs_post(&mut |t| {
self.v5_digest_node(ctx, t);
Ok(())
})?;
// Set inode table
let super_block_size = size_of::<RafsV5SuperBlock>();
let inode_table_entries = bootstrap_ctx.get_next_ino() as u32 - 1;
let mut inode_table = RafsV5InodeTable::new(inode_table_entries as usize);
let inode_table_size = inode_table.size();
// Set prefetch table
let (prefetch_table_size, prefetch_table_entries) =
if let Some(prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
(prefetch_table.size(), prefetch_table.len() as u32)
} else {
(0, 0u32)
};
// Set blob table, use sha256 string (length 64) as blob id if not specified
let prefetch_table_offset = super_block_size + inode_table_size;
let blob_table_offset = prefetch_table_offset + prefetch_table_size;
let blob_table_size = blob_table.size();
let extended_blob_table_offset = blob_table_offset + blob_table_size;
let extended_blob_table_size = blob_table.extended.size();
let extended_blob_table_entries = blob_table.extended.entries();
// Set super block
let mut super_block = RafsV5SuperBlock::new();
let inodes_count = bootstrap_ctx.inode_map.len() as u64;
super_block.set_inodes_count(inodes_count);
super_block.set_inode_table_offset(super_block_size as u64);
super_block.set_inode_table_entries(inode_table_entries);
super_block.set_blob_table_offset(blob_table_offset as u64);
super_block.set_blob_table_size(blob_table_size as u32);
super_block.set_extended_blob_table_offset(extended_blob_table_offset as u64);
super_block.set_extended_blob_table_entries(u32::try_from(extended_blob_table_entries)?);
super_block.set_prefetch_table_offset(prefetch_table_offset as u64);
super_block.set_prefetch_table_entries(prefetch_table_entries);
super_block.set_compressor(ctx.compressor);
super_block.set_digester(ctx.digester);
super_block.set_chunk_size(ctx.chunk_size);
if ctx.explicit_uidgid {
super_block.set_explicit_uidgid();
}
// Set inodes and chunks
let mut inode_offset = (super_block_size
+ inode_table_size
+ prefetch_table_size
+ blob_table_size
+ extended_blob_table_size) as u32;
let mut has_xattr = false;
self.tree.walk_dfs_pre(&mut |t| {
let node = t.borrow_mut_node();
inode_table.set(node.index, inode_offset)?;
// Add inode size
inode_offset += node.inode.inode_size() as u32;
if node.inode.has_xattr() {
has_xattr = true;
if !node.info.xattrs.is_empty() {
inode_offset += (size_of::<RafsV5XAttrsTable>()
+ node.info.xattrs.aligned_size_v5())
as u32;
}
}
// Add chunks size
if node.is_reg() {
inode_offset += node.inode.child_count() * size_of::<RafsV5ChunkInfo>() as u32;
}
Ok(())
})?;
if has_xattr {
super_block.set_has_xattr();
}
// Dump super block
super_block
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store superblock")?;
// Dump inode table
inode_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store inode table")?;
// Dump prefetch table
if let Some(mut prefetch_table) = ctx.prefetch.get_v5_prefetch_table() {
prefetch_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store prefetch table")?;
}
// Dump blob table
blob_table
.store(bootstrap_ctx.writer.as_mut())
.context("failed to store blob table")?;
// Dump extended blob table
blob_table
.store_extended(bootstrap_ctx.writer.as_mut())
.context("failed to store extended blob table")?;
// Dump inodes and chunks
timing_tracer!(
{
self.tree.walk_dfs_pre(&mut |t| {
t.borrow_mut_node()
.dump_bootstrap_v5(ctx, bootstrap_ctx.writer.as_mut())
.context("failed to dump bootstrap")
})
},
"dump_bootstrap"
)?;
Ok(())
}
}
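// Layout sketch (added for clarity, not part of the original source): `v5_dump()` writes the
// RAFS v5 metadata blob as consecutive regions, at the offsets computed above:
//
//   0                            RafsV5SuperBlock
//   + super_block_size           inode table
//   + inode_table_size           prefetch table (may be empty)
//   + prefetch_table_size        blob table
//   + blob_table_size            extended blob table
//   + extended_blob_table_size   inodes, xattrs and chunk info records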

1072
builder/src/core/v6.rs Normal file

File diff suppressed because it is too large

267
builder/src/directory.rs Normal file

@@ -0,0 +1,267 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::fs;
use std::fs::DirEntry;
use anyhow::{anyhow, Context, Result};
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};
use crate::core::context::{Artifact, NoopArtifactWriter};
use crate::core::prefetch;
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
};
use super::core::node::Node;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
struct FilesystemTreeBuilder {}
impl FilesystemTreeBuilder {
fn new() -> Self {
Self {}
}
#[allow(clippy::only_used_in_recursion)]
/// Walk directory to build node tree by DFS
fn load_children(
&self,
ctx: &mut BuildContext,
parent: &TreeNode,
layer_idx: u16,
) -> Result<(Vec<Tree>, Vec<Tree>)> {
let mut trees = Vec::new();
let mut external_trees = Vec::new();
let parent = parent.borrow();
if !parent.is_dir() {
return Ok((trees.clone(), external_trees));
}
let children = fs::read_dir(parent.path())
.with_context(|| format!("failed to read dir {:?}", parent.path()))?;
let children = children.collect::<Result<Vec<DirEntry>, std::io::Error>>()?;
event_tracer!("load_from_directory", +children.len());
for child in children {
let path = child.path();
let target = Node::generate_target(&path, &ctx.source_path);
let mut file_size: u64 = 0;
if ctx.attributes.is_external(&target) {
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
file_size = value.parse::<u64>().ok().ok_or_else(|| {
anyhow!(
"failed to parse file_size for external file {}",
&target.display()
)
})?;
}
}
let mut child = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
file_size,
parent.info.explicit_uidgid,
true,
)
.with_context(|| format!("failed to create node {:?}", path))?;
child.layer_idx = layer_idx;
// As per the OCI spec, whiteout files should not be present in the final image
// or filesystem; they only exist in layers.
if layer_idx == 0
&& child.whiteout_type(ctx.whiteout_spec).is_some()
&& !child.is_overlayfs_opaque(ctx.whiteout_spec)
{
continue;
}
let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
let (child_children, external_children) =
self.load_children(ctx, &child.node, layer_idx)?;
child.children = child_children;
external_child.children = external_children;
child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children);
external_child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_child.children);
if ctx.attributes.is_external(&target) {
external_trees.push(external_child);
} else {
// TODO: implement type=ignore for nydus attributes;
// ignore the tree as a workaround for now.
trees.push(child.clone());
if ctx.attributes.is_prefix_external(target) {
external_trees.push(external_child);
}
};
}
trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
Ok((trees, external_trees))
}
}
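// Illustrative note (added for clarity, not part of the original source): `load_children()`
// returns two trees per directory: the regular tree, and an "external" tree holding entries
// whose target is marked external in `ctx.attributes` (plus entries whose target prefixes an
// external path), so `DirectoryBuilder::build()` below can run a second build pass that emits
// a separate bootstrap and blob for externally stored files.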
#[derive(Default)]
pub struct DirectoryBuilder {}
impl DirectoryBuilder {
pub fn new() -> Self {
Self {}
}
/// Build node tree from a filesystem directory
fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
let node = Node::from_fs_object(
ctx.fs_version,
ctx.source_path.clone(),
ctx.source_path.clone(),
Overlay::UpperAddition,
ctx.chunk_size,
0,
ctx.explicit_uidgid,
true,
)?;
let mut tree = Tree::new(node.clone());
let mut external_tree = Tree::new(node);
let tree_builder = FilesystemTreeBuilder::new();
let (tree_children, external_tree_children) = timing_tracer!(
{ tree_builder.load_children(ctx, &tree.node, layer_idx) },
"load_from_directory"
)?;
tree.children = tree_children;
external_tree.children = external_tree_children;
tree.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &tree.children);
external_tree
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_tree.children);
Ok((tree, external_tree))
}
fn one_build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
blob_writer: &mut Box<dyn Artifact>,
tree: Tree,
) -> Result<BuildOutput> {
// Build bootstrap
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
impl Builder for DirectoryBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
// Scan source directory to build upper layer tree.
let (tree, external_tree) =
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build for tree
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
// Build for external tree
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
stor.add_suffix("external")
}
let mut external_blob_writer: Box<dyn Artifact> =
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let external_output = self.one_build(
ctx,
&mut external_bootstrap_mgr,
&mut external_blob_mgr,
&mut external_blob_writer,
external_tree,
)?;
output.external_bootstrap_path = external_output.bootstrap_path;
output.external_blobs = external_output.blobs;
Ok(output)
}
}

411
builder/src/lib.rs Normal file

@@ -0,0 +1,411 @@
// Copyright 2020 Ant Group. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Builder to create RAFS filesystems from directories and tarballs.
#[macro_use]
extern crate log;
use crate::core::context::Artifact;
use std::ffi::OsString;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Context, Result};
use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::{Inode, RafsVersion};
use nydus_storage::meta::toc;
use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{compress, digest, root_tracer, timing_tracer};
use sha2::Digest;
use self::core::node::{Node, NodeInfo};
pub use self::chunkdict_generator::ChunkdictBlobInfo;
pub use self::chunkdict_generator::ChunkdictChunkInfo;
pub use self::chunkdict_generator::Generator;
pub use self::compact::BlobCompactor;
pub use self::compact::Config as CompactConfig;
pub use self::core::bootstrap::Bootstrap;
pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict};
pub use self::core::context::{
ArtifactStorage, ArtifactWriter, BlobCacheGenerator, BlobContext, BlobManager,
BootstrapContext, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
pub use self::core::feature::{Feature, Features};
pub use self::core::node::{ChunkSource, NodeChunk};
pub use self::core::overlay::{Overlay, WhiteoutSpec};
pub use self::core::prefetch::{Prefetch, PrefetchPolicy};
pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode};
pub use self::directory::DirectoryBuilder;
pub use self::merge::Merger;
pub use self::optimize_prefetch::update_ctx_from_bootstrap;
pub use self::optimize_prefetch::OptimizePrefetch;
pub use self::stargz::StargzBuilder;
pub use self::tarball::TarballBuilder;
pub mod attributes;
mod chunkdict_generator;
mod compact;
mod core;
mod directory;
mod merge;
mod optimize_prefetch;
mod stargz;
mod tarball;
/// Trait to generate a RAFS filesystem from the source.
pub trait Builder {
fn build(
&mut self,
build_ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput>;
}
fn build_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
blob_mgr: &mut BlobManager,
mut tree: Tree,
) -> Result<Bootstrap> {
// For multi-layer build, merge the upper layer and lower layer with overlay whiteout applied.
if bootstrap_ctx.layered {
let mut parent = Bootstrap::load_parent_bootstrap(ctx, bootstrap_mgr, blob_mgr)?;
timing_tracer!({ parent.merge_overaly(ctx, tree) }, "merge_bootstrap")?;
tree = parent;
}
let mut bootstrap = Bootstrap::new(tree)?;
timing_tracer!({ bootstrap.build(ctx, bootstrap_ctx) }, "build_bootstrap")?;
Ok(bootstrap)
}
fn dump_bootstrap(
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
bootstrap_ctx: &mut BootstrapContext,
bootstrap: &mut Bootstrap,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
// Make sure blob id is updated according to blob hash if not specified by user.
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.blob_id.is_empty() {
// `Blob::dump()` should have set `blob_ctx.blob_id` to the referenced OCI tarball for
// ref-type conversions.
assert!(!ctx.conversion_type.is_to_ref());
if ctx.blob_inline_meta {
// Set special blob id for blob with inlined meta.
blob_ctx.blob_id = "x".repeat(64);
} else {
blob_ctx.blob_id = format!("{:x}", blob_ctx.blob_hash.clone().finalize());
}
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
}
// Dump bootstrap file
let blob_table = blob_mgr.to_blob_table(ctx)?;
let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, bootstrap_ctx, &blob_table)?;
// Dump RAFS meta to data blob if inline meta is enabled.
if ctx.blob_inline_meta {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
// Ensure the blob object is created even if no chunks were generated for the blob.
let blob_ctx = if blob_mgr.external {
&mut blob_mgr.new_blob_ctx(ctx)?
} else {
let (_, blob_ctx) = blob_mgr
.get_or_create_current_blob(ctx)
.map_err(|_e| anyhow!("failed to get current blob object"))?;
blob_ctx
};
let bootstrap_offset = blob_writer.pos()?;
let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
let uncompressed_size = uncompressed_bootstrap.len();
let uncompressed_digest =
RafsDigest::from_buf(&uncompressed_bootstrap, digest::Algorithm::Sha256);
// Output uncompressed data for backward compatibility and compressed data for new format.
let (bootstrap_data, compressor) = if ctx.features.is_enabled(Feature::BlobToc) {
let mut compressor = compress::Algorithm::Zstd;
let (compressed_data, compressed) =
compress::compress(&uncompressed_bootstrap, compressor)
.with_context(|| "failed to compress bootstrap".to_string())?;
blob_ctx.write_data(blob_writer, &compressed_data)?;
if !compressed {
compressor = compress::Algorithm::None;
}
(compressed_data, compressor)
} else {
blob_ctx.write_data(blob_writer, &uncompressed_bootstrap)?;
(uncompressed_bootstrap, compress::Algorithm::None)
};
let compressed_size = bootstrap_data.len();
blob_ctx.write_tar_header(
blob_writer,
toc::TOC_ENTRY_BOOTSTRAP,
compressed_size as u64,
)?;
if ctx.features.is_enabled(Feature::BlobToc) {
blob_ctx.entry_list.add(
toc::TOC_ENTRY_BOOTSTRAP,
compressor,
uncompressed_digest,
bootstrap_offset,
compressed_size as u64,
uncompressed_size as u64,
)?;
}
}
Ok(())
}
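// Illustrative sketch (added for clarity, not part of the original source): with inline meta
// and the blob TOC feature enabled, `dump_bootstrap()` above appends the following to the
// tail of the data blob:
//
//   ... chunk data ...
//   compressed bootstrap            <- kept uncompressed if zstd did not shrink it
//   tar header for toc::TOC_ENTRY_BOOTSTRAP
//   (the TOC itself is appended later by dump_toc())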
fn dump_toc(
ctx: &mut BuildContext,
blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if ctx.features.is_enabled(Feature::BlobToc) {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
let mut hasher = RafsDigest::hasher(digest::Algorithm::Sha256);
let data = blob_ctx.entry_list.as_bytes().to_vec();
let toc_size = data.len() as u64;
blob_ctx.write_data(blob_writer, &data)?;
hasher.digest_update(&data);
let header = blob_ctx.write_tar_header(blob_writer, toc::TOC_ENTRY_BLOB_TOC, toc_size)?;
hasher.digest_update(header.as_bytes());
blob_ctx.blob_toc_digest = hasher.digest_finalize().data;
blob_ctx.blob_toc_size = toc_size as u32 + header.as_bytes().len() as u32;
}
Ok(())
}
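// Illustrative note (added for clarity, not part of the original source): the TOC digest
// recorded by `dump_toc()` above covers both the serialized entry list and the tar header
// that wraps it, so `blob_toc_size` is the entry list length plus one 512-byte tar header.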
fn finalize_blob(
ctx: &mut BuildContext,
blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact,
) -> Result<()> {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
let is_tarfs = ctx.conversion_type == ConversionType::TarToTarfs;
if !is_tarfs {
dump_toc(ctx, blob_ctx, blob_writer)?;
}
if !ctx.conversion_type.is_to_ref() {
blob_ctx.compressed_blob_size = blob_writer.pos()?;
}
if ctx.blob_inline_meta && blob_ctx.blob_id == "x".repeat(64) {
blob_ctx.blob_id = String::new();
}
let hash = blob_ctx.blob_hash.clone().finalize();
let blob_meta_id = if ctx.blob_id.is_empty() {
format!("{:x}", hash)
} else {
assert!(!ctx.conversion_type.is_to_ref() || is_tarfs);
ctx.blob_id.clone()
};
if ctx.conversion_type.is_to_ref() {
if blob_ctx.blob_id.is_empty() {
// Use `sha256(tarball)` as `blob_id`. A tarball without files will fall through
// this path because `Blob::dump()` hasn't generated `blob_ctx.blob_id`.
if let Some(zran) = &ctx.blob_zran_generator {
let reader = zran.lock().unwrap().reader();
blob_ctx.compressed_blob_size = reader.get_data_size();
if blob_ctx.blob_id.is_empty() {
let hash = reader.get_data_digest();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
} else if let Some(tar_reader) = &ctx.blob_tar_reader {
blob_ctx.compressed_blob_size = tar_reader.position();
if blob_ctx.blob_id.is_empty() {
let hash = tar_reader.get_hash_object();
blob_ctx.blob_id = format!("{:x}", hash.finalize());
}
}
}
// Tarfs mode only has the tar stream and meta blob; there's no data blob.
if !ctx.blob_inline_meta && !is_tarfs {
blob_ctx.blob_meta_digest = hash.into();
blob_ctx.blob_meta_size = blob_writer.pos()?;
}
} else if blob_ctx.blob_id.is_empty() {
// `blob_ctx.blob_id` should be RAFS blob id.
blob_ctx.blob_id = blob_meta_id.clone();
}
// Tarfs mode directly uses the tar file as the RAFS data blob, so there is no need to
// generate the data blob file.
if !is_tarfs {
blob_writer.finalize(Some(blob_meta_id))?;
}
if let Some(blob_cache) = ctx.blob_cache_generator.as_ref() {
blob_cache.finalize(&blob_ctx.blob_id)?;
}
}
Ok(())
}
/// Helper for TarballBuilder/StargzBuilder to build the filesystem tree.
pub struct TarBuilder {
pub explicit_uidgid: bool,
pub layer_idx: u16,
pub version: RafsVersion,
next_ino: Inode,
}
impl TarBuilder {
/// Create a new instance of [TarBuilder].
pub fn new(explicit_uidgid: bool, layer_idx: u16, version: RafsVersion) -> Self {
TarBuilder {
explicit_uidgid,
layer_idx,
next_ino: 0,
version,
}
}
/// Allocate an inode number.
pub fn next_ino(&mut self) -> Inode {
self.next_ino += 1;
self.next_ino
}
/// Insert a node into the tree, creating any missing intermediate directories.
pub fn insert_into_tree(&mut self, tree: &mut Tree, node: Node) -> Result<()> {
let target_paths = node.target_vec();
let target_paths_len = target_paths.len();
if target_paths_len == 1 {
// Handle root node modification
assert_eq!(node.path(), Path::new("/"));
tree.set_node(node);
} else {
let mut tmp_tree = tree;
for idx in 1..target_paths.len() {
match tmp_tree.get_child_idx(target_paths[idx].as_bytes()) {
Some(i) => {
if idx == target_paths_len - 1 {
tmp_tree.children[i].set_node(node);
break;
} else {
tmp_tree = &mut tmp_tree.children[i];
}
}
None => {
if idx == target_paths_len - 1 {
tmp_tree.insert_child(Tree::new(node));
break;
} else {
let node = self.create_directory(&target_paths[..=idx])?;
tmp_tree.insert_child(Tree::new(node));
let last_idx = tmp_tree.children.len() - 1;
tmp_tree = &mut tmp_tree.children[last_idx];
}
}
}
}
}
Ok(())
}
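// Illustrative trace (added for clarity, not part of the original source): inserting a node
// whose target is "/a/b/c" into an empty tree walks the components "a", "b", "c"; "a" and
// "b" are missing, so `create_directory()` synthesizes them as 0o755 directories, and the
// node itself is attached as the child named "c". Inserting "/a/b/c" again later replaces
// the existing child via `set_node()` instead of adding a duplicate.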
/// Create a new node for a directory.
pub fn create_directory(&mut self, target_paths: &[OsString]) -> Result<Node> {
let ino = self.next_ino();
let name = &target_paths[target_paths.len() - 1];
let mut inode = InodeWrapper::new(self.version);
inode.set_ino(ino);
inode.set_mode(0o755 | libc::S_IFDIR as u32);
inode.set_nlink(2);
inode.set_name_size(name.len());
inode.set_rdev(u32::MAX);
let source = PathBuf::from("/");
let target_vec = target_paths.to_vec();
let mut target = PathBuf::new();
for name in target_paths.iter() {
target = target.join(name);
}
let info = NodeInfo {
explicit_uidgid: self.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: u64::MAX,
path: target.clone(),
source,
target,
target_vec,
symlink: None,
xattrs: RafsXAttrs::new(),
v6_force_extended_inode: false,
};
Ok(Node::new(inode, info, self.layer_idx))
}
/// Check whether the path is an eStargz special file.
pub fn is_stargz_special_files(&self, path: &Path) -> bool {
path == Path::new("/stargz.index.json")
|| path == Path::new("/.prefetch.landmark")
|| path == Path::new("/.no.prefetch.landmark")
}
}
#[cfg(test)]
mod tests {
use vmm_sys_util::tempdir::TempDir;
use super::*;
#[test]
fn test_tar_builder_is_stargz_special_files() {
let builder = TarBuilder::new(true, 0, RafsVersion::V6);
let path = Path::new("/stargz.index.json");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/.no.prefetch.landmark");
assert!(builder.is_stargz_special_files(&path));
let path = Path::new("/no.prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/prefetch.landmark");
assert!(!builder.is_stargz_special_files(&path));
let path = Path::new("/tar.index.json");
assert!(!builder.is_stargz_special_files(&path));
}
#[test]
fn test_tar_builder_create_directory() {
let tmp_dir = TempDir::new().unwrap();
let target_paths = [OsString::from(tmp_dir.as_path())];
let mut builder = TarBuilder::new(true, 0, RafsVersion::V6);
let node = builder.create_directory(&target_paths);
assert!(node.is_ok());
let node = node.unwrap();
println!("Node: {}", node);
assert_eq!(node.file_type(), "dir");
assert_eq!(node.target(), tmp_dir.as_path());
assert_eq!(builder.next_ino, 1);
assert_eq!(builder.next_ino(), 2);
}
}

440
builder/src/merge.rs Normal file

@@ -0,0 +1,440 @@
// Copyright (C) 2022 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{anyhow, bail, ensure, Context, Result};
use hex::FromHex;
use nydus_api::ConfigV2;
use nydus_rafs::metadata::{RafsSuper, RafsVersion};
use nydus_storage::device::{BlobFeatures, BlobInfo};
use nydus_utils::crypt;
use super::{
ArtifactStorage, BlobContext, BlobManager, Bootstrap, BootstrapContext, BuildContext,
BuildOutput, ChunkSource, ConversionType, Overlay, Tree,
};
/// Struct to generate the merged RAFS bootstrap for an image from per-layer RAFS bootstraps.
///
/// A container image contains one or more layers, and a RAFS bootstrap is built for each layer.
/// Those per-layer bootstraps could be mounted by overlayfs to form the container rootfs.
/// To improve performance by avoiding overlayfs, an image-level bootstrap is generated by
/// merging the per-layer bootstraps with the overlayfs rules applied.
pub struct Merger {}
impl Merger {
fn get_string_from_list(
original_ids: &Option<Vec<String>>,
idx: usize,
) -> Result<Option<String>> {
Ok(if let Some(id) = &original_ids {
let id_string = id
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(id_string.clone())
} else {
None
})
}
fn get_digest_from_list(digests: &Option<Vec<String>>, idx: usize) -> Result<Option<[u8; 32]>> {
Ok(if let Some(digests) = &digests {
let digest = digests
.get(idx)
.ok_or_else(|| anyhow!("unmatched digest index {}", idx))?;
Some(<[u8; 32]>::from_hex(digest)?)
} else {
None
})
}
fn get_size_from_list(sizes: &Option<Vec<u64>>, idx: usize) -> Result<Option<u64>> {
Ok(if let Some(sizes) = &sizes {
let size = sizes
.get(idx)
.ok_or_else(|| anyhow!("unmatched size index {}", idx))?;
Some(*size)
} else {
None
})
}
/// Overlay multiple RAFS filesystems into a merged RAFS filesystem.
///
/// # Arguments
/// - sources: contains one or more per-layer bootstraps in order from lower to higher.
/// - chunk_dict: contains the chunk dictionary used to build the per-layer bootstraps, or None.
#[allow(clippy::too_many_arguments)]
pub fn merge(
ctx: &mut BuildContext,
parent_bootstrap_path: Option<String>,
sources: Vec<PathBuf>,
blob_digests: Option<Vec<String>>,
original_blob_ids: Option<Vec<String>>,
blob_sizes: Option<Vec<u64>>,
blob_toc_digests: Option<Vec<String>>,
blob_toc_sizes: Option<Vec<u64>>,
target: ArtifactStorage,
chunk_dict: Option<PathBuf>,
config_v2: Arc<ConfigV2>,
) -> Result<BuildOutput> {
if sources.is_empty() {
bail!("source bootstrap list is empty, at least one bootstrap is required");
}
if let Some(digests) = blob_digests.as_ref() {
ensure!(
digests.len() == sources.len(),
"number of blob digest entries {} doesn't match number of sources {}",
digests.len(),
sources.len(),
);
}
if let Some(original_ids) = original_blob_ids.as_ref() {
ensure!(
original_ids.len() == sources.len(),
"number of original blob id entries {} doesn't match number of sources {}",
original_ids.len(),
sources.len(),
);
}
if let Some(sizes) = blob_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of blob size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
if let Some(toc_digests) = blob_toc_digests.as_ref() {
ensure!(
toc_digests.len() == sources.len(),
"number of toc digest entries {} doesn't match number of sources {}",
toc_digests.len(),
sources.len(),
);
}
if let Some(sizes) = blob_toc_sizes.as_ref() {
ensure!(
sizes.len() == sources.len(),
"number of toc size entries {} doesn't match number of sources {}",
sizes.len(),
sources.len(),
);
}
let mut tree: Option<Tree> = None;
let mut blob_mgr = BlobManager::new(ctx.digester, false);
let mut blob_idx_map = HashMap::new();
let mut parent_layers = 0;
// Load parent bootstrap
if let Some(parent_bootstrap_path) = &parent_bootstrap_path {
let (rs, _) =
RafsSuper::load_from_file(parent_bootstrap_path, config_v2.clone(), false)
.context(format!("load parent bootstrap {:?}", parent_bootstrap_path))?;
let blobs = rs.superblock.get_blob_infos();
for blob in &blobs {
let blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
blob_idx_map.insert(blob_ctx.blob_id.clone(), blob_mgr.len());
blob_mgr.add_blob(blob_ctx);
}
parent_layers = blobs.len();
tree = Some(Tree::from_bootstrap(&rs, &mut ())?);
}
// Get the blobs that come from the chunk dictionary.
let mut chunk_dict_blobs = HashSet::new();
let mut config = None;
if let Some(chunk_dict_path) = &chunk_dict {
let (rs, _) = RafsSuper::load_from_file(chunk_dict_path, config_v2.clone(), false)
.context(format!("load chunk dict bootstrap {:?}", chunk_dict_path))?;
config = Some(rs.meta.get_config());
for blob in rs.superblock.get_blob_infos() {
chunk_dict_blobs.insert(blob.blob_id().to_string());
}
}
let mut fs_version = RafsVersion::V6;
let mut chunk_size = None;
for (layer_idx, bootstrap_path) in sources.iter().enumerate() {
let (rs, _) = RafsSuper::load_from_file(bootstrap_path, config_v2.clone(), false)
.context(format!("load bootstrap {:?}", bootstrap_path))?;
config
.get_or_insert_with(|| rs.meta.get_config())
.check_compatibility(&rs.meta)?;
fs_version = RafsVersion::try_from(rs.meta.version)
.context("failed to get RAFS version number")?;
ctx.compressor = rs.meta.get_compressor();
ctx.digester = rs.meta.get_digester();
// If any RAFS filesystem is encrypted, the merged bootstrap will be marked as encrypted.
match rs.meta.get_cipher() {
crypt::Algorithm::None => (),
crypt::Algorithm::Aes128Xts => ctx.cipher = crypt::Algorithm::Aes128Xts,
_ => bail!("invalid per layer bootstrap, only supports aes-128-xts"),
}
ctx.explicit_uidgid = rs.meta.explicit_uidgid();
if config.as_ref().unwrap().is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToTarfs;
ctx.blob_features |= BlobFeatures::TARFS;
}
let mut parent_blob_added = false;
let blobs = &rs.superblock.get_blob_infos();
for blob in blobs {
let mut blob_ctx = BlobContext::from(ctx, &blob, ChunkSource::Parent)?;
if let Some(chunk_size) = chunk_size {
ensure!(
chunk_size == blob_ctx.chunk_size,
"can not merge bootstraps with inconsistent chunk size, current bootstrap {:?} with chunk size {:x}, expected {:x}",
bootstrap_path,
blob_ctx.chunk_size,
chunk_size,
);
} else {
chunk_size = Some(blob_ctx.chunk_size);
}
if !chunk_dict_blobs.contains(&blob.blob_id()) {
// It is assumed that the per-layer `nydus-image create` and the `nydus-image merge` commands
// use the same chunk dict bootstrap. So the parent bootstrap may include multiple blobs, but
// at most one new blob; the other blobs should come from the chunk dict image.
if parent_blob_added {
bail!("invalid per layer bootstrap, having multiple associated data blobs");
}
parent_blob_added = true;
if ctx.configuration.internal.blob_accessible()
|| ctx.conversion_type == ConversionType::TarToTarfs
{
// `blob.blob_id()` should have been fixed when loading the bootstrap.
blob_ctx.blob_id = blob.blob_id();
} else {
// The blob id (blob sha256 hash) in the parent bootstrap is invalid for the nydusd
// runtime, so change it to the hash of the whole tar blob.
if let Some(original_id) =
Self::get_string_from_list(&original_blob_ids, layer_idx)?
{
blob_ctx.blob_id = original_id;
} else {
blob_ctx.blob_id =
BlobInfo::get_blob_id_from_meta_path(bootstrap_path)?;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_digests, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
blob_ctx.blob_meta_digest = digest;
} else {
blob_ctx.blob_id = hex::encode(digest);
}
}
if let Some(size) = Self::get_size_from_list(&blob_sizes, layer_idx)? {
if blob.has_feature(BlobFeatures::SEPARATE) {
blob_ctx.blob_meta_size = size;
} else {
blob_ctx.compressed_blob_size = size;
}
}
if let Some(digest) = Self::get_digest_from_list(&blob_toc_digests, layer_idx)?
{
blob_ctx.blob_toc_digest = digest;
}
if let Some(size) = Self::get_size_from_list(&blob_toc_sizes, layer_idx)? {
blob_ctx.blob_toc_size = size as u32;
}
}
if let Entry::Vacant(e) = blob_idx_map.entry(blob.blob_id()) {
e.insert(blob_mgr.len());
blob_mgr.add_blob(blob_ctx);
}
}
let upper = Tree::from_bootstrap(&rs, &mut ())?;
upper.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = blobs[origin_blob_index].as_ref();
if let Some(blob_index) = blob_idx_map.get(&blob_ctx.blob_id()) {
// Set the blob index of chunk to real index in blob table of final bootstrap.
chunk.set_blob_index(*blob_index as u32);
}
}
// Set node's layer index to distinguish same inode number (from bootstrap)
// between different layers.
let idx = u16::try_from(layer_idx).context(format!(
"too many layers {}, limited to {}",
layer_idx,
u16::MAX
))?;
if parent_layers + idx as usize > u16::MAX as usize {
bail!("too many layers {}, limited to {}", layer_idx, u16::MAX);
}
node.layer_idx = idx + parent_layers as u16;
node.overlay = Overlay::UpperAddition;
Ok(())
})?;
if let Some(tree) = &mut tree {
tree.merge_overaly(ctx, upper)?;
} else {
tree = Some(upper);
}
}
if ctx.conversion_type == ConversionType::TarToTarfs {
if parent_layers > 0 {
bail!("merging RAFS in TARFS mode conflicts with `--parent-bootstrap`");
}
if !chunk_dict_blobs.is_empty() {
bail!("merging RAFS in TARFS mode conflicts with `--chunk-dict`");
}
}
// Safe to unwrap because there is at least one source bootstrap.
let tree = tree.unwrap();
ctx.fs_version = fs_version;
if let Some(chunk_size) = chunk_size {
ctx.chunk_size = chunk_size;
}
// After merging all trees, we need to re-calculate the blob index of
// referenced blobs, as the upper tree might have deleted some files
// or directories via opaque whiteouts, leaving some blobs dereferenced.
let mut used_blobs = HashMap::new(); // HashMap<blob_id, new_blob_index>
let mut used_blob_mgr = BlobManager::new(ctx.digester, false);
let origin_blobs = blob_mgr.get_blobs();
tree.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = origin_blobs[origin_blob_index].clone();
let origin_blob_id = blob_ctx.blob_id();
let new_blob_index = if let Some(new_blob_index) = used_blobs.get(&origin_blob_id) {
*new_blob_index
} else {
let new_blob_index = used_blob_mgr.len();
used_blobs.insert(origin_blob_id, new_blob_index);
used_blob_mgr.add_blob(blob_ctx);
new_blob_index
};
chunk.set_blob_index(new_blob_index as u32);
}
Ok(())
})?;
let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?;
let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = used_blob_mgr.to_blob_table(ctx)?;
let mut bootstrap_storage = Some(target.clone());
bootstrap
.dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table)
.context(format!("dump bootstrap to {:?}", target.display()))?;
BuildOutput::new(&used_blob_mgr, None, &bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use nydus_utils::digest;
use vmm_sys_util::tempfile::TempFile;
use super::*;
#[test]
fn test_merger_get_string_from_list() {
let res = Merger::get_string_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "string2".to_owned()];
let original_ids = Some(original_ids);
let res = Merger::get_string_from_list(&original_ids, 0);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some("string1".to_owned()));
assert!(Merger::get_string_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_digest_from_list() {
let res = Merger::get_digest_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec!["string1".to_owned(), "12ab".repeat(16)];
let original_ids = Some(original_ids);
let res = Merger::get_digest_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(
res.unwrap(),
Some([
18u8, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171,
18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171, 18, 171
])
);
assert!(Merger::get_digest_from_list(&original_ids, 0).is_err());
assert!(Merger::get_digest_from_list(&original_ids, 2).is_err());
}
#[test]
fn test_merger_get_size_from_list() {
let res = Merger::get_size_from_list(&None, 1);
assert!(res.is_ok());
assert!(res.unwrap().is_none());
let original_ids = vec![1u64, 2, 3, 4];
let original_ids = Some(original_ids);
let res = Merger::get_size_from_list(&original_ids, 1);
assert!(res.is_ok());
assert_eq!(res.unwrap(), Some(2u64));
assert!(Merger::get_size_from_list(&original_ids, 4).is_err());
}
#[test]
fn test_merger_merge() {
let mut ctx = BuildContext::default();
ctx.configuration.internal.set_blob_accessible(false);
ctx.digester = digest::Algorithm::Sha256;
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let mut source_path1 = PathBuf::from(root_dir);
source_path1.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let mut source_path2 = PathBuf::from(root_dir);
source_path2.push("../tests/texture/bootstrap/rafs-v6-2.2.boot");
let tmp_file = TempFile::new().unwrap();
let target = ArtifactStorage::SingleFile(tmp_file.as_path().to_path_buf());
let blob_toc_digests = Some(vec![
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855".to_owned(),
"4cf0c409788fc1c149afbf4c81276b92427ae41e46412334ca495991b8526650".to_owned(),
]);
let build_output = Merger::merge(
&mut ctx,
None,
vec![source_path1, source_path2],
Some(vec!["a70f".repeat(16), "9bd3".repeat(16)]),
Some(vec!["blob_id".to_owned(), "blob_id2".to_owned()]),
Some(vec![16u64, 32u64]),
blob_toc_digests,
Some(vec![64u64, 128]),
target,
None,
Arc::new(ConfigV2::new("config_v2")),
);
assert!(build_output.is_ok());
let build_output = build_output.unwrap();
println!("BuildOutput: {}", build_output);
assert_eq!(build_output.blob_size, Some(16));
}
}

302
builder/src/optimize_prefetch.rs Normal file

@@ -0,0 +1,302 @@
use crate::anyhow;
use crate::core::blob::Blob;
use crate::finalize_blob;
use crate::Artifact;
use crate::ArtifactWriter;
use crate::BlobContext;
use crate::BlobManager;
use crate::Bootstrap;
use crate::BootstrapManager;
use crate::BuildContext;
use crate::BuildOutput;
use crate::ChunkSource;
use crate::ConversionType;
use crate::NodeChunk;
use crate::Path;
use crate::PathBuf;
use crate::Tree;
use crate::TreeNode;
use anyhow::Context;
use anyhow::{Ok, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::layout::RafsBlobTable;
use nydus_rafs::metadata::RafsSuper;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobInfo;
use nydus_storage::meta::BatchContextGenerator;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress;
use sha2::Digest;
use std::fs::File;
use std::io::{Read, Seek, Write};
use std::mem::size_of;
use std::sync::Arc;
pub struct OptimizePrefetch {}
struct PrefetchBlobState {
blob_info: BlobInfo,
blob_ctx: BlobContext,
blob_writer: Box<dyn Artifact>,
}
impl PrefetchBlobState {
fn new(ctx: &BuildContext, blob_layer_num: u32, blobs_dir_path: &Path) -> Result<Self> {
let mut blob_info = BlobInfo::new(
blob_layer_num,
String::from("prefetch-blob"),
0,
0,
ctx.chunk_size,
u32::MAX,
ctx.blob_features,
);
blob_info.set_compressor(ctx.compressor);
blob_info.set_separated_with_prefetch_files_feature(true);
let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
blob_ctx.blob_meta_info_enabled = true;
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
blobs_dir_path.to_path_buf(),
String::new(),
)))
.map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
Ok(Self {
blob_info,
blob_ctx,
blob_writer,
})
}
}
impl OptimizePrefetch {
/// Generate a new bootstrap for prefetch.
pub fn generate_prefetch(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
blobs_dir_path: PathBuf,
prefetch_nodes: Vec<TreeNode>,
) -> Result<BuildOutput> {
// Create a new blob for the prefetch layer.
let blob_layer_num = match blob_table {
RafsBlobTable::V5(table) => table.get_all().len(),
RafsBlobTable::V6(table) => table.get_all().len(),
};
let mut blob_state = PrefetchBlobState::new(&ctx, blob_layer_num as u32, &blobs_dir_path)?;
let mut batch = BatchContextGenerator::new(0)?;
for node in &prefetch_nodes {
Self::process_prefetch_node(
tree,
&node,
&mut blob_state,
&mut batch,
blob_table,
&blobs_dir_path,
)?;
}
let blob_mgr = Self::dump_blob(ctx, blob_table, &mut blob_state)?;
debug!("prefetch blob id: {}", ctx.blob_id);
Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
fn build_dump_bootstrap(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
) -> Result<()> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree.clone())?;
// Build bootstrap
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table_withprefetch = match blob_table {
RafsBlobTable::V5(table) => RafsBlobTable::V5(table.clone()),
RafsBlobTable::V6(table) => RafsBlobTable::V6(table.clone()),
};
bootstrap.dump(
ctx,
&mut bootstrap_mgr.bootstrap_storage,
&mut bootstrap_ctx,
&blob_table_withprefetch,
)?;
Ok(())
}
fn dump_blob(
ctx: &mut BuildContext,
blob_table: &mut RafsBlobTable,
blob_state: &mut PrefetchBlobState,
) -> Result<BlobManager> {
match blob_table {
RafsBlobTable::V5(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
RafsBlobTable::V6(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
}
let mut blob_mgr = BlobManager::new(ctx.digester, false);
blob_mgr.add_blob(blob_state.blob_ctx.clone());
blob_mgr.set_current_blob_index(0);
Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(&ctx, blob_ctx, blob_state.blob_writer.as_mut()).unwrap();
};
ctx.blob_id = String::from("");
blob_mgr.get_current_blob().unwrap().1.blob_id = String::from("");
finalize_blob(ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
ctx.blob_id = blob_mgr
.get_current_blob()
.ok_or(anyhow!("failed to get current blob"))?
.1
.blob_id
.clone();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
// Verify and update prefetch blob
assert!(
entries
.iter()
.filter(|blob| blob.blob_id() == "prefetch-blob")
.count()
== 1,
"Expected exactly one prefetch-blob"
);
// Rewrite prefetch blob id
match blob_table {
RafsBlobTable::V5(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
RafsBlobTable::V6(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
}
Ok(blob_mgr)
}
fn process_prefetch_node(
tree: &mut Tree,
node: &TreeNode,
prefetch_state: &mut PrefetchBlobState,
batch: &mut BatchContextGenerator,
blob_table: &RafsBlobTable,
blobs_dir_path: &Path,
) -> Result<()> {
let tree_node = tree
.get_node_mut(&node.borrow().path())
.ok_or(anyhow!("failed to get node"))?
.node
.as_ref();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
let blob_id = tree_node
.borrow()
.chunks
.first()
.and_then(|chunk| entries.get(chunk.inner.blob_index() as usize).cloned())
.map(|entry| entry.blob_id())
.ok_or(anyhow!("failed to get blob id"))?;
let mut blob_file = Arc::new(File::open(blobs_dir_path.join(blob_id))?);
tree_node.borrow_mut().layer_idx = prefetch_state.blob_info.blob_index() as u16;
let mut child = tree_node.borrow_mut();
let chunks: &mut Vec<NodeChunk> = child.chunks.as_mut();
let blob_ctx = &mut prefetch_state.blob_ctx;
let blob_info = &mut prefetch_state.blob_info;
let encrypted = blob_ctx.blob_compressor != compress::Algorithm::None;
for chunk in chunks {
let inner = Arc::make_mut(&mut chunk.inner);
let mut buf = vec![0u8; inner.compressed_size() as usize];
blob_file.seek(std::io::SeekFrom::Start(inner.compressed_offset()))?;
blob_file.read_exact(&mut buf)?;
prefetch_state.blob_writer.write_all(&buf)?;
let info = batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
blob_ctx.current_uncompressed_offset,
inner.uncompressed_size(),
encrypted,
)?;
inner.set_blob_index(blob_info.blob_index());
if blob_ctx.chunk_count == u32::MAX {
blob_ctx.chunk_count = 0;
}
inner.set_index(blob_ctx.chunk_count);
blob_ctx.chunk_count += 1;
inner.set_compressed_offset(blob_ctx.current_compressed_offset);
inner.set_uncompressed_offset(blob_ctx.current_uncompressed_offset);
let aligned_d_size: u64 = nydus_utils::try_round_up_4k(inner.uncompressed_size())
.ok_or_else(|| anyhow!("invalid size"))?;
blob_ctx.compressed_blob_size += inner.compressed_size() as u64;
blob_ctx.uncompressed_blob_size += aligned_d_size;
blob_ctx.current_compressed_offset += inner.compressed_size() as u64;
blob_ctx.current_uncompressed_offset += aligned_d_size;
blob_ctx.add_chunk_meta_info(&inner, Some(info))?;
blob_ctx.blob_hash.update(&buf);
blob_info.set_meta_ci_compressed_size(
(blob_info.meta_ci_compressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
blob_info.set_meta_ci_uncompressed_size(
(blob_info.meta_ci_uncompressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
}
Ok(())
}
}
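// Illustrative summary (added for clarity, not part of the original source):
// `process_prefetch_node()` reads every chunk of a prefetched file from its original blob
// file, appends the raw compressed bytes to the new "prefetch-blob", and rewrites the
// chunk's blob index, chunk index and offsets so the merged bootstrap points at the
// prefetch blob instead of the original one.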
fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: String) {
entries
.iter_mut()
.filter(|blob| blob.blob_id() == blob_id)
.for_each(|blob| {
let mut info = (**blob).clone();
info.set_blob_id(new_blob_id.clone());
*blob = Arc::new(info);
});
}
pub fn update_ctx_from_bootstrap(
ctx: &mut BuildContext,
config: Arc<ConfigV2>,
bootstrap_path: &Path,
) -> Result<RafsSuper> {
let (sb, _) = RafsSuper::load_from_file(bootstrap_path, config, false)?;
ctx.blob_features = sb
.superblock
.get_blob_infos()
.first()
.ok_or_else(|| anyhow!("No blob info found in superblock"))?
.features();
let config = sb.meta.get_config();
if config.is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToRafs;
}
ctx.fs_version =
RafsVersion::try_from(sb.meta.version).context("Failed to get RAFS version")?;
ctx.compressor = config.compressor;
Ok(sb)
}
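// Usage sketch (illustrative, not part of the original source; the bootstrap path is
// hypothetical): callers typically load the existing bootstrap first so the build context
// matches the original image, then build the prefetch-optimized bootstrap from it:
//
//   let sb = update_ctx_from_bootstrap(&mut ctx, config, Path::new("meta/bootstrap"))?;
//   let mut tree = Tree::from_bootstrap(&sb, &mut ())?;
//   // ... then call OptimizePrefetch::generate_prefetch() with the loaded tree.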

1059
builder/src/stargz.rs Normal file

File diff suppressed because it is too large

744
builder/src/tarball.rs Normal file

@@ -0,0 +1,744 @@
// Copyright 2022 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Generate RAFS filesystem from a tarball.
//!
//! It supports generating a RAFS filesystem from a tar/targz/stargz file, with or without a data blob.
//!
//! The tarball data is arranged as a sequence of tar headers with the associated file data interleaved:
//! - (tar header) (tar header) (file data) (tar header) (file data) (tar header)
//! To support reading tarball data from a FIFO, we can only go over the tarball stream once,
//! so the workflow is:
//! - for each tar header from the stream
//! -- generate RAFS filesystem node from the tar header
//! -- optionally dump file data associated with the tar header into RAFS data blob
//! - arrange all generated RAFS nodes into a RAFS filesystem tree
//! - dump the RAFS filesystem tree into RAFS metadata blob
use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufReader, Read, Seek, SeekFrom};
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use std::sync::Mutex;
use anyhow::{anyhow, bail, Context, Result};
use tar::{Archive, Entry, EntryType, Header};
use nydus_api::enosys;
use nydus_rafs::metadata::inode::{InodeWrapper, RafsInodeFlags, RafsV6Inode};
use nydus_rafs::metadata::layout::v5::RafsV5Inode;
use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::ZranContextGenerator;
use nydus_storage::RAFS_MAX_CHUNKS_PER_BLOB;
use nydus_utils::compact::makedev;
use nydus_utils::compress::zlib_random::{ZranReader, ZRAN_READER_BUF_SIZE};
use nydus_utils::compress::ZlibDecoder;
use nydus_utils::digest::RafsDigest;
use nydus_utils::{div_round_up, lazy_drop, root_tracer, timing_tracer, BufReaderInfo, ByteSize};
use crate::core::context::{Artifact, NoopArtifactWriter};
use super::core::blob::Blob;
use super::core::context::{
ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput, ConversionType,
};
use super::core::node::{Node, NodeInfo};
use super::core::tree::Tree;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, TarBuilder};
enum CompressionType {
None,
Gzip,
}
enum TarReader {
File(File),
BufReader(BufReader<File>),
BufReaderInfo(BufReaderInfo<File>),
BufReaderInfoSeekable(BufReaderInfo<File>),
TarGzFile(Box<ZlibDecoder<File>>),
TarGzBufReader(Box<ZlibDecoder<BufReader<File>>>),
ZranReader(ZranReader<File>),
}
impl Read for TarReader {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
match self {
TarReader::File(f) => f.read(buf),
TarReader::BufReader(f) => f.read(buf),
TarReader::BufReaderInfo(b) => b.read(buf),
TarReader::BufReaderInfoSeekable(b) => b.read(buf),
TarReader::TarGzFile(f) => f.read(buf),
TarReader::TarGzBufReader(b) => b.read(buf),
TarReader::ZranReader(f) => f.read(buf),
}
}
}
impl TarReader {
fn seekable(&self) -> bool {
matches!(
self,
TarReader::File(_) | TarReader::BufReaderInfoSeekable(_)
)
}
}
impl Seek for TarReader {
fn seek(&mut self, pos: SeekFrom) -> std::io::Result<u64> {
match self {
TarReader::File(f) => f.seek(pos),
TarReader::BufReaderInfoSeekable(b) => b.seek(pos),
_ => Err(enosys!("seek() not supported!")),
}
}
}
struct TarballTreeBuilder<'a> {
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
buf: Vec<u8>,
builder: TarBuilder,
}
impl<'a> TarballTreeBuilder<'a> {
/// Create a new instance of `TarballBuilder`.
pub fn new(
ty: ConversionType,
ctx: &'a mut BuildContext,
blob_mgr: &'a mut BlobManager,
blob_writer: &'a mut dyn Artifact,
layer_idx: u16,
) -> Self {
let builder = TarBuilder::new(ctx.explicit_uidgid, layer_idx, ctx.fs_version);
Self {
ty,
ctx,
blob_mgr,
buf: Vec::new(),
blob_writer,
builder,
}
}
fn build_tree(&mut self) -> Result<Tree> {
let file = OpenOptions::new()
.read(true)
.open(self.ctx.source_path.clone())
.context("tarball: can not open source file for conversion")?;
let mut is_file = match file.metadata() {
Ok(md) => md.file_type().is_file(),
Err(_) => false,
};
let reader = match self.ty {
ConversionType::EStargzToRef
| ConversionType::TargzToRef
| ConversionType::TarToRef => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
let generator = ZranContextGenerator::from_buf_reader(buf_reader)?;
let reader = generator.reader();
self.ctx.blob_zran_generator = Some(Mutex::new(generator));
self.ctx.blob_features.insert(BlobFeatures::ZRAN);
TarReader::ZranReader(reader)
}
(CompressionType::None, buf_reader) => {
self.ty = ConversionType::TarToRef;
let reader = BufReaderInfo::from_buf_reader(buf_reader);
self.ctx.blob_tar_reader = Some(reader.clone());
TarReader::BufReaderInfo(reader)
}
},
ConversionType::EStargzToRafs
| ConversionType::TargzToRafs
| ConversionType::TarToRafs => match Self::detect_compression_algo(file)? {
(CompressionType::Gzip, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::TarGzFile(Box::new(ZlibDecoder::new(file)))
} else {
TarReader::TarGzBufReader(Box::new(ZlibDecoder::new(buf_reader)))
}
}
(CompressionType::None, buf_reader) => {
if is_file {
let mut file = buf_reader.into_inner();
file.seek(SeekFrom::Start(0))?;
TarReader::File(file)
} else {
TarReader::BufReader(buf_reader)
}
}
},
ConversionType::TarToTarfs => {
let mut reader = BufReaderInfo::from_buf_reader(BufReader::new(file));
self.ctx.blob_tar_reader = Some(reader.clone());
if !self.ctx.blob_id.is_empty() {
reader.enable_digest_calculation(false);
} else {
// Disable seek when we need to calculate the hash value.
is_file = false;
}
// Only enable seek when hash computing is disabled.
if is_file {
TarReader::BufReaderInfoSeekable(reader)
} else {
TarReader::BufReaderInfo(reader)
}
}
_ => return Err(anyhow!("tarball: unsupported image conversion type")),
};
let is_seekable = reader.seekable();
let mut tar = Archive::new(reader);
tar.set_ignore_zeros(true);
tar.set_preserve_mtime(true);
tar.set_preserve_permissions(true);
tar.set_unpack_xattrs(true);
// Prepare scratch buffer for dumping file data.
if self.buf.len() < self.ctx.chunk_size as usize {
self.buf = vec![0u8; self.ctx.chunk_size as usize];
}
// Generate the root node in advance; it may be overwritten by entries from the tar stream.
let root = self.builder.create_directory(&[OsString::from("/")])?;
let mut tree = Tree::new(root);
// Generate a RAFS node for each tar entry, and optionally add missing parents.
let entries = if is_seekable {
tar.entries_with_seek()
.context("tarball: failed to read entries from tar")?
} else {
tar.entries()
.context("tarball: failed to read entries from tar")?
};
for entry in entries {
let mut entry = entry.context("tarball: failed to read entry from tar")?;
let path = entry
.path()
.context("tarball: failed to get path from tar entry")?;
let path = PathBuf::from("/").join(path);
let path = path.components().as_path();
if !self.builder.is_stargz_special_files(path) {
self.parse_entry(&mut tree, &mut entry, path)?;
}
}
// Update directory size for RAFS V5 after generating the tree.
if self.ctx.fs_version.is_v5() {
Self::set_v5_dir_size(&mut tree);
}
Ok(tree)
}
fn parse_entry<R: Read>(
&mut self,
tree: &mut Tree,
entry: &mut Entry<R>,
path: &Path,
) -> Result<()> {
let header = entry.header();
let entry_type = header.entry_type();
if entry_type.is_gnu_longname() {
return Err(anyhow!("tarball: unsupported gnu_longname from tar header"));
} else if entry_type.is_gnu_longlink() {
return Err(anyhow!("tarball: unsupported gnu_longlink from tar header"));
} else if entry_type.is_pax_local_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_local_extensions from tar header"
));
} else if entry_type.is_pax_global_extensions() {
return Err(anyhow!(
"tarball: unsupported pax_global_extensions from tar header"
));
} else if entry_type.is_contiguous() {
return Err(anyhow!(
"tarball: unsupported contiguous entry type from tar header"
));
} else if entry_type.is_gnu_sparse() {
return Err(anyhow!(
"tarball: unsupported gnu sparse file extension from tar header"
));
}
let mut file_size = entry.size();
let name = Self::get_file_name(path)?;
let mode = Self::get_mode(header)?;
let (uid, gid) = Self::get_uid_gid(self.ctx, header)?;
let mtime = header.mtime().unwrap_or_default();
let mut flags = match self.ctx.fs_version {
RafsVersion::V5 => RafsInodeFlags::default(),
RafsVersion::V6 => RafsInodeFlags::default(),
};
// Parse special files
let rdev = if entry_type.is_block_special()
|| entry_type.is_character_special()
|| entry_type.is_fifo()
{
let major = header
.device_major()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get major device from tar entry"))?;
let minor = header
.device_minor()
.context("tarball: failed to get device major from tar entry")?
.ok_or_else(|| anyhow!("tarball: failed to get minor device from tar entry"))?;
makedev(major as u64, minor as u64) as u32
} else {
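// Not a device file: use u32::MAX as a placeholder device number.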
u32::MAX
};
// Parse symlink
let (symlink, symlink_size) = if entry_type.is_symlink() {
let symlink_link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let symlink_size = symlink_link_path.as_os_str().byte_size();
if symlink_size > u16::MAX as usize {
bail!("tarball: symlink target from tar entry is too big");
}
file_size = symlink_size as u64;
flags |= RafsInodeFlags::SYMLINK;
(
Some(symlink_link_path.as_os_str().to_owned()),
symlink_size as u16,
)
} else {
(None, 0)
};
let mut child_count = 0;
if entry_type.is_file() {
child_count = div_round_up(file_size, self.ctx.chunk_size as u64);
if child_count > RAFS_MAX_CHUNKS_PER_BLOB as u64 {
bail!("tarball: file size 0x{:x} is too big", file_size);
}
}
// Handle hardlink ino
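// A tar hardlink entry names an earlier regular-file entry; look that target up in the tree and reuse its inode number.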
let mut hardlink_target = None;
let ino = if entry_type.is_hard_link() {
let link_path = entry
.link_name()
.context("tarball: failed to get target path for tar symlink entry")?
.ok_or_else(|| anyhow!("tarball: failed to get symlink target tor tar entry"))?;
let link_path = PathBuf::from("/").join(link_path);
let link_path = link_path.components().as_path();
let targets = Node::generate_target_vec(link_path);
assert!(!targets.is_empty());
let mut tmp_tree: &Tree = tree;
for name in &targets[1..] {
match tmp_tree.get_child_idx(name.as_bytes()) {
Some(idx) => tmp_tree = &tmp_tree.children[idx],
None => {
bail!(
"tarball: unknown target {} for hardlink {}",
link_path.display(),
path.display()
);
}
}
}
let mut tmp_node = tmp_tree.borrow_mut_node();
if !tmp_node.is_reg() {
bail!(
"tarball: target {} for hardlink {} is not a regular file",
link_path.display(),
path.display()
);
}
hardlink_target = Some(tmp_tree);
flags |= RafsInodeFlags::HARDLINK;
tmp_node.inode.set_has_hardlink(true);
tmp_node.inode.ino()
} else {
self.builder.next_ino()
};
// Parse xattrs
let mut xattrs = RafsXAttrs::new();
if let Some(exts) = entry.pax_extensions()? {
for p in exts {
match p {
Ok(pax) => {
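// PAX records prefixed with "SCHILY.xattr." carry extended attributes; strip the prefix to recover the xattr key.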
let prefix = b"SCHILY.xattr.";
let key = pax.key_bytes();
if key.starts_with(prefix) {
let x_key = OsStr::from_bytes(&key[prefix.len()..]);
xattrs.add(x_key.to_os_string(), pax.value_bytes().to_vec())?;
}
}
Err(e) => {
return Err(anyhow!(
"tarball: failed to parse PaxExtension from tar header, {}",
e
))
}
}
}
}
let mut inode = match self.ctx.fs_version {
RafsVersion::V5 => InodeWrapper::V5(RafsV5Inode {
i_digest: RafsDigest::default(),
i_parent: 0,
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_index: 0,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
i_reserved: [0; 8],
}),
RafsVersion::V6 => InodeWrapper::V6(RafsV6Inode {
i_ino: ino,
i_projid: 0,
i_uid: uid,
i_gid: gid,
i_mode: mode,
i_size: file_size,
i_nlink: 1,
i_blocks: 0,
i_flags: flags,
i_child_count: child_count as u32,
i_name_size: name.len() as u16,
i_symlink_size: symlink_size,
i_rdev: rdev,
i_mtime: mtime,
i_mtime_nsec: 0,
}),
};
inode.set_has_xattr(!xattrs.is_empty());
let source = PathBuf::from("/");
let target = Node::generate_target(path, &source);
let target_vec = Node::generate_target_vec(&target);
let info = NodeInfo {
explicit_uidgid: self.ctx.explicit_uidgid,
src_ino: ino,
src_dev: u64::MAX,
rdev: rdev as u64,
path: path.to_path_buf(),
source,
target,
target_vec,
symlink,
xattrs,
v6_force_extended_inode: false,
};
let mut node = Node::new(inode, info, self.builder.layer_idx);
// Special handling of hardlink.
// A tar hardlink header has zero file size and no associated file data, so copy the values from
// the associated regular file.
if let Some(t) = hardlink_target {
let n = t.borrow_mut_node();
if n.inode.is_v5() {
node.inode.set_digest(n.inode.digest().to_owned());
}
node.inode.set_size(n.inode.size());
node.inode.set_child_count(n.inode.child_count());
node.chunks = n.chunks.clone();
node.set_xattr(n.info.xattrs.clone());
} else {
node.dump_node_data_with_reader(
self.ctx,
self.blob_mgr,
self.blob_writer,
Some(entry),
&mut self.buf,
)?;
}
// Update inode.i_blocks for RAFS v5.
if self.ctx.fs_version == RafsVersion::V5 && !entry_type.is_dir() {
node.v5_set_inode_blocks();
}
self.builder.insert_into_tree(tree, node)
}
fn get_uid_gid(ctx: &BuildContext, header: &Header) -> Result<(u32, u32)> {
let uid = if ctx.explicit_uidgid {
header.uid().unwrap_or_default()
} else {
0
};
let gid = if ctx.explicit_uidgid {
header.gid().unwrap_or_default()
} else {
0
};
if uid > u32::MAX as u64 || gid > u32::MAX as u64 {
bail!(
"tarball: uid {:x} or gid {:x} from tar entry is out of range",
uid,
gid
);
}
Ok((uid as u32, gid as u32))
}
fn get_mode(header: &Header) -> Result<u32> {
let mode = header
.mode()
.context("tarball: failed to get permission/mode from tar entry")?;
let ty = match header.entry_type() {
EntryType::Regular | EntryType::Link => libc::S_IFREG,
EntryType::Directory => libc::S_IFDIR,
EntryType::Symlink => libc::S_IFLNK,
EntryType::Block => libc::S_IFBLK,
EntryType::Char => libc::S_IFCHR,
EntryType::Fifo => libc::S_IFIFO,
_ => bail!("tarball: unsupported tar entry type"),
};
Ok((mode & !libc::S_IFMT as u32) | ty as u32)
}
fn get_file_name(path: &Path) -> Result<&OsStr> {
let name = if path == Path::new("/") {
path.as_os_str()
} else {
path.file_name().ok_or_else(|| {
anyhow!(
"tarball: failed to get file name from tar entry with path {}",
path.display()
)
})?
};
if name.len() > u16::MAX as usize {
bail!(
"tarball: file name {} from tar entry is too long",
name.to_str().unwrap_or_default()
);
}
Ok(name)
}
fn set_v5_dir_size(tree: &mut Tree) {
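// Process children first so that each child's size is final before the parent directory size is computed.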
for c in &mut tree.children {
Self::set_v5_dir_size(c);
}
let mut node = tree.borrow_mut_node();
node.v5_set_dir_size(RafsVersion::V5, &tree.children);
}
fn detect_compression_algo(file: File) -> Result<(CompressionType, BufReader<File>)> {
// Use a 64K buffer to stay consistent with zlib-random.
let mut buf_reader = BufReader::with_capacity(ZRAN_READER_BUF_SIZE, file);
let mut buf = [0u8; 3];
buf_reader.read_exact(&mut buf)?;
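// A gzip stream starts with the magic bytes 0x1f 0x8b followed by the compression method byte 0x08 (deflate).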
if buf[0] == 0x1f && buf[1] == 0x8b && buf[2] == 0x08 {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::Gzip, buf_reader))
} else {
buf_reader.seek_relative(-3).unwrap();
Ok((CompressionType::None, buf_reader))
}
}
}
/// Builder to create RAFS filesystems from tarballs.
pub struct TarballBuilder {
ty: ConversionType,
}
impl TarballBuilder {
/// Create a new instance of [TarballBuilder] to build a RAFS filesystem from a tarball.
pub fn new(conversion_type: ConversionType) -> Self {
Self {
ty: conversion_type,
}
}
}
impl Builder for TarballBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
let mut blob_writer: Box<dyn Artifact> = match self.ty {
ConversionType::EStargzToRafs
| ConversionType::EStargzToRef
| ConversionType::TargzToRafs
| ConversionType::TargzToRef
| ConversionType::TarToRafs
| ConversionType::TarToTarfs => {
if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
}
}
_ => {
return Err(anyhow!(
"tarball: unsupported image conversion type '{}'",
self.ty
))
}
};
let mut tree_builder =
TarballTreeBuilder::new(self.ty, ctx, blob_mgr, blob_writer.as_mut(), layer_idx);
let tree = timing_tracer!({ tree_builder.build_tree() }, "build_tree")?;
// Build bootstrap
let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap"
)?;
// Dump blob file
timing_tracer!(
{ Blob::dump(ctx, blob_mgr, blob_writer.as_mut()) },
"dump_blob"
)?;
// Dump blob meta information
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(ctx, blob_ctx, blob_writer.as_mut())?;
}
// Dump RAFS meta/bootstrap and finalize the data blob.
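// When the meta is inlined into the data blob, the bootstrap must be written before the blob is finalized;
// otherwise the blob is finalized first.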
if ctx.blob_inline_meta {
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
} else {
finalize_blob(ctx, blob_mgr, blob_writer.as_mut())?;
timing_tracer!(
{
dump_bootstrap(
ctx,
bootstrap_mgr,
&mut bootstrap_ctx,
&mut bootstrap,
blob_mgr,
blob_writer.as_mut(),
)
},
"dump_bootstrap"
)?;
}
lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::attributes::Attributes;
use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
use nydus_utils::{compress, digest};
#[test]
fn test_build_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
false,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
#[test]
fn test_build_encrypted_tarfs() {
let tmp_dir = vmm_sys_util::tempdir::TempDir::new().unwrap();
let tmp_dir = tmp_dir.as_path().to_path_buf();
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let source_path = PathBuf::from(root_dir).join("../tests/texture/tar/all-entry-type.tar");
let prefetch = Prefetch::default();
let mut ctx = BuildContext::new(
"test".to_string(),
true,
0,
compress::Algorithm::None,
digest::Algorithm::Sha256,
true,
WhiteoutSpec::Oci,
ConversionType::TarToTarfs,
source_path,
prefetch,
Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false,
Features::new(),
true,
Attributes::default(),
);
let mut bootstrap_mgr = BootstrapManager::new(
Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
.unwrap();
}
}

28
clib/Cargo.toml Normal file

@ -0,0 +1,28 @@
[package]
name = "nydus-clib"
version = "0.1.0"
description = "C wrapper library for Nydus SDK"
authors = ["The Nydus Developers"]
license = "Apache-2.0"
homepage = "https://nydus.dev/"
repository = "https://github.com/dragonflyoss/nydus"
edition = "2021"
[lib]
name = "nydus_clib"
crate-type = ["cdylib", "staticlib"]
[dependencies]
libc = "0.2.137"
log = "0.4.17"
fuse-backend-rs = "^0.12.0"
nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.7.0", path = "../storage" }
[features]
backend-s3 = ["nydus-storage/backend-s3"]
backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-http-proxy = ["nydus-storage/backend-http-proxy"]
backend-localdisk = ["nydus-storage/backend-localdisk"]

1
clib/LICENSE-APACHE Symbolic link

@ -0,0 +1 @@
../LICENSE-APACHE


@ -0,0 +1,20 @@
#include <stdio.h>
#include "../nydus.h"
int main(int argc, char **argv)
{
char *bootstrap = "../../tests/texture/repeatable/sha256-nocompress-repeatable";
char *config = "version = 2\nid = \"my_id\"\n[backend]\ntype = \"localfs\"\n[backend.localfs]\ndir = \"../../tests/texture/repeatable/blobs\"\n[cache]\ntype = \"dummycache\"\n[rafs]";
NydusFsHandle fs_handle;
fs_handle = nydus_open_rafs(bootstrap, config);
if (fs_handle == NYDUS_INVALID_FS_HANDLE) {
printf("failed to open rafs filesystem from ../../tests/texture/repeatable/sha256-nocompress-repeatable\n");
return -1;
}
printf("succeed to open rafs filesystem from ../../tests/texture/repeatable/sha256-nocompress-repeatable\n");
nydus_close_rafs(fs_handle);
return 0;
}

70
clib/include/nydus.h Normal file

@ -0,0 +1,70 @@
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
/**
* Magic number for Nydus file handle.
*/
#define NYDUS_FILE_HANDLE_MAGIC 17148644263605784967ull
/**
* Value representing an invalid Nydus file handle.
*/
#define NYDUS_INVALID_FILE_HANDLE 0
/**
* Magic number for Nydus filesystem handle.
*/
#define NYDUS_FS_HANDLE_MAGIC 17148643159786606983ull
/**
* Value representing an invalid Nydus filesystem handle.
*/
#define NYDUS_INVALID_FS_HANDLE 0
/**
* Handle representing a Nydus file object.
*/
typedef uintptr_t NydusFileHandle;
/**
* Handle representing a Nydus filesystem object.
*/
typedef uintptr_t NydusFsHandle;
/**
* Open the file with `path` in readonly mode.
*
* The `NydusFileHandle` returned should be freed by calling `nydus_fclose()`.
*/
NydusFileHandle nydus_fopen(NydusFsHandle fs_handle, const char *path);
/**
* Close the file handle returned by `nydus_fopen()`.
*/
void nydus_fclose(NydusFileHandle handle);
/**
* Open a RAFS filesystem and return a handle to the filesystem object.
*
* The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
* it will cause memory leak.
*/
NydusFsHandle nydus_open_rafs(const char *bootstrap, const char *config);
/**
* Open a RAFS filesystem with default configuration and return a handle to the filesystem object.
*
* The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
* it will cause memory leak.
*/
NydusFsHandle nydus_open_rafs_default(const char *bootstrap, const char *dir_path);
/**
* Close the RAFS filesystem returned by `nydus_open_rafs()` and friends.
*
* All `NydusFileHandle` objects created from the `NydusFsHandle` should be freed before calling
* `nydus_close_rafs()`, otherwise it may cause panic.
*/
void nydus_close_rafs(NydusFsHandle handle);

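A minimal sketch of the handle lifecycle described by the header above. The bootstrap path `./bootstrap`, blob directory `./blobs`, and the file path `/etc/os-release` are purely illustrative assumptions and need to be adjusted to the actual image layout; the key point is that every file handle must be closed before the filesystem handle is released:
```
#include <stdio.h>
#include "nydus.h"

int main(void)
{
    /* Open a RAFS filesystem with the default localfs configuration. */
    NydusFsHandle fs = nydus_open_rafs_default("./bootstrap", "./blobs");
    if (fs == NYDUS_INVALID_FS_HANDLE) {
        printf("failed to open rafs filesystem\n");
        return 1;
    }
    /* Open and close a file inside the image. */
    NydusFileHandle file = nydus_fopen(fs, "/etc/os-release");
    if (file != NYDUS_INVALID_FILE_HANDLE)
        nydus_fclose(file);   /* free all file handles first */
    nydus_close_rafs(fs);     /* then release the filesystem handle */
    return 0;
}
```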
90
clib/src/file.rs Normal file

@ -0,0 +1,90 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Implement file operations for RAFS filesystem in userspace.
//!
//! Provide the following file operation functions to access files in a RAFS filesystem:
//! - fopen:
//! - fclose:
//! - fread:
//! - fwrite:
//! - fseek:
//! - ftell
use std::os::raw::c_char;
use std::ptr::null_mut;
use fuse_backend_rs::api::filesystem::{Context, FileSystem};
use crate::{set_errno, FileSystemState, Inode, NydusFsHandle};
/// Magic number for Nydus file handle.
pub const NYDUS_FILE_HANDLE_MAGIC: u64 = 0xedfc_3919_afc3_5187;
/// Value representing an invalid Nydus file handle.
pub const NYDUS_INVALID_FILE_HANDLE: usize = 0;
/// Handle representing a Nydus file object.
pub type NydusFileHandle = usize;
#[repr(C)]
pub(crate) struct FileState {
magic: u64,
ino: Inode,
pos: u64,
fs_handle: NydusFsHandle,
}
/// Open the file with `path` in readonly mode.
///
/// The `NydusFileHandle` returned should be freed by calling `nydus_fclose()`.
///
/// # Safety
/// Caller needs to ensure `fs_handle` and `path` are valid, otherwise it may cause memory access
/// violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_fopen(
fs_handle: NydusFsHandle,
path: *const c_char,
) -> NydusFileHandle {
if path.is_null() {
set_errno(libc::EINVAL);
return null_mut::<FileState>() as NydusFileHandle;
}
let fs = match FileSystemState::try_from_handle(fs_handle) {
Err(e) => {
set_errno(e);
return null_mut::<FileState>() as NydusFileHandle;
}
Ok(v) => v,
};
////////////////////////////////////////////////////////////
// TODO: open file;
//////////////////////////////////////////////////////////////////////////
let file = Box::new(FileState {
magic: NYDUS_FILE_HANDLE_MAGIC,
ino: fs.root_ino,
pos: 0,
fs_handle,
});
Box::into_raw(file) as NydusFileHandle
}
/// Close the file handle returned by `nydus_fopen()`.
///
/// # Safety
/// Caller needs to ensure `handle` is valid, otherwise it may cause memory access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_fclose(handle: NydusFileHandle) {
let mut file = Box::from_raw(handle as *mut FileState);
assert_eq!(file.magic, NYDUS_FILE_HANDLE_MAGIC);
let ctx = Context::default();
let fs = FileSystemState::from_handle(file.fs_handle);
fs.rafs.forget(&ctx, file.ino, 1);
file.magic -= 0x4fdf_ae34_9d9a_03cd;
}

251
clib/src/fs.rs Normal file

@ -0,0 +1,251 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! Provide structures and functions to open/close/access a filesystem instance.
use std::ffi::CStr;
use std::os::raw::c_char;
use std::path::Path;
use std::ptr::{null, null_mut};
use std::str::FromStr;
use std::sync::Arc;
use nydus_api::ConfigV2;
use nydus_rafs::fs::Rafs;
use crate::{cstr_to_str, set_errno, Inode};
/// Magic number for Nydus filesystem handle.
pub const NYDUS_FS_HANDLE_MAGIC: u64 = 0xedfc_3818_af03_5187;
/// Value representing an invalid Nydus filesystem handle.
pub const NYDUS_INVALID_FS_HANDLE: usize = 0;
/// Handle representing a Nydus filesystem object.
pub type NydusFsHandle = usize;
#[repr(C)]
pub(crate) struct FileSystemState {
magic: u64,
pub(crate) root_ino: Inode,
pub(crate) rafs: Rafs,
}
impl FileSystemState {
/// Caller needs to ensure the lifetime of returned reference.
pub(crate) unsafe fn from_handle(hdl: NydusFsHandle) -> &'static mut Self {
let fs = &mut *(hdl as *const FileSystemState as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
fs
}
/// Caller needs to ensure the lifetime of returned reference.
pub(crate) unsafe fn try_from_handle(hdl: NydusFsHandle) -> Result<&'static mut Self, i32> {
if hdl == null::<FileSystemState>() as usize {
return Err(libc::EINVAL);
}
let fs = &mut *(hdl as *const FileSystemState as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
Ok(fs)
}
}
fn fs_error_einval() -> NydusFsHandle {
set_errno(libc::EINVAL);
null_mut::<FileSystemState>() as NydusFsHandle
}
fn default_localfs_rafs_config(dir: &str) -> String {
format!(
r#"
version = 2
id = "my_id"
[backend]
type = "localfs"
[backend.localfs]
dir = "{}"
[cache]
type = "dummycache"
[rafs]
"#,
dir
)
}
fn do_nydus_open_rafs(bootstrap: &str, config: &str) -> NydusFsHandle {
let cfg = match ConfigV2::from_str(config) {
Ok(v) => v,
Err(e) => {
warn!("failed to parse configuration info: {}", e);
return fs_error_einval();
}
};
let cfg = Arc::new(cfg);
let (mut rafs, reader) = match Rafs::new(&cfg, &cfg.id, Path::new(bootstrap)) {
Err(e) => {
warn!(
"failed to open filesystem from bootstrap {}, {}",
bootstrap, e
);
return fs_error_einval();
}
Ok(v) => v,
};
if let Err(e) = rafs.import(reader, None) {
warn!("failed to import RAFS filesystem, {}", e);
return fs_error_einval();
}
let root_ino = rafs.metadata().root_inode;
let fs = Box::new(FileSystemState {
magic: NYDUS_FS_HANDLE_MAGIC,
root_ino,
rafs,
});
Box::into_raw(fs) as NydusFsHandle
}
/// Open a RAFS filesystem and return a handle to the filesystem object.
///
/// The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
/// it will cause memory leak.
///
/// # Safety
/// Caller needs to ensure `bootstrap` and `config` are valid, otherwise it may cause memory access
/// violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_open_rafs(
bootstrap: *const c_char,
config: *const c_char,
) -> NydusFsHandle {
if bootstrap.is_null() || config.is_null() {
return fs_error_einval();
}
let bootstrap = cstr_to_str!(bootstrap, null_mut::<FileSystemState>() as NydusFsHandle);
let config = cstr_to_str!(config, null_mut::<FileSystemState>() as NydusFsHandle);
do_nydus_open_rafs(bootstrap, config)
}
/// Open a RAFS filesystem with default configuration and return a handle to the filesystem object.
///
/// The returned filesystem handle should be freed by calling `nydus_close_rafs()`, otherwise
/// it will cause memory leak.
///
/// # Safety
/// Caller needs to ensure `bootstrap` and `dir_path` are valid, otherwise it may cause memory
/// access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_open_rafs_default(
bootstrap: *const c_char,
dir_path: *const c_char,
) -> NydusFsHandle {
if bootstrap.is_null() || dir_path.is_null() {
return fs_error_einval();
}
let bootstrap = cstr_to_str!(bootstrap, null_mut::<FileSystemState>() as NydusFsHandle);
let dir_path = cstr_to_str!(dir_path, null_mut::<FileSystemState>() as NydusFsHandle);
let p_tmp;
let mut path = Path::new(bootstrap);
if path.parent().is_none() {
p_tmp = Path::new(dir_path).join(bootstrap);
path = &p_tmp
}
let bootstrap = match path.to_str() {
Some(v) => v,
None => {
warn!("invalid bootstrap path '{}'", bootstrap);
return fs_error_einval();
}
};
let config = default_localfs_rafs_config(dir_path);
do_nydus_open_rafs(bootstrap, &config)
}
/// Close the RAFS filesystem returned by `nydus_open_rafs()` and friends.
///
/// All `NydusFileHandle` objects created from the `NydusFsHandle` should be freed before calling
/// `nydus_close_rafs()`, otherwise it may cause panic.
///
/// # Safety
/// Caller needs to ensure `handle` is valid, otherwise it may cause memory access violation.
#[no_mangle]
pub unsafe extern "C" fn nydus_close_rafs(handle: NydusFsHandle) {
let mut fs = Box::from_raw(handle as *mut FileSystemState);
assert_eq!(fs.magic, NYDUS_FS_HANDLE_MAGIC);
fs.magic -= 0x4fdf_03cd_ae34_9d9a;
fs.rafs.destroy().unwrap();
}
#[cfg(test)]
mod tests {
use super::*;
use std::ffi::CString;
use std::io::Error;
use std::path::PathBuf;
use std::ptr::null;
pub(crate) fn open_file_system() -> NydusFsHandle {
let ret = unsafe { nydus_open_rafs(null(), null()) };
assert_eq!(ret, NYDUS_INVALID_FS_HANDLE);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::EINVAL)
);
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let bootstrap = PathBuf::from(root_dir)
.join("../tests/texture/repeatable/sha256-nocompress-repeatable");
let bootstrap = bootstrap.to_str().unwrap();
let bootstrap = CString::new(bootstrap).unwrap();
let blob_dir = PathBuf::from(root_dir).join("../tests/texture/repeatable/blobs");
let config = format!(
r#"
version = 2
id = "my_id"
[backend]
type = "localfs"
[backend.localfs]
dir = "{}"
[cache]
type = "dummycache"
[rafs]
"#,
blob_dir.display()
);
let config = CString::new(config).unwrap();
let fs = unsafe {
nydus_open_rafs(
bootstrap.as_ptr() as *const c_char,
config.as_ptr() as *const c_char,
)
};
assert_ne!(fs, NYDUS_INVALID_FS_HANDLE);
fs
}
#[test]
fn test_open_rafs() {
let fs = open_file_system();
unsafe { nydus_close_rafs(fs) };
}
#[test]
fn test_open_rafs_default() {
let root_dir = &std::env::var("CARGO_MANIFEST_DIR").expect("$CARGO_MANIFEST_DIR");
let bootstrap = PathBuf::from(root_dir)
.join("../tests/texture/repeatable/sha256-nocompress-repeatable");
let bootstrap = bootstrap.to_str().unwrap();
let bootstrap = CString::new(bootstrap).unwrap();
let blob_dir = PathBuf::from(root_dir).join("../tests/texture/repeatable/blobs");
let blob_dir = blob_dir.to_str().unwrap();
let fs = unsafe {
nydus_open_rafs_default(bootstrap.as_ptr(), blob_dir.as_ptr() as *const c_char)
};
unsafe { nydus_close_rafs(fs) };
}
}

80
clib/src/lib.rs Normal file

@ -0,0 +1,80 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
//! SDK C wrappers to access `nydus-rafs` and `nydus-storage` functionalities.
//!
//! # Generate Header File
//! Please use cbindgen to generate `nydus.h` header file from rust source code by:
//! ```
//! cargo install cbindgen
//! cbindgen -l c -v -o include/nydus.h
//! ```
//!
//! # Run C Test
//! ```
//! gcc -o nydus -L ../../target/debug/ -lnydus_clib nydus_rafs.c
//! ```
#[macro_use]
extern crate log;
extern crate core;
pub use file::*;
pub use fs::*;
mod file;
mod fs;
/// Type for RAFS filesystem inode number.
pub type Inode = u64;
/// Helper to set libc::errno
#[cfg(target_os = "linux")]
fn set_errno(errno: i32) {
unsafe { *libc::__errno_location() = errno };
}
/// Helper to set libc::errno
#[cfg(target_os = "macos")]
fn set_errno(errno: i32) {
unsafe { *libc::__error() = errno };
}
/// Macro to convert C `char *` into rust `&str`.
#[macro_export]
macro_rules! cstr_to_str {
($var: ident, $ret: expr) => {{
let s = CStr::from_ptr($var);
match s.to_str() {
Ok(v) => v,
Err(_e) => {
set_errno(libc::EINVAL);
return $ret;
}
}
}};
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Error;
#[test]
fn test_set_errno() {
assert_eq!(Error::raw_os_error(&Error::last_os_error()), Some(0));
set_errno(libc::EINVAL);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::EINVAL)
);
set_errno(libc::ENOSYS);
assert_eq!(
Error::raw_os_error(&Error::last_os_error()),
Some(libc::ENOSYS)
);
set_errno(0);
assert_eq!(Error::raw_os_error(&Error::last_os_error()), Some(0));
}
}


@ -1 +0,0 @@
bin/


@ -1,27 +0,0 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= amd64
GOPROXY ?= https://goproxy.io
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
endif
.PHONY: all build release test clean
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -v -o bin/ctr-remote ./cmd/main.go
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/ctr-remote ./cmd/main.go
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
clean:
rm -f bin/*


@ -1,65 +0,0 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"fmt"
"os"
"github.com/containerd/containerd/cmd/ctr/app"
"github.com/containerd/containerd/pkg/seed"
"github.com/dragonflyoss/image-service/contrib/ctr-remote/commands"
"github.com/urfave/cli"
)
func init() {
seed.WithTimeAndRand()
}
func main() {
customCommands := []cli.Command{commands.RpullCommand}
app := app.New()
app.Description = "NOTE: Enhanced for nydus-snapshotter\n" + app.Description
for i := range app.Commands {
if app.Commands[i].Name == "images" {
sc := map[string]cli.Command{}
for _, subcmd := range customCommands {
sc[subcmd.Name] = subcmd
}
// First, replace duplicated subcommands
for j := range app.Commands[i].Subcommands {
for name, subcmd := range sc {
if name == app.Commands[i].Subcommands[j].Name {
app.Commands[i].Subcommands[j] = subcmd
delete(sc, name)
}
}
}
// Next, append all new sub commands
for _, subcmd := range sc {
app.Commands[i].Subcommands = append(app.Commands[i].Subcommands, subcmd)
}
break
}
}
if err := app.Run(os.Args); err != nil {
fmt.Fprintf(os.Stderr, "ctr-remote: %v\n", err)
os.Exit(1)
}
}


@ -1,103 +0,0 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package commands
import (
"context"
"fmt"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cmd/ctr/commands"
"github.com/containerd/containerd/cmd/ctr/commands/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/log"
"github.com/containerd/nydus-snapshotter/pkg/label"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli"
)
const (
remoteSnapshotterName = "nydus"
)
var RpullCommand = cli.Command{
Name: "rpull",
Usage: "pull an image from a registry leveraging nydus-snapshotter",
ArgsUsage: "[flags] <ref>",
Description: `Fetch and prepare an image for use in containerd leveraging nydus-snapshotter.
After pulling an image, it should be ready to use the same reference in a run command.`,
Flags: append(commands.RegistryFlags, commands.LabelFlag),
Action: func(context *cli.Context) error {
var (
ref = context.Args().First()
config = &rPullConfig{}
)
if ref == "" {
return fmt.Errorf("please provide an image reference to pull")
}
client, ctx, cancel, err := commands.NewClient(context)
if err != nil {
return err
}
defer cancel()
ctx, done, err := client.WithLease(ctx)
if err != nil {
return err
}
defer done(ctx)
fc, err := content.NewFetchConfig(ctx, context)
if err != nil {
return err
}
config.FetchConfig = fc
return pull(ctx, client, ref, config)
},
}
type rPullConfig struct {
*content.FetchConfig
}
func pull(ctx context.Context, client *containerd.Client, ref string, config *rPullConfig) error {
pCtx := ctx
h := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
if desc.MediaType != images.MediaTypeDockerSchema1Manifest {
fmt.Printf("fetching %v... %v\n", desc.Digest.String()[:15], desc.MediaType)
}
return nil, nil
})
log.G(pCtx).WithField("image", ref).Debug("fetching")
configLabels := commands.LabelArgs(config.Labels)
if _, err := client.Pull(pCtx, ref, []containerd.RemoteOpt{
containerd.WithPullLabels(configLabels),
containerd.WithResolver(config.Resolver),
containerd.WithImageHandler(h),
containerd.WithSchema1Conversion,
containerd.WithPullUnpack,
containerd.WithPullSnapshotter(remoteSnapshotterName),
containerd.WithImageHandlerWrapper(label.AppendLabelsHandlerWrapper(ref)),
}...); err != nil {
return err
}
return nil
}


@ -1,63 +0,0 @@
module github.com/dragonflyoss/image-service/contrib/ctr-remote
go 1.18
require (
github.com/containerd/containerd v1.6.6
github.com/containerd/nydus-snapshotter v0.3.0-alpha.1
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799
github.com/urfave/cli v1.22.5
)
require (
github.com/Microsoft/go-winio v0.5.1 // indirect
github.com/Microsoft/hcsshim v0.9.3 // indirect
github.com/cilium/ebpf v0.7.0 // indirect
github.com/containerd/cgroups v1.0.3 // indirect
github.com/containerd/console v1.0.3 // indirect
github.com/containerd/continuity v0.2.2 // indirect
github.com/containerd/fifo v1.0.0 // indirect
github.com/containerd/go-cni v1.1.6 // indirect
github.com/containerd/go-runc v1.0.0 // indirect
github.com/containerd/ttrpc v1.1.0 // indirect
github.com/containerd/typeurl v1.0.2 // indirect
github.com/containernetworking/cni v1.1.1 // indirect
github.com/containernetworking/plugins v1.1.1 // indirect
github.com/coreos/go-systemd/v22 v22.3.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.0 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/godbus/dbus/v5 v5.0.6 // indirect
github.com/gogo/googleapis v1.4.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/uuid v1.2.0 // indirect
github.com/klauspost/compress v1.15.1 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.5.0 // indirect
github.com/moby/sys/signal v0.6.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/runc v1.1.2 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 // indirect
github.com/opencontainers/selinux v1.10.1 // indirect
github.com/pelletier/go-toml v1.9.3 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/russross/blackfriday/v2 v2.0.1 // indirect
github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
go.opencensus.io v0.23.0 // indirect
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad // indirect
golang.org/x/text v0.3.7 // indirect
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa // indirect
google.golang.org/grpc v1.43.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
)
replace (
github.com/opencontainers/image-spec => github.com/opencontainers/image-spec v1.0.2-0.20211117181255-693428a734f5
github.com/opencontainers/runc => github.com/opencontainers/runc v1.1.2
)

File diff suppressed because it is too large


@ -1 +0,0 @@
/bin


@ -1,21 +0,0 @@
# https://golangci-lint.run/usage/configuration#config-file
linters:
enable:
- staticcheck
- unconvert
- gofmt
- goimports
- revive
- ineffassign
- vet
- unused
- misspell
disable:
- errcheck
run:
deadline: 4m
skip-dirs:
- misc


@ -1,13 +0,0 @@
FROM golang:1.18
ARG GOPROXY="https://goproxy.cn,direct"
RUN mkdir -p /app
WORKDIR /app
COPY . ./
RUN CGO_ENABLED=0 GOOS=linux go build -v .
FROM alpine:3.13.6
RUN mkdir -p /plugin; mkdir -p /nydus
ARG NYDUSD_PATH=./nydusd
COPY --from=0 /app/nydus_graphdriver /plugin/nydus_graphdriver
COPY ${NYDUSD_PATH} /nydus
ENTRYPOINT [ "/plugin/nydus_graphdriver" ]


@ -1,27 +0,0 @@
GIT_COMMIT := $(shell git rev-list -1 HEAD)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= amd64
GOPROXY ?= https://goproxy.io
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
endif
.PHONY: all build release test clean
all: build
build:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -v -o bin/nydus_graphdriver .
release:
@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus_graphdriver .
test: build
go vet $(PACKAGES)
golangci-lint run
go test -v -cover ${PACKAGES}
clean:
rm -f bin/*


@ -1,67 +1,3 @@
# Docker Nydus Graph Driver
Docker supports remote graph drivers as plugins. With the nydus graph driver, you can start a container from a previously converted nydus image. The initial intent of building the graph driver was to give users a quick way to experience the speed of starting a container from a nydus image, so it is **not ready for production usage**. If Docker is important to your use case, a PR telling us your story is welcome; we might enhance this in the future.
Moved to [docker-nydus-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver).
Chinese: [使用 Docker 启动容器](../../docs/chinese_docker_graph_driver_guide.md)
## Architecture
---
![Docker Info](../../docs/images/docker_graphdriver_arch.png)
## Procedures
### 1 Configure Nydus
Put your nydus configuration at `/var/lib/nydus/config.json`; the nydus remote backend is also specified there.
### 2 Install Graph Driver Plugin
#### Install from DockerHub
```
$ docker plugin install gechangwei/docker-nydus-graphdriver:0.2.0
```
### 3 Enable the Graph Driver
Before using the nydus graph driver to start containers, the plugin must be enabled.
```
$ sudo docker plugin enable gechangwei/docker-nydus-graphdriver:0.2.0
```
### 4 Switch to Docker Graph Driver
By default, Docker manages all images with the built-in `overlay` graph driver. It can be switched to another driver, such as the nydus graph driver, by specifying it in the daemon configuration file.
```
{
"experimental": true,
"storage-driver": "gechangwei/docker-nydus-graphdriver:0.2.0"
}
```
### 5 Restart Docker Service
```
$ sudo systemctl restart docker
```
## Verification
Execute `docker info` to verify that the above steps are all done and the nydus graph driver works normally.
![Docker Info](../../docs/images/docker_info_storage_driver.png)
## Start Container
Now, just `run` containers or `pull` images as you are used to.
## Limitation
1. Docker version >= 20.10.2 is required. Lower versions probably work, but have not been tested yet.
2. When converting images through `nydusify`, the backend must be specified as `oss`.
3. The nydus graph driver is not compatible with classic OCI images, so you have to switch back to the built-in graph driver to use those images.


@ -1,43 +0,0 @@
{
"description": "nydus image service plugin for Docker",
"documentation": "https://docs.docker.com/engine/extend/plugins/",
"entrypoint": [
"/plugin/nydus_graphdriver"
],
"network": {
"type": "host"
},
"interface": {
"types": [
"docker.graphdriver/1.0"
],
"socket": "plugin.sock"
},
"linux": {
"capabilities": [
"CAP_SYS_ADMIN",
"CAP_SYS_RESOURCE"
],
"Devices": [
{
"Path": "/dev/fuse"
}
]
},
"PropagatedMount": "/home",
"Mounts": [
{
"Name": "NYDUS_CONFIG",
"Source": "/var/lib/nydus/config.json",
"Destination": "/nydus/config.json",
"Type": "none",
"Options": [
"bind",
"ro"
],
"Settable": [
"source"
]
}
]
}


@ -1,44 +0,0 @@
module github.com/dragonflyoss/image-service/contrib/nydus_graphdriver
go 1.18
require (
github.com/docker/docker v20.10.3-0.20211206061157-934f955e3d62+incompatible
github.com/docker/go-plugins-helpers v0.0.0-20211224144127-6eecb7beb651
github.com/moby/sys/mountinfo v0.5.0
github.com/opencontainers/selinux v1.10.1
github.com/pkg/errors v0.9.1
github.com/sirupsen/logrus v1.8.1
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad
)
require (
github.com/Microsoft/go-winio v0.5.1 // indirect
github.com/containerd/containerd v1.6.6 // indirect
github.com/containerd/continuity v0.2.2 // indirect
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/klauspost/compress v1.11.13 // indirect
github.com/moby/sys/mount v0.3.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
github.com/opencontainers/runc v1.1.2 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 // indirect
github.com/vbatts/tar-split v0.11.1 // indirect
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f // indirect
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa // indirect
google.golang.org/grpc v1.43.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
)
replace (
github.com/containerd/go-runc => github.com/containerd/go-runc v1.0.0
github.com/docker/distribution => github.com/docker/distribution v2.8.1+incompatible
github.com/opencontainers/image-spec => github.com/opencontainers/image-spec v1.0.2
github.com/opencontainers/runc => github.com/opencontainers/runc v1.1.2
)


@ -1,330 +0,0 @@
bazil.org/fuse v0.0.0-20200407214033-5883e5a4b512/go.mod h1:FbcW6z/2VytnFDhZfumh8Ss8zxHE6qpMP5sHTRe0EaM=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Microsoft/go-winio v0.5.1 h1:aPJp2QD7OOrhO5tQXqQoGSJc+DjDtWTGLOmNyAm6FgY=
github.com/Microsoft/go-winio v0.5.1/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/hcsshim v0.9.3 h1:k371PzBuRrz2b+ebGuI2nVgVhgsVX60jMfSw80NECxo=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/containerd v1.6.6 h1:xJNPhbrmz8xAMDNoVjHy9YHtWwEQNS+CDkcIRh7t8Y0=
github.com/containerd/containerd v1.6.6/go.mod h1:ZoP1geJldzCVY3Tonoz7b1IXk8rIX0Nltt5QE4OMNk0=
github.com/containerd/continuity v0.2.2 h1:QSqfxcn8c+12slxwu00AtzXrsami0MJb/MQs9lOLHLA=
github.com/containerd/continuity v0.2.2/go.mod h1:pWygW9u7LtS1o4N/Tn0FoCFDIXZ7rxcMX7HX1Dmibvk=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf h1:iW4rZ826su+pqaw19uhpSCzhj44qo35pNgKFGqzDKkU=
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68=
github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v20.10.3-0.20211206061157-934f955e3d62+incompatible h1:zOc/xrISG6HmrZoMs10Jrzeqbm4Zfop2CmeDoBRynfI=
github.com/docker/docker v20.10.3-0.20211206061157-934f955e3d62+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-plugins-helpers v0.0.0-20211224144127-6eecb7beb651 h1:YcvzLmdrP/b8kLAGJ8GT7bdncgCAiWxJZIlt84D+RJg=
github.com/docker/go-plugins-helpers v0.0.0-20211224144127-6eecb7beb651/go.mod h1:LFyLie6XcDbyKGeVK6bHe+9aJTYCxWLBg5IrJZOaXKA=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.11.13 h1:eSvu8Tmq6j2psUJqJrLcWH6K3w5Dwc+qipbaA6eVEN4=
github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/sys/mount v0.3.0 h1:bXZYMmq7DBQPwHRxH/MG+u9+XF90ZOwoXpHTOznMGp0=
github.com/moby/sys/mount v0.3.0/go.mod h1:U2Z3ur2rXPFrFmy4q6WMwWrBOAQGYtYTRVM8BIvzbwk=
github.com/moby/sys/mountinfo v0.5.0 h1:2Ks8/r6lopsxWi9m58nlwjaeSzUX9iiL1vj5qB/9ObI=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/symlink v0.2.0 h1:tk1rOM+Ljp0nFmfOIBtlV3rTDlWOwFRhjEeAhZB0nZc=
github.com/moby/sys/symlink v0.2.0/go.mod h1:7uZVF2dqJjG/NsClqul95CqKOBRQyYSNnJ6BMgR/gFs=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v1.1.2 h1:2VSZwLx5k/BfsBxMMipG/LYUnmqOD/BPkIVgQUcTlLw=
github.com/opencontainers/runc v1.1.2/go.mod h1:Tj1hFw6eFWp/o33uxGf5yF2BX5yz2Z6iptFpuvbbKqc=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 h1:3snG66yBm59tKhhSPQrQ/0bCrv1LQbKt40LnUPiUxdc=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/opencontainers/selinux v1.10.1 h1:09LIPVRP3uuZGQvgR+SgMSNBd1Eb3vlRbGqQpoHsF8w=
github.com/opencontainers/selinux v1.10.1/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/seccomp/libseccomp-golang v0.9.2-0.20210429002308-3879420cc921/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c/go.mod h1:hzIxponao9Kjc7aWznkXaL4U4TWaDSs8zcsY4Ka08nM=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vbatts/tar-split v0.11.1 h1:0Odu65rhcZ3JZaPHxl7tCI3V/C/Q9Zf82UFravl02dE=
github.com/vbatts/tar-split v0.11.1/go.mod h1:LEuURwDEiWjRjwu46yU3KVGuUdVv/dcnpcEPSzR8z6g=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f h1:hEYJvxw1lSnWIl8X9ofsYMklzaDs90JI2az5YMd4fPM=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad h1:ntjMns5wyP/fN65tdBD4g8J5w8n015+iIIs9rtjXkY0=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa h1:I0YcKz0I7OAhddo7ya8kMnvprhcWM045PmkBdMO9zN0=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.43.0 h1:Eeu7bZtDZ2DpRCsLhUlcrLnvYaMK1Gz86a+hMVvELmM=
google.golang.org/grpc v1.43.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=


@ -1,16 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package main
import (
"github.com/docker/go-plugins-helpers/graphdriver/shim"
"github.com/dragonflyoss/image-service/contrib/nydus_graphdriver/plugin/nydus"
)
func main() {
handler := shim.NewHandlerFromGraphDriver(nydus.Init)
handler.ServeUnix("plugin", 0)
}


@ -1,133 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package nydus
import (
"context"
"encoding/json"
"io/ioutil"
"net"
"net/http"
"os"
"os/exec"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
NudusdConfigPath = "/nydus/config.json"
NydusdBin = "/nydus/nydusd"
NydusdSocket = "/nydus/api.sock"
)
type Nydus struct {
command *exec.Cmd
}
func New() *Nydus {
return &Nydus{}
}
type DaemonInfo struct {
ID string `json:"id"`
State string `json:"state"`
}
type errorMessage struct {
Code string `json:"code"`
Message string `json:"message"`
}
func getDaemonStatus(socket string) error {
transport := http.Transport{
MaxIdleConns: 10,
IdleConnTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
dialer := &net.Dialer{
Timeout: 5 * time.Second,
KeepAlive: 5 * time.Second,
}
return dialer.DialContext(ctx, "unix", socket)
},
}
client := http.Client{Transport: &transport, Timeout: 30 * time.Second}
resp, err := client.Get("http://unix/api/v1/daemon")
if err != nil {
return err
}
defer resp.Body.Close()
b, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
if resp.StatusCode >= 400 {
var message errorMessage
json.Unmarshal(b, &message)
return errors.Errorf("request error, status = %d, message %s", resp.StatusCode, message)
}
var info DaemonInfo
if err = json.Unmarshal(b, &info); err != nil {
return err
}
if info.State != "RUNNING" {
return errors.Errorf("nydus is not ready. current stat %s", info.State)
}
return nil
}
func (nydus *Nydus) Mount(bootstrap, mountpoint string) error {
args := []string{
"--apisock", NydusdSocket,
"--log-level", "info",
"--thread-num", "4",
"--bootstrap", bootstrap,
"--config", NudusdConfigPath,
"--mountpoint", mountpoint,
}
cmd := exec.Command(NydusdBin, args...)
logrus.Infof("Start nydusd. %s", cmd.String())
// Redirect logs from nydusd daemon to a proper place.
cmd.Stderr = os.Stderr
cmd.Stdout = os.Stdout
if err := cmd.Start(); err != nil {
return errors.Wrapf(err, "start nydusd")
}
nydus.command = cmd
ready := false
// Return an error if nydusd does not reach its normal state before the timeout elapses.
for i := 0; i < 30; i++ {
err := getDaemonStatus(NydusdSocket)
if err == nil {
ready = true
break
} else {
logrus.Error(err)
time.Sleep(100 * time.Millisecond)
}
}
if !ready {
logrus.Errorf("It take too long until nydusd gets RUNNING")
cmd.Process.Kill()
cmd.Wait()
}
return nil
}


@ -1,496 +0,0 @@
// Copyright 2020 Ant Group. All rights reserved.
// Copyright (C) 2020 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
package nydus
import (
"context"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"strings"
"github.com/pkg/errors"
"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/containerfs"
"github.com/docker/docker/pkg/directory"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/system"
"github.com/moby/sys/mountinfo"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
)
// With nydus image layer, there won't be plenty of layers that need to be stacked.
const (
diffDirName = "diff"
workDirName = "work"
mergedDirName = "merged"
lowerFile = "lower"
nydusDirName = "nydus"
nydusMetaRelapath = "image/image.boot"
parentFile = "parent"
)
var backingFs = "<unknown>"
func isFileExisted(file string) (bool, error) {
if _, err := os.Stat(file); err == nil {
return true, nil
} else if os.IsNotExist(err) {
return false, nil
} else {
return false, err
}
}
// Nydus graphdriver contains information about the home directory and the list of active
// mounts that are created using this driver.
type Driver struct {
home string
nydus *Nydus
NydusMountpoint string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
ctr *graphdriver.RefCounter
}
func (d *Driver) dir(id string) string {
return path.Join(d.home, id)
}
func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (graphdriver.Driver, error) {
os.MkdirAll(home, os.ModePerm)
fsMagic, err := graphdriver.GetFSMagic(home)
if err != nil {
return nil, err
}
if fsName, ok := graphdriver.FsNames[fsMagic]; ok {
backingFs = fsName
}
// check if they are running over btrfs, aufs, zfs, overlay, or ecryptfs
switch fsMagic {
case graphdriver.FsMagicBtrfs, graphdriver.FsMagicAufs, graphdriver.FsMagicZfs, graphdriver.FsMagicOverlay, graphdriver.FsMagicEcryptfs:
logrus.Errorf("'overlay2' is not supported over %s", backingFs)
return nil, graphdriver.ErrIncompatibleFS
}
return &Driver{
home: home,
uidMaps: uidMaps,
gidMaps: gidMaps,
ctr: graphdriver.NewRefCounter(graphdriver.NewFsChecker(graphdriver.FsMagicOverlay))}, nil
}
// Status returns current driver information in a two dimensional string array.
// Output contains "Backing Filesystem" used in this implementation.
func (d *Driver) Status() [][2]string {
return [][2]string{
{"Backing Filesystem", backingFs},
// TODO: Add nydusd working status and version here.
{"Nydusd", "TBD"},
}
}
func (d *Driver) String() string {
return "Nydus graph driver"
}
// GetMetadata returns meta data about the overlay driver such as
// LowerDir, UpperDir, WorkDir and MergeDir used to store data.
func (d *Driver) GetMetadata(id string) (map[string]string, error) {
dir := d.dir(id)
if _, err := os.Stat(dir); err != nil {
return nil, err
}
metadata := map[string]string{
"WorkDir": path.Join(dir, "work"),
"MergedDir": path.Join(dir, "merged"),
"UpperDir": path.Join(dir, "diff"),
}
lowerDirs, err := d.getLowerDirs(id)
if err != nil {
return nil, err
}
if len(lowerDirs) > 0 {
metadata["LowerDir"] = strings.Join(lowerDirs, ":")
}
return metadata, nil
}
// Cleanup any state created by overlay which should be cleaned when daemon
// is being shutdown. For now, we just have to unmount the bind mounted
// we had created.
func (d *Driver) Cleanup() error {
if d.nydus != nil {
d.nydus.command.Process.Signal(os.Interrupt)
d.nydus.command.Wait()
}
return nil
}
// CreateReadWrite creates a layer that is writable for use as a container
// file system.
func (d *Driver) CreateReadWrite(id, parent string, opts *graphdriver.CreateOpts) error {
logrus.Infof("Create read write - id %s parent %s", id, parent)
return d.Create(id, parent, opts)
}
// Create is used to create the upper, lower, and merged directories required for
// overlay fs for a given id.
// The parent filesystem is used to configure these directories for the overlay.
func (d *Driver) Create(id, parent string, opts *graphdriver.CreateOpts) (retErr error) {
logrus.Infof("Create. id %s, parent %s", id, parent)
dir := d.dir(id)
rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
return err
}
root := idtools.Identity{UID: rootUID, GID: rootGID}
if err := idtools.MkdirAllAndChown(path.Dir(dir), 0700, root); err != nil {
return err
}
if err := idtools.MkdirAndChown(dir, 0700, root); err != nil {
return err
}
defer func() {
// Clean up on failure
if retErr != nil {
os.RemoveAll(dir)
}
}()
if err := idtools.MkdirAndChown(path.Join(dir, diffDirName), 0755, root); err != nil {
return err
}
// if no parent directory, done
if parent == "" {
return nil
}
if err := idtools.MkdirAndChown(path.Join(dir, mergedDirName), 0700, root); err != nil {
return err
}
if err := idtools.MkdirAndChown(path.Join(dir, workDirName), 0700, root); err != nil {
return err
}
if err := ioutil.WriteFile(path.Join(dir, parentFile), []byte(parent), 0666); err != nil {
return err
}
if parentLowers, err := d.getLowerDirs(parent); err == nil {
lowers := strings.Join(append(parentLowers, parent), ":")
lowerFilePath := path.Join(d.dir(id), lowerFile)
if len(lowers) > 0 {
if err := ioutil.WriteFile(lowerFilePath, []byte(lowers), 0666); err != nil {
return err
}
}
} else {
return err
}
return nil
}
func (d *Driver) getLowerDirs(id string) ([]string, error) {
var lowersArray []string
lowers, err := ioutil.ReadFile(path.Join(d.dir(id), lowerFile))
if err == nil {
lowersArray = strings.Split(string(lowers), ":")
} else if !os.IsNotExist(err) {
return nil, err
}
return lowersArray, nil
}
// Remove cleans the directories that are created for this id.
func (d *Driver) Remove(id string) error {
logrus.Infof("Remove %s", id)
dir := d.dir(id)
if err := system.EnsureRemoveAll(dir); err != nil && !os.IsNotExist(err) {
return errors.Errorf("Can't remove %s", dir)
}
return nil
}
// Get creates and mounts the required file system for the given id and returns the mount path.
// The `id` is mount-id.
func (d *Driver) Get(id, mountLabel string) (fs containerfs.ContainerFS, retErr error) {
logrus.Infof("Mount layer - id %s, label %s", id, mountLabel)
dir := d.dir(id)
if _, err := os.Stat(dir); err != nil {
return nil, err
}
var lowers []string
lowers, retErr = d.getLowerDirs(id)
if retErr != nil {
return
}
newLowers := make([]string, 0)
for _, l := range lowers {
if l == id {
newLowers = append(newLowers, id)
break
}
// When a nydus layer is encountered, start the nydusd daemon so that rafs is mounted
// as an overlay lower dir for later use.
if isNydus, err := d.isNydusLayer(l); isNydus {
if mounted, err := d.isNydusMounted(l); !mounted {
bootstrapPath := path.Join(d.dir(l), diffDirName, nydusMetaRelapath)
absMountpoint := path.Join(d.dir(l), nydusDirName)
rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
return nil, err
}
root := idtools.Identity{UID: rootUID, GID: rootGID}
if err := idtools.MkdirAllAndChown(absMountpoint, 0700, root); err != nil {
return nil, errors.Wrap(err, "failed in creating nydus mountpoint")
}
nydus := New()
// Keep it, so we can wait for process termination.
d.nydus = nydus
if e := nydus.Mount(bootstrapPath, absMountpoint); e != nil {
return nil, e
}
} else if err != nil {
return nil, err
}
} else if err != nil {
return nil, err
}
// Relative path
nydusRelaMountpoint := path.Join(l, nydusDirName)
if _, err := os.Stat(path.Join(d.home, nydusRelaMountpoint)); err == nil {
newLowers = append(newLowers, nydusRelaMountpoint)
} else {
diffDir := path.Join(l, "diff")
if _, err := os.Stat(diffDir); err == nil {
newLowers = append(newLowers, diffDir)
}
}
}
mergedDir := path.Join(dir, mergedDirName)
if count := d.ctr.Increment(mergedDir); count > 1 {
return containerfs.NewLocalContainerFS(mergedDir), nil
}
defer func() {
if retErr != nil {
if c := d.ctr.Decrement(mergedDir); c <= 0 {
if err := unix.Unmount(mergedDir, 0); err != nil {
logrus.Warnf("unmount error %v: %v", mergedDir, err)
}
if err := unix.Rmdir(mergedDir); err != nil && !os.IsNotExist(err) {
logrus.Warnf("failed to remove %s: %v", id, err)
}
}
}
}()
os.Chdir(path.Join(d.home))
upperDir := path.Join(id, diffDirName)
workDir := path.Join(id, workDirName)
opts := "lowerdir=" + strings.Join(newLowers, ":") + ",upperdir=" + upperDir + ",workdir=" + workDir
mountData := label.FormatMountLabel(opts, mountLabel)
mount := unix.Mount
mountTarget := mergedDir
logrus.Infof("mount options %s, target %s", opts, mountTarget)
rootUID, rootGID, err := idtools.GetRootUIDGID(d.uidMaps, d.gidMaps)
if err != nil {
return nil, err
}
if err := idtools.MkdirAndChown(mergedDir, 0700, idtools.Identity{UID: rootUID, GID: rootGID}); err != nil {
return nil, err
}
pageSize := unix.Getpagesize()
if len(mountData) > pageSize {
return nil, fmt.Errorf("cannot mount layer, mount label too large %d", len(mountData))
}
if err := mount("overlay", mountTarget, "overlay", 0, mountData); err != nil {
return nil, fmt.Errorf("error creating overlay mount to %s: %v", mergedDir, err)
}
// chown "workdir/work" to the remapped root UID/GID. Overlay fs inside a
// user namespace requires this to move a directory from lower to upper.
if err := os.Chown(path.Join(workDir, workDirName), rootUID, rootGID); err != nil {
return nil, err
}
return containerfs.NewLocalContainerFS(mergedDir), nil
}
func (d *Driver) isNydusLayer(id string) (bool, error) {
dir := d.dir(id)
bootstrapPath := path.Join(dir, diffDirName, nydusMetaRelapath)
return isFileExisted(bootstrapPath)
}
func (d *Driver) isNydusMounted(id string) (bool, error) {
if isNydus, err := d.isNydusLayer(id); !isNydus {
return isNydus, err
}
mp := path.Join(d.dir(id), nydusDirName)
if exited, err := isFileExisted(mp); !exited {
return exited, err
}
if mounted, err := mountinfo.Mounted(mp); !mounted {
return mounted, err
}
return true, nil
}
// Put unmounts the mount path created for the give id.
func (d *Driver) Put(id string) error {
if mounted, _ := d.isNydusMounted(id); mounted {
if d.nydus != nil {
// Signaling nydusd causes it to unmount itself before terminating,
// so we don't have to invoke os/umount here.
// Note: this only unmounts the nydusd FUSE mount point, not the overlay merged dir.
d.nydus.command.Process.Signal(os.Interrupt)
d.nydus.command.Wait()
}
}
dir := d.dir(id)
mountpoint := path.Join(dir, mergedDirName)
if count := d.ctr.Decrement(mountpoint); count > 0 {
return nil
}
if err := unix.Unmount(mountpoint, unix.MNT_DETACH); err != nil {
return errors.Wrapf(err, "failed to unmount from %s", mountpoint)
}
if err := unix.Rmdir(mountpoint); err != nil && !os.IsNotExist(err) {
return errors.Wrapf(err, "failed in removing %s", mountpoint)
}
return nil
}
// Exists checks to see if the id is already mounted.
func (d *Driver) Exists(id string) bool {
logrus.Info("Execute `Exists()`")
_, err := os.Stat(d.dir(id))
return err == nil
}
// isParent returns if the passed in parent is the direct parent of the passed in layer
func (d *Driver) isParent(id, parent string) bool {
lowers, err := d.getLowerDirs(id)
if err != nil || len(lowers) == 0 && parent != "" {
return false
}
if parent == "" {
return len(lowers) == 0
}
return parent == lowers[len(lowers)-1]
}
// ApplyDiff applies the new layer into a root
func (d *Driver) ApplyDiff(id, parent string, diff io.Reader) (size int64, err error) {
if !d.isParent(id, parent) {
return 0, errors.Errorf("Parent %s is not true parent of id %s", parent, id)
}
applyDir := path.Join(d.dir(id), diffDirName)
if err := archive.Unpack(diff, applyDir, &archive.TarOptions{
UIDMaps: d.uidMaps,
GIDMaps: d.gidMaps,
WhiteoutFormat: archive.OverlayWhiteoutFormat,
InUserNS: false,
}); err != nil {
return 0, err
}
parentLowers, err := d.getLowerDirs(parent)
if err != nil {
return 0, err
}
newLowers := strings.Join(append(parentLowers, parent), ":")
lowerFilePath := path.Join(d.dir(id), lowerFile)
if len(newLowers) > 0 {
ioutil.WriteFile(lowerFilePath, []byte(newLowers), 0666)
}
return directory.Size(context.TODO(), applyDir)
}
// DiffSize calculates the changes between the specified id
// and its parent and returns the size in bytes of the changes
// relative to its base filesystem directory.
func (d *Driver) DiffSize(id, parent string) (size int64, err error) {
return 0, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}
// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
func (d *Driver) Diff(id, parent string) (io.ReadCloser, error) {
return nil, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}
// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
return nil, errors.Errorf("Not implemented. id=%s, parent=%s", id, parent)
}


@ -0,0 +1,8 @@
package main
import "fmt"
// This is a dummy program to work around goreleaser being unable to pre-build the binary.
func main() {
fmt.Println("Hello, World!")
}

File diff suppressed because it is too large


@ -1,19 +1,19 @@
[package]
name = "nydus-backend-proxy"
-version = "0.1.0"
+version = "0.2.0"
authors = ["The Nydus Developers"]
description = "A simple HTTP server to provide a fake container registry for nydusd"
homepage = "https://nydus.dev/"
-repository = "https://github.com/dragonflyoss/image-service"
+repository = "https://github.com/dragonflyoss/nydus"
-edition = "2018"
+edition = "2021"
license = "Apache-2.0"

[dependencies]
-rocket = "0.5.0-rc"
+rocket = "0.5.0"
-http-range = "0.1.3"
+http-range = "0.1.5"
-nix = ">=0.23.0"
+nix = { version = "0.28", features = ["uio"] }
-clap = "2.33"
+clap = "4.4"
-once_cell = "1.10.0"
+once_cell = "1.19.0"
lazy_static = "1.4"

[workspace]


@ -2,29 +2,22 @@
//
// SPDX-License-Identifier: (Apache-2.0 AND BSD-3-Clause)
-#[macro_use]
-extern crate rocket;
-#[macro_use]
-extern crate lazy_static;
-#[macro_use(crate_authors, crate_version)]
-extern crate clap;
use std::collections::HashMap;
use std::env;
-use std::os::unix::io::AsRawFd;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::{fs, io};
-use clap::{App, Arg};
+use clap::*;
use http_range::HttpRange;
+use lazy_static::lazy_static;
use nix::sys::uio;
use rocket::fs::{FileServer, NamedFile};
use rocket::futures::lock::{Mutex, MutexGuard};
use rocket::http::Status;
use rocket::request::{self, FromRequest, Outcome};
use rocket::response::{self, stream::ReaderStream, Responder};
-use rocket::{Request, Response};
+use rocket::*;
lazy_static! {
    static ref BLOB_BACKEND: Mutex<BlobBackend> = Mutex::new(BlobBackend {
@ -165,12 +158,12 @@ impl<'r> Responder<'r, 'static> for RangeStream {
        let mut read = 0u64;
        let startpos = self.start as i64;
        let size = self.len;
-        let raw_fd = self.file.as_raw_fd();
+        let file = self.file.clone();
        Response::build()
            .streamed_body(ReaderStream! {
                while read < size {
-                    match uio::pread(raw_fd, &mut buf, startpos + read as i64) {
+                    match uio::pread(file.as_ref(), &mut buf, startpos + read as i64) {
                        Ok(mut n) => {
                            n = std::cmp::min(n, (size - read) as usize);
                            read += n as u64;
@ -268,20 +261,31 @@ async fn fetch(
#[rocket::main]
async fn main() {
-    let cmd = App::new("nydus-backend-proxy")
-        .author(crate_authors!())
-        .version(crate_version!())
+    let cmd = Command::new("nydus-backend-proxy")
+        .author(env!("CARGO_PKG_AUTHORS"))
+        .version(env!("CARGO_PKG_VERSION"))
        .about("A simple HTTP server to provide a fake container registry for nydusd.")
        .arg(
-            Arg::with_name("blobsdir")
-                .short("b")
+            Arg::new("blobsdir")
+                .short('b')
                .long("blobsdir")
-                .takes_value(true)
+                .required(true)
                .help("path to directory hosting nydus blob files"),
        )
+        .help_template(
+            "\
+{before-help}{name} {version}
+{author-with-newline}{about-with-newline}
+{usage-heading} {usage}
+{all-args}{after-help}
+",
+        )
        .get_matches();
    // Safe to unwrap() because `blobsdir` takes a value.
-    let path = cmd.value_of("blobsdir").unwrap();
+    let path = cmd
+        .get_one::<String>("blobsdir")
+        .expect("required argument");
    init_blob_backend(Path::new(path)).await;


@ -8,14 +8,14 @@ linters:
    - goimports
    - revive
    - ineffassign
-    - vet
+    - govet
    - unused
    - misspell
  disable:
    - errcheck
run:
-  deadline: 4m
-  skip-dirs:
-    - misc
+  timeout: 5m
+issues:
+  exclude-dirs:
+    - misc


@ -1,8 +1,8 @@
GIT_COMMIT := $(shell git rev-parse --verify HEAD --short=7)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
-GOARCH ?= amd64
+GOARCH ?= $(shell go env GOARCH)
-GOPROXY ?= https://goproxy.io
+GOPROXY ?=
ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY}
@ -13,15 +13,17 @@ endif
all: build

build:
	@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags="-s -w -X 'main.Version=${GIT_COMMIT}' -X 'main.BuildTime=${BUILD_TIME}'" -v -o bin/nydus-overlayfs ./cmd/main.go

release:
	@CGO_ENABLED=0 ${PROXY} GOOS=linux GOARCH=${GOARCH} go build -ldflags '-s -w -extldflags "-static"' -v -o bin/nydus-overlayfs ./cmd/main.go

test: build
	go vet $(PACKAGES)
-	golangci-lint run
	go test -v -cover ${PACKAGES}

+lint:
+	golangci-lint run

clean:
	rm -f bin/*


@ -8,12 +8,16 @@ import (
"syscall" "syscall"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/urfave/cli/v2" cli "github.com/urfave/cli/v2"
"golang.org/x/sys/unix" "golang.org/x/sys/unix"
) )
const ( const (
// Extra mount option to pass Nydus specific information from snapshotter to runtime through containerd.
extraOptionKey = "extraoption=" extraOptionKey = "extraoption="
// Kata virtual volume infmation passed from snapshotter to runtime through containerd, superset of `extraOptionKey`.
// Please refer to `KataVirtualVolume` in https://github.com/kata-containers/kata-containers/blob/main/src/libs/kata-types/src/mount.rs
kataVolumeOptionKey = "io.katacontainers.volume="
) )
var ( var (
@ -44,7 +48,7 @@ func parseArgs(args []string) (*mountArgs, error) {
	}
	if args[2] == "-o" && len(args[3]) != 0 {
		for _, opt := range strings.Split(args[3], ",") {
-			if strings.HasPrefix(opt, extraOptionKey) {
+			if strings.HasPrefix(opt, extraOptionKey) || strings.HasPrefix(opt, kataVolumeOptionKey) {
				// filter extraoption
				continue
			}


@ -1,15 +1,15 @@
-module github.com/dragonflyoss/image-service/contrib/nydus-overlayfs
+module github.com/dragonflyoss/nydus/contrib/nydus-overlayfs

-go 1.18
+go 1.21

require (
	github.com/pkg/errors v0.9.1
-	github.com/urfave/cli/v2 v2.3.0
-	golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac
+	github.com/urfave/cli/v2 v2.27.1
+	golang.org/x/sys v0.15.0
)

require (
-	github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect
-	github.com/russross/blackfriday/v2 v2.0.1 // indirect
-	github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect
+	github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
+	github.com/russross/blackfriday/v2 v2.1.0 // indirect
+	github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e // indirect
)


@ -1,17 +1,10 @@
-github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY=
-github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
+github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
-github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=
-github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
+github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
-github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
-github.com/urfave/cli/v2 v2.3.0 h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M=
-github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
+github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=
+github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
+github.com/xrash/smetrics v0.0.0-20231213231151-1d8dd44e695e h1:+SOyEddqYF09QP7vr7CgJ1eti3pY9Fn3LHO1M1r/0sI=
-golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac h1:oN6lz7iLW/YC7un8pq+9bOLyXrprv2+DKfkJY+2LJJw=
-golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
+golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
-gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=


@ -1,150 +0,0 @@
# Nydus Functional Test
## Introduction
Nydus functional test, a.k.a. nydus-test, is built on top of [pytest](https://docs.pytest.org/en/stable/).
It consists of two parts:
* Test cases, located in the sub-directory `functional-test`
* The test framework, located in the sub-directory `framework`
## Prerequisites
Debian/Ubuntu
```bash
sudo apt update && sudo apt install --no-install-recommends -y attr libattr1-dev fio pkg-config libssl-dev python3-pip libpython3.7-dev libffi-dev
python3 -m pip install --upgrade pip
# Ensure the modules below are installed as the root user
sudo pip3 install pytest xattr requests psutil requests_unixsocket libconf py-splice fallocate pytest-repeat PyYAML six docker toml
```
## Getting Started
### Configure framework
Nydus-test is controlled and configured by `anchor_conf.json`, which it looks for in its root directory before executing any tests.
```json
{
"workspace": "/path/to/where/nydus-test/stores/intermediates",
"nydus_project": "/path/to/image-service/repo",
"nydus_runtime_conf": {
"profile": "release",
"log_level": "info"
},
"registry": {
"registry_url": "127.0.0.1:5000",
"registry_namespace": "nydus",
"registry_auth": "YourRegistryAuth",
"backend_proxy_url": "127.0.0.1:8000",
"backend_proxy_blobs_dir": "/path/to/where/backend/simulator/stores/blobs"
},
"images": {
"images_array": [
"busybox:latest"
]
},
"artifacts": {
"containerd": "/usr/bin/containerd"
},
"logging_file": "stderr",
"target": "gnu"
}
```
### Compile Nydus components
Before running nydus-test, please compile the nydus components.
`nydusd` and `nydus-image`
```bash
cd /path/to/image-service/repo
make release
```
`nydus-backend-proxy`
```bash
cd /path/to/image-service/repo
make -C contrib/nydus-backend-proxy
```
### Define target fs structure
```yaml
depth: 4
width: 6
layers:
- layer1:
- size: 10KB
type: regular
count: 5
- size: 4MB
type: regular
count: 30
- size: 128KB
type: regular
count: 100
- size: 90MB
type: regular
count: 1
- type: symlink
count: 100
```
### Generate your own original rootfs
The framework provides a tool to generate the rootfs that will be the test target.
```text
$ sudo python3 nydus_test_config.py --dist fs_structure.yaml
INFO [nydus_test_config - 49:put_files] - putting regular, count 5
INFO [nydus_test_config - 49:put_files] - putting regular, count 30
INFO [nydus_test_config - 49:put_files] - putting regular, count 100
INFO [nydus_test_config - 49:put_files] - putting regular, count 1
INFO [nydus_test_config - 49:put_files] - putting symlink, count 100
INFO [utils - 171:timer] - Generating test layer, Takes time 0.857 seconds
INFO [nydus_test_config - 49:put_files] - putting regular, count 5
INFO [nydus_test_config - 49:put_files] - putting regular, count 30
INFO [nydus_test_config - 49:put_files] - putting regular, count 100
INFO [nydus_test_config - 49:put_files] - putting regular, count 1
INFO [nydus_test_config - 49:put_files] - putting symlink, count 100
INFO [utils - 171:timer] - Generating test parent layer, Takes time 0.760 seconds
```
## Run test
Please run the tests as the root user.
### Run All Test Cases
The whole nydus functional test suite works on top of pytest, so all cases can be run with a single pytest invocation, as shown below.
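A minimal invocation, assuming the layout described above where the cases live under `functional-test` and `anchor_conf.json` sits in the nydus-test root directory:
```bash
# Run the whole suite from the nydus-test root directory (as root).
# -s prints test output directly, -v enables verbose case names.
pytest -sv functional-test/
```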
### Run a Specific Test Case
```bash
pytest -sv functional-test/test_nydus.py::test_basic
```
### Run a Set of Test Cases
```bash
pytest -sv functional-test/test_nydus.py
```
### Stop Once a Case Fails
```bash
pytest -sv functional-test/test_nydus.py::test_basic --pdb
```
### Run a Case Step by Step
```bash
pytest -sv functional-test/test_nydus.py::test_basic --trace
```


@ -1,220 +0,0 @@
import sys
import os
import re
import shutil
import logging
import pytest
import docker
sys.path.append(os.path.realpath("framework"))
from nydus_anchor import NydusAnchor
from rafs import RafsImage, RafsConf
from backend_proxy import BackendProxy
import utils
ANCHOR = NydusAnchor()
utils.logging_setup(ANCHOR.logging_file)
os.environ["RUST_BACKTRACE"] = "1"
from tools import artifact
@pytest.fixture()
def nydus_anchor(request):
# TODO: check if nydusd executable exists and have a proper version
# TODO: check if bootstrap exists
# TODO: check if blob cache file exists and try to clear it if it does
# TODO: check if blob file was put to oss
nyta = NydusAnchor()
nyta.check_prerequisites()
logging.info("*** Testing case %s ***", os.environ.get("PYTEST_CURRENT_TEST"))
yield nyta
nyta.clear_blobcache()
if hasattr(nyta, "scratch_dir"):
logging.info("Clean up scratch dir")
shutil.rmtree(nyta.scratch_dir)
if hasattr(nyta, "nydusd") and nyta.nydusd is not None:
nyta.nydusd.shutdown()
if hasattr(nyta, "overlayfs") and os.path.ismount(nyta.overlayfs):
nyta.umount_overlayfs()
# Check if nydusd is crashed.
# TODO: Where the core file is placed is controlled by the kernel.
# Check `/proc/sys/kernel/core_pattern`
files = os.listdir()
for one in files:
assert re.match("^core\..*", one) is None
try:
shutil.rmtree(nyta.localfs_workdir)
except FileNotFoundError:
pass
try:
nyta.cleanup_dustbin()
except FileNotFoundError:
pass
# All nydusd should stop.
assert not NydusAnchor.capture_running_nydusd()
@pytest.fixture()
def nydus_image(nydus_anchor: NydusAnchor, request):
"""
Create images using previous version nydus image tool.
This fixture provides rafs image file, case is not responsible for performing
creating image.
"""
image = RafsImage(
nydus_anchor, nydus_anchor.source_dir, "bootstrap", "blob", clear_from_oss=True
)
yield image
try:
image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_scratch_image(nydus_anchor: NydusAnchor):
"""No longer use source_dir but use scratch_dir,
Scratch image's creation is delayed until runtime of each case.
"""
nydus_anchor.prepare_scratch_dir()
# Scratch image is not made here since specific case decides how to
# scratch this dir
image = RafsImage(
nydus_anchor,
nydus_anchor.scratch_dir,
"bootstrap_scratched",
"blob_scratched",
clear_from_oss=True,
)
yield image
if not image.created:
return
try:
image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_parent_image(nydus_anchor: NydusAnchor):
parent_image = RafsImage(
nydus_anchor, nydus_anchor.parent_rootfs, "bootstrap_parent", "blob_parent"
)
yield parent_image
try:
parent_image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture()
def nydus_scratch_parent_image(nydus_anchor: NydusAnchor):
nydus_anchor.prepare_scratch_parent_dir()
parent_image = RafsImage(
nydus_anchor, nydus_anchor.scratch_parent_dir, "bs_parent", "blob_parent"
)
yield parent_image
try:
parent_image.clean_up()
except FileNotFoundError as _:
pass
@pytest.fixture(scope="session", autouse=False)
def collect_report(request):
"""
To enable code coverage report, let @autouse be True.
"""
build_dir = ANCHOR.build_dir
from coverage_collect import collect_coverage
def CC():
collect_coverage(build_dir)
request.addfinalizer(CC)
@pytest.fixture
def rafs_conf(nydus_anchor):
"""Generate conf file via libconf(https://pypi.org/project/libconf/)"""
rc = RafsConf(nydus_anchor)
rc.dump_rafs_conf()
yield rc
@pytest.fixture(scope="session")
def nydusify_converter():
# Can't access a `function` scope fixture.
os.environ["GOTRACEBACK"] = "crash"
nydusify_source_dir = os.path.join(ANCHOR.nydus_project, "contrib/nydusify")
with utils.pushd(nydusify_source_dir):
ret, _ = utils.execute(["make", "release"])
assert ret
@pytest.fixture(scope="session")
def nydus_snapshotter():
# Can't access a `function` scope fixture.
snapshotter_source = os.path.join(ANCHOR.nydus_project, "contrib/nydus-snapshotter")
with utils.pushd(snapshotter_source):
ret, _ = utils.execute(["make"])
assert ret
@pytest.fixture()
def local_registry():
docker_client = docker.from_env()
registry_container = docker_client.containers.run(
"registry:latest", detach=True, network_mode="host", remove=True
)
yield registry_container
try:
registry_container.stop()
except docker.errors.APIError:
assert False, "fail in stopping container"
try:
ANCHOR.backend_proxy_blobs_dir
@pytest.fixture(scope="module", autouse=True)
def nydus_backend_proxy():
backend_proxy = BackendProxy(
ANCHOR,
ANCHOR.backend_proxy_blobs_dir,
bin=os.path.join(
ANCHOR.nydus_project,
"contrib",
"nydus-backend-proxy",
"target",
"release",
"nydus-backend-proxy",
),
)
backend_proxy.start()
yield
backend_proxy.stop()
except AttributeError:
pass


@ -1,24 +0,0 @@
from os import PathLike
import utils
class BackendProxy:
def __init__(self, anchor, blobs_dir: PathLike, bin:PathLike):
self.__blobs_dir = blobs_dir
self.bin = bin
self.anchor = anchor
def start(self):
_, self.p = utils.run(
[self.bin, "-b", self.blobs_dir()],
wait=False,
stdout=self.anchor.logging_file,
stderr=self.anchor.logging_file,
)
def stop(self):
self.p.terminate()
self.p.wait()
def blobs_dir(self):
return self.__blobs_dir

Some files were not shown because too many files have changed in this diff