Compare commits

...

73 Commits

Author SHA1 Message Date
Fan Shang f7d513844d Remove mirrors configuration
Signed-off-by: Fan Shang <2444576154@qq.com>
2025-08-05 10:38:09 +08:00
Baptiste Girard-Carrabin 29dc8ec5c8 [registry] Accept empty scope during token auth challenge
The distribution spec (https://distribution.github.io/distribution/spec/auth/scope/#authorization-server-use) mentions that the access token provided during the auth challenge "may include a scope", which means that having one is not required to comply with the spec.
Additionally, this is something that is already accepted by containerd which will simply log a warning when no scope is specified: https://github.com/containerd/containerd/blob/main/core/remotes/docker/auth/fetch.go#L64
To match with what containerd and the spec suggest, the commit modifies the `parse_auth` logic to accept an empty `scope` field. It also logs the same warning as containerd.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-07-31 20:28:47 +08:00
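
A minimal, hypothetical sketch of the lenient parsing described in the commit above; the struct and helper names are illustrative, not the actual nydus-storage registry code:

```rust
// Illustrative only: accept a missing/empty scope and log a warning,
// mirroring containerd's behavior. Names here are made up for the sketch.
use std::collections::HashMap;

#[derive(Debug, Default)]
struct TokenAuthChallenge {
    realm: String,
    service: String,
    scope: String, // may legitimately be empty per the distribution spec
}

fn parse_auth(params: &HashMap<String, String>) -> Option<TokenAuthChallenge> {
    let realm = params.get("realm")?.clone();
    let service = params.get("service").cloned().unwrap_or_default();
    // Previously a missing scope aborted parsing; now it only warns.
    let scope = match params.get("scope") {
        Some(s) => s.clone(),
        None => {
            log::warn!("no scope specified for token auth challenge");
            String::new()
        }
    };
    Some(TokenAuthChallenge { realm, service, scope })
}
```
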
imeoer 7886e1868f storage: fix redirect in registry backend
To fix https://github.com/dragonflyoss/nydus/issues/1720

Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-31 11:49:44 +08:00
Peng Tao e1dffec213 api: increase error.rs UT coverage
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao cc62dd6890 github: add project common copilot instructions
Copilot generated with slight modification.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao d140d60bea rafs: increase UT coverage for cached_v5.rs
Copilot generated.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao f323c7f6e3 gitignore: ignore temp files generated by UTs
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Peng Tao 5c8299c7f7 service: skip init fscache test if cachefiles is unavailable
Also skip the test for non-root users.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-07-17 11:33:17 +08:00
Jack Decker 14c0062cee Make filesystem sync operation fatal on failure
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
Jack Decker d3bbc3e509 Add filesystem sync in both container and host namespaces before pausing container for commit to ensure all changes are flushed to disk.
Signed-off-by: Jack Decker <jack@thundercompute.com>
2025-07-11 10:42:45 +08:00
imeoer 80f80dda0e cargo: bump crates version
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-07-08 10:38:27 +08:00
Yang Kaiyong a26c7bf99c test: support miri for unit test in actions
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-07-04 10:17:32 +08:00
imeoer 72b1955387 misc: add issue / PR stale workflow
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-06-18 10:38:00 +08:00
ymy d589292ebc feat(nydusify): after converting the image, retry the push operation if it fails
Signed-off-by: ymy <ymy@zetyun.com>
2025-06-17 17:11:38 +08:00
Zephyrcf 344a208e86 Make ssl fallback check case-insensitive
Signed-off-by: Zephyrcf <zinsist77@gmail.com>
2025-06-12 19:03:49 +08:00
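
A case-insensitive comparison in Rust is typically done with `str::eq_ignore_ascii_case`; a small illustrative sketch (the helper name is made up, not the code touched by this commit):

```rust
/// Illustrative only: compare scheme strings case-insensitively, e.g. when
/// deciding whether an "ssl fallback" from HTTPS to HTTP applies.
fn is_https(scheme: &str) -> bool {
    scheme.eq_ignore_ascii_case("https")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn mixed_case_is_accepted() {
        assert!(is_https("HTTPS"));
        assert!(is_https("HttpS"));
    }
}
```
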
imeoer 9645820222 docs: add MAINTAINERS doc
Signed-off-by: imeoer <yansong.ys@antgroup.com>
2025-05-30 18:40:33 +08:00
Baptiste Girard-Carrabin d36295a21e [registry] Modify TokenResponse instead
Apply github comment.
Use `serde:default` in TokenResponse to have the same behavior as Option<String> without changing the struct signature.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
Baptiste Girard-Carrabin c048fcc45f [registry] Fix auth token parsing for access_token
Extend auth token parsing to support token in different json fields.
There is no real consensus on Oauth2 token response format, which means that each registry can implement their own. In particular, Azure ACR uses `access_token` as described here https://github.com/Azure/acr/blob/main/docs/Token-BasicAuth.md#get-a-pull-access-token-for-the-user. As such, when attempting to parse the JSON response containing the authorization token, we should attempt to deserialize using either `token` or `access_token` (and potentially more fields in the future if needed).
To avoid breaking the integration with existing registries, the behavior is to fall back to `access_token` only if `token` does not exist in the response.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-28 16:11:23 +08:00
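
A hedged sketch of the token handling described in the two commits above, using `serde(default)` and preferring `token` over `access_token`; the real nydus registry backend may differ in detail:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct TokenResponse {
    // Some registries (e.g. Azure ACR) return `access_token` instead of
    // `token`, so `token` may be absent; `serde(default)` keeps the field
    // as a plain String without changing the struct signature.
    #[serde(default)]
    token: String,
    #[serde(default)]
    access_token: String,
}

impl TokenResponse {
    // Fall back to `access_token` only when `token` is empty, so existing
    // registries keep working unchanged.
    fn bearer_token(&self) -> &str {
        if self.token.is_empty() {
            &self.access_token
        } else {
            &self.token
        }
    }
}
```
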
Baptiste Girard-Carrabin 67bf8b8283 [storage] Modify redirect policy to follow 10 redirects
From 2378d074fe (diff-c9f1f654cf0ba5d46a4ed25d8bb0ea22c942840c6693d31927a9fd912bcb9456R125-R131)
it seems that the redirect policy of the http client has always been to not follow redirects. However, this means that pulling blobs from registries which respond with redirects does not work. This is the case, for instance, for GCP's former container registries that were migrated to artifact registries.
Additionally, containerd's behavior is to follow up to 10 redirects https://github.com/containerd/containerd/blob/main/core/remotes/docker/resolver.go#L596 so it makes sense to use the same value.

Signed-off-by: Baptiste Girard-Carrabin <baptiste.girardcarrabin@datadoghq.com>
2025-04-27 18:54:04 +08:00
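
Assuming the backend's HTTP client is built on reqwest, capping redirects at 10 is a single builder call; this is an illustrative sketch (blocking client, not the exact nydus connection code):

```rust
// Requires the reqwest "blocking" feature; Policy::limited(10) mirrors
// containerd's 10-redirect cap.
fn build_client() -> reqwest::Result<reqwest::blocking::Client> {
    reqwest::blocking::Client::builder()
        .redirect(reqwest::redirect::Policy::limited(10))
        .build()
}
```
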
Peng Tao d74629233b readme: add deepwiki reference
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2025-04-27 18:53:16 +08:00
Yang Kaiyong 21206e75b3 nydusify(refactor): handle layer with retry
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-23 11:04:54 +08:00
Yan Song c288169c1a action: add free-disk-space job
Try to fix the broken CI: https://github.com/dragonflyoss/nydus/actions/runs/14569290750/job/40863611290
It might be due to insufficient disk space.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-04-23 10:28:06 +08:00
Yang Kaiyong 23fdda1020 nydusify(feat): support for specifying log file and concurrently processing external model manifests
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-21 15:16:57 +08:00
Yang Kaiyong 9b915529a9 nydusify(feat): add crc32 in file attributes
Read CRC32 from external models' manifest and pass it to builder.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 18:30:18 +08:00
Yang Kaiyong 96c3e5569a nydus-image: only add crc32 flag in chunk level
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
Yang Kaiyong 44069d6091 feat: support crc32 validation when validating chunks
- Add CRC32 algorithm implementation with the crc-rs crate.
- Introduce a crc_enable option to the nydus builder.
- Support generating CRC32 checksums when building images.
- Support validating CRC32 for both normal and external chunks.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-18 14:39:03 +08:00
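
An illustrative sketch of chunk CRC32 generation and validation with the crc-rs crate; the polynomial shown (CRC_32_ISO_HDLC, the common IEEE CRC32) is an assumption for the example, not necessarily the one nydus uses:

```rust
use crc::{Crc, CRC_32_ISO_HDLC};

// Table-driven CRC instance, built once at compile time.
const CRC32: Crc<u32> = Crc::<u32>::new(&CRC_32_ISO_HDLC);

/// Compute the checksum stored alongside a chunk at build time.
fn chunk_crc32(data: &[u8]) -> u32 {
    CRC32.checksum(data)
}

/// Validate a chunk read back from a blob against the stored checksum.
fn validate_chunk(data: &[u8], expected: u32) -> bool {
    chunk_crc32(data) == expected
}
```
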
Yang Kaiyong 31c8e896f0 chore: fix cargo-deny check failed
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-16 19:39:21 +08:00
Yang Kaiyong 8593498dbd nydusify: remove nydusd code which is a work in progress
- remove the unready nydusd (runtime) implementation.
- remove the debug code.
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 6161868e41 builder: support building external model images from modctl
builder: add support for building external model images from modctl in a local
context or remote registry.

feat(nydusify): add support for mount external large model images

chore: introduce GoReleaser for RPM package generation

nydusify(feat): add support for model image in check command

nydusify(test): add support for binary-based testing in external model's smoke tests

Signed-off-by: Yan Song <yansong.ys@antgroup.com>

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-04-02 22:21:27 +08:00
Yang Kaiyong 871e1c6e4f chore(smoke): fix broken CI in smoke test
Run `rustup run stable cargo` instead of `cargo` to explicitly specify the toolchain.

Since `nextest` fails due to symlink resolution with new rustup v1.28.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-03-25 18:23:18 +08:00
Yan Song 8c0925b091 action: fix bootstrap path for fsck.erofs check
The output bootstrap path has been changed in the nydusify
check subcommand.

Related PR: https://github.com/dragonflyoss/nydus/pull/1652

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song baadb3990d misc: remove centos image from image conversion CI
The centos image has been deprecated on Docker Hub, so we can't
pull it in "Convert & Check Images" CI pipeline.

See https://hub.docker.com/_/centos

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-25 14:51:28 +08:00
Yan Song bd2123f2ed smoke: add v0.1.0 nydusd into native layer cases
To check the compatibility between the newer builder and old nydusd.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song c41ac4760d builder: remove redundant blobs for merge subcommand
After merging all trees, we need to re-calculate the blob index of
referenced blobs, as the upper tree might have deleted some files
or directories by opaques, and some blobs are dereferenced.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 20:34:54 +08:00
Yan Song 7daa0a3cd9 nydusify: refactor check subcommand
- allow either the source or target to be an OCI or nydus image;
- improve output directory structure and log format;

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-03-24 17:45:50 +08:00
ymy 7e5147990c feat(nydusify): support a short container id when committing a container
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
ymy 36382b54dd Optimize: Improve code style in push lower blob section
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-10 10:21:06 +08:00
yumy 8b03fd7593 fix: nydusify golang ci arg
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
ymy 76651c319a nydusify: fix the issue of blob not found when modifying image name during commit
Signed-off-by: ymy <ymy@zetyun.com>
2025-03-04 23:48:02 +08:00
Yang Kaiyong 91931607f8 fix(nydusd): fix parsing of failover-policy argument
Use `inspect_err` instead of `inspect` to correctly handle and log
errors when parsing the `failover-policy` argument.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-24 11:25:26 +08:00
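
The difference matters because `Result::inspect` only runs on the Ok value; a hedged sketch with a stand-in policy type (not nydusd's actual enum) shows how `inspect_err` logs the error and still propagates it:

```rust
use std::str::FromStr;

// Illustrative stand-in for the failover policy type.
#[derive(Debug)]
enum FailoverPolicy {
    Flush,
    Resend,
}

impl FromStr for FailoverPolicy {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "flush" => Ok(Self::Flush),
            "resend" => Ok(Self::Resend),
            other => Err(format!("unknown failover-policy: {}", other)),
        }
    }
}

fn parse_failover_policy(arg: &str) -> Result<FailoverPolicy, String> {
    // `inspect_err` (stable since Rust 1.76) peeks at the Err value without
    // consuming it, so the caller still sees the parse failure.
    arg.parse::<FailoverPolicy>()
        .inspect_err(|e| eprintln!("failed to parse failover-policy: {}", e))
}
```
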
Yan Song dd9ba54e33 misc: remove goproxy.io for go build
The goproxy.io service is unstable for now and it affects
the GitHub CI, so let's remove it.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yan Song 09b81c50b4 nydusify: fix layer push retry for copy subcommand
Add a push retry mechanism to enhance the success rate of image copy
when a single layer copy fails.

Signed-off-by: Yan Song <yansong.ys@antgroup.com>
2025-02-17 09:55:13 +08:00
Yang Kaiyong 3beb9a72d9 chore: bump deps to address rustsec warning
- Bump vm-memory to 1.14.1, vmm-sys-util to 0.12.1 and vhost to 0.11.0.
- Bump cargo-deny-action version from v1 to v2 in workflows.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-11 20:29:22 +08:00
Yang Kaiyong 3c10b59324 chore: comment the unused code to address clippy error
The backend-oss feature is never enabled, so comment out the test code.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong bf17d221d6 fix: Support building rafs without the dedup feature
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong ee5ef64cdd chore: pass rust version to build docker container in CI
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 05ea41d159 chore: specify the rust version to 1.84.0 and enable docker cache
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong 4def4db396 chore: fix the broken CI on riscv64
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Yang Kaiyong d48d3dbdb3 chore: bump rust version to 1.84.0 and update deps to resolve cargo deny check failures
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
2025-02-10 10:21:58 +08:00
Kostis Papazafeiropoulos f60e40aafa fix(blobfs): Use correct result types for `open` and `create`
Use the correct result types for `open` and `create` expected by the
`fuse_backend_rs` 0.12.0 `Filesystem` trait

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Kostis Papazafeiropoulos 83fa946897 build(rafs): Add missing `dedup` feature for `storage` crate dependency
Fix `rafs` build by adding missing `dedup` feature for `storage` crate
dependency

Signed-off-by: Kostis Papazafeiropoulos <papazof@gmail.com>
2025-01-15 10:18:59 +08:00
Gaius 365f13edcf chore: rename repo Dragonfly2 to dragonfly
Signed-off-by: Gaius <gaius.qi@gmail.com>
2024-12-20 17:09:10 +08:00
Lin Wang e23d5bc570 fix: dragonflyoss#1644 and #1651 resolve Algorithm to_string and FromStr inconsistency
Signed-off-by: Lin Wang <l.wang@mail.dlut.edu.cn>
2024-12-16 20:39:08 +08:00
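
The usual way to avoid such an inconsistency is to make `FromStr` accept exactly what `Display` emits; an illustrative sketch with stand-in variants (not the exact nydus `Algorithm` enum):

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug, PartialEq, Clone, Copy)]
enum Algorithm {
    Blake3,
    Sha256,
}

impl fmt::Display for Algorithm {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Algorithm::Blake3 => write!(f, "blake3"),
            Algorithm::Sha256 => write!(f, "sha256"),
        }
    }
}

impl FromStr for Algorithm {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Accept exactly what Display produces so round-trips never fail.
        match s {
            "blake3" => Ok(Algorithm::Blake3),
            "sha256" => Ok(Algorithm::Sha256),
            other => Err(format!("unknown algorithm: {}", other)),
        }
    }
}
```
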
Liu Bo acdf021ec9 rafs: fix typo
Fix an invalid info! usage.

Signed-off-by: Liu Bo <liub.liubo@gmail.com>
2024-12-13 14:40:50 +08:00
Xing Ma b175fc4baa nydusify: introduce optimize subcommand of nydusify
We can statically analyze the image entrypoint dependency, or use runtime dynamic
analysis technologies such as ebpf, fanotify, metric, etc. to obtain the container
file access pattern, and then build this part of data into an independent image layer:

* preferentially fetch blobs during the image startup phase to reduce network and disk IO.
* avoid frequent image builds and allow for better local cache utilization.

Implement the optimize subcommand of nydusify to generate a new image, which references a new
blob containing the prefetch file chunks.
```
nydusify optimize --policy separated-prefetch-blob \
	--source $existed-nydus-image \
	--target $new-nydus-image \
	--prefetch-files /path/to/prefetch-files
```

More detailed process is as follows:
1. nydusify first downloads the source image and bootstrap, and utilizes nydus-image to output a
new bootstrap along with an independent prefetchblob;
2. nydusify generates and pushes a new meta layer including the new bootstrap and the prefetch-files,
and also generates and pushes the new manifest/config/prefetchblob, completing the incremental image build.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
Xing Ma 8edc031a31 builder: Enhance optimize subcommand for prefetch
Major changes:
1. Added compatibility for rafs v5/v6 formats;
2. Set IS_SEPARATED_WITH_PREFETCH_FILES flag in BlobInfo for prefetchblob;
3. Add option output-json to store build output.

Signed-off-by: Xing Ma <maxing.lan@bytedance.com>
2024-12-09 14:51:13 +08:00
pyq bb4744c7fb docs: fix docker-env-setup.md
Signed-off-by: pyq <eilo.pengyq@gmail.com>
2024-12-04 10:10:26 +08:00
Dai Yongxuan 375f55f32e builder: introduce optimize subcommand for prefetch
We can statically analyze the image entrypoint dependency, or use runtime dynamic
analysis technologies such as ebpf, fanotify, metric, etc. to obtain the container
file access pattern, and then build this part of data into an independent image layer:

* preferentially fetch blobs during the image startup phase to reduce network and disk IO.
* avoid frequent image builds and allow for better local cache utilization.

Implement the optimize subcommand to optimize an image bootstrap
from a prefetch file list, generating a new blob.

```
nydus-image optimize --prefetch-files /path/to/prefetch-files.txt \
  --bootstrap /path/to/bootstrap \
  --blob-dir /path/to/blobs
```
This will generate a new bootstrap and new blob in `blob-dir`.

Signed-off-by: daiyongxuan <daiyongxuan20@mails.ucas.ac.cn>
2024-10-29 14:52:17 +08:00
abushwang a575439471 fix: correct some typos about nerdctl image rm
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 16:11:22 +08:00
abushwang 4ee6ddd931 fix: correct some typos in nydus-fscache.md
Signed-off-by: abushwang <abushwangs@gmail.com>
2024-10-25 15:05:32 +08:00
Yadong Ding 57c112a998 smoke: add smoke test for cas and chunk dedup
Add a smoke test case for cas and chunk dedup.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu b9ba409f13 docs: add documentation for cas
Add documentation for cas.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 2387fe8217 storage: enable chunk deduplication for file cache
Enable chunk deduplication for file cache. It works in this way:
- When a chunk is not in the blob cache file yet, query the CAS database
  to see whether other blob data files have the required chunk. If there's
  a duplicated data chunk in another data file, copy the chunk data
  into the current blob cache file by using copy_file_range().
- After downloading a data chunk from remote, save file/offset/chunk-id
  into CAS database, so it can be reused later.

Co-authored-by: Jiang Liu <gerry@linux.alibaba.com>
Co-authored-by: Yading Ding <ding_yadong@foxmail.com>
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 4b1fd55e6e storage: add garbage collection in CasMgr
- Changed `delete_blobs` method in `CasDb` to take an immutable reference (`&self`) instead of a mutable reference (`&mut self`).
- Updated `dedup_chunk` method in `CasMgr` to correctly handle the deletion of non-existent blob files from both the file descriptor cache and the database.
- Implemented the `gc` (garbage collection) method in `CasMgr` to identify and remove blobs that no longer exist on the filesystem, ensuring the database and cache remain consistent.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 45e07eab3d storage: implement CasManager to support chunk dedup at runtime
Implement CasManager to support chunk dedup at runtime.
The manager provides two major interfaces:
- add chunk data to the CAS database
- check whether a chunk exists in CAS database and copy it to blob file
  by copy_file_range() if the chunk exists.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
Yadong Ding 51a6045d74 storage: improve copy_file_range
- improve copy_file_range when target os is not linux
- add more comprehensive tests

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-23 18:25:01 +08:00
Jiang Liu 7d1c2e635a storage: add helper copy_file_range
Add helper copy_file_range() which:
- avoids copying data into userspace
- may support reflink on xfs etc.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2024-10-23 18:25:01 +08:00
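
A hedged sketch of such a helper on Linux via `libc::copy_file_range`; the real nydus-storage implementation differs (retry loop, non-Linux fallback, richer error handling):

```rust
#[cfg(target_os = "linux")]
fn copy_chunk(
    src: &std::fs::File,
    src_off: i64,
    dst: &std::fs::File,
    dst_off: i64,
    count: usize,
) -> std::io::Result<usize> {
    use std::os::unix::io::AsRawFd;

    let mut src_off = src_off;
    let mut dst_off = dst_off;
    // Data is copied in-kernel, so nothing is staged in userspace buffers,
    // and filesystems such as XFS/Btrfs may turn this into a reflink.
    let ret = unsafe {
        libc::copy_file_range(
            src.as_raw_fd(),
            &mut src_off,
            dst.as_raw_fd(),
            &mut dst_off,
            count,
            0,
        )
    };
    if ret < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(ret as usize)
    }
}
```
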
Mike Hotan 15ec192e3d Nydusify `localfs` support
Signed-off-by: Mike Hotan <mike@union.ai>
2024-10-17 09:42:59 +08:00
Yadong Ding da2510b6f5 action: bump macos-13
The macOS 12 Actions runner image will begin deprecation on 10/7/24.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 18:35:50 +08:00
Yadong Ding 47025395fa lint: bump golangci-lint v1.61.0 and fix lint
Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yadong Ding 678b44ba32 rust: upgrade to 1.75.0
1. reduce the binary size.
2. use more rust-clippy lints.

Signed-off-by: Yadong Ding <ding_yadong@foxmail.com>
2024-10-16 09:45:05 +08:00
Yifan Zhao 7c498497fb nydusify: modify compact interface
This patch modifies the compact interface to meet the change in
nydus-image.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
Yifan Zhao 1ccc603525 nydus-image: modify compact interface
This commit uses the compact parameter directly instead of a compact config
file in the cli interface. It also fixes a bug where the chunk key for
ChunkWrapper::Ref is not generated correctly.

Signed-off-by: Yifan Zhao <zhaoyifan@sjtu.edu.cn>
2024-10-15 09:27:34 +08:00
204 changed files with 13485 additions and 2633 deletions

.github/copilot-instructions.md vendored Normal file

@ -0,0 +1,250 @@
# GitHub Copilot Instructions for Nydus
## Project Overview
Nydus is a high-performance container image service that implements a content-addressable file system on the RAFS format. It enhances the OCI image specification by enabling on-demand loading, chunk-level deduplication, and improved container startup performance.
### Key Components
- **nydusd**: User-space daemon that processes FUSE/fscache/virtiofs messages and serves Nydus images
- **nydus-image**: CLI tool to convert OCI image layers to Nydus format
- **nydusify**: Tool to convert entire OCI images to Nydus format with registry integration
- **nydusctl**: CLI client for managing and querying nydusd daemon
- **nydus-service**: Library crate for integrating Nydus services into other projects
## Architecture Guidelines
### Crate Structure
```
- api/ # Nydus Image Service APIs and data structures
- builder/ # Image building and conversion logic
- rafs/ # RAFS filesystem implementation
- service/ # Daemon and service management framework
- storage/ # Core storage subsystem with backends and caching
- utils/ # Common utilities and helper functions
- src/bin/ # Binary executables (nydusd, nydus-image, nydusctl)
```
### Key Technologies
- **Language**: Rust with memory safety focus
- **Filesystems**: FUSE, virtiofs, EROFS, fscache
- **Storage Backends**: Registry, OSS, S3, LocalFS, HTTP proxy
- **Compression**: LZ4, Gzip, Zstd
- **Async Runtime**: Tokio (current thread for io-uring compatibility)
## Code Style and Patterns
### Rust Conventions
- Use `#![deny(warnings)]` in all binary crates
- Follow standard Rust naming conventions (snake_case, PascalCase)
- Prefer `anyhow::Result` for error handling in applications
- Use custom error types with `thiserror` for libraries
- Apply `#[macro_use]` for frequently used external crates like `log`
- Always format the code with `cargo fmt`
- Use `clippy` for linting and follow its suggestions
### Error Handling
```rust
// Prefer anyhow for applications
use anyhow::{bail, Context, Result};

// Use custom error types for libraries
use thiserror::Error;

#[derive(Error, Debug)]
pub enum NydusError {
    #[error("Invalid arguments: {0}")]
    InvalidArguments(String),
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),
}
```
### Logging Patterns
- Use structured logging with appropriate levels (trace, debug, info, warn, error)
- Include context in error messages: `.with_context(|| "description")`
- Use `info!`, `warn!`, `error!` macros consistently
### Configuration Management
- Use `serde` for JSON configuration serialization/deserialization
- Support both file-based and environment variable configuration
- Validate configurations at startup with clear error messages
- Follow the `ConfigV2` pattern for versioned configurations
## Development Guidelines
### Storage Backend Development
- When implementing new storage backends:
  - Implement the `BlobBackend` trait
  - Support timeout, retry, and connection management
  - Add configuration in the backend config structure
  - Consider proxy support for high availability
  - Implement proper error handling and logging
### Daemon Service Development
- Use the `NydusDaemon` trait for service implementations
- Support save/restore for hot upgrade functionality
- Implement proper state machine transitions
- Use `DaemonController` for lifecycle management
### RAFS Filesystem Features
- Support both RAFS v5 and v6 formats
- Implement chunk-level deduplication
- Handle prefetch optimization for container startup
- Support overlay filesystem operations
- Maintain POSIX compatibility
### API Development
- Use versioned APIs (v1, v2) with backward compatibility
- Implement HTTP endpoints with proper error handling
- Support both Unix socket and TCP communication
- Follow OpenAPI specification patterns
## Testing Patterns
### Unit Tests
- Test individual functions and modules in isolation
- Use `#[cfg(test)]` modules within source files
- Mock external dependencies when necessary
- Focus on error conditions and edge cases
### Integration Tests
- Place integration tests in `tests/` directory
- Test complete workflows and component interactions
- Use temporary directories for filesystem operations
- Clean up resources properly in test teardown
### Smoke Tests
- Located in `smoke/` directory using Go
- Test real-world scenarios with actual images
- Verify performance and functionality
- Use Bats framework for shell-based testing
## Performance Considerations
### I/O Optimization
- Use async I/O patterns with Tokio
- Implement prefetching for predictable access patterns
- Optimize chunk size (default 1MB) for workload characteristics
- Consider io-uring for high-performance scenarios
### Memory Management
- Use `Arc<T>` for shared ownership of large objects
- Implement lazy loading for metadata structures
- Consider memory mapping for large files
- Profile memory usage in performance-critical paths
### Caching Strategy
- Implement blob caching with configurable backends
- Support compression in cache to save space
- Use chunk-level caching with efficient eviction policies
- Consider cache warming strategies for frequently accessed data
## Security Guidelines
### Data Integrity
- Implement end-to-end digest validation
- Support multiple hash algorithms (SHA256, Blake3)
- Verify chunk integrity on read operations
- Detect and prevent supply chain attacks
### Authentication
- Support registry authentication (basic auth, bearer tokens)
- Handle credential rotation and refresh
- Implement secure credential storage
- Support mutual TLS for backend connections
## Specific Code Patterns
### Configuration Loading
```rust
// Standard pattern for configuration loading
let config = match config_path {
    Some(path) => ConfigV2::from_file(path)?,
    None => ConfigV2::default(),
};

// Environment variable override
if let Ok(auth) = std::env::var("IMAGE_PULL_AUTH") {
    config.update_registry_auth_info(&auth);
}
```
### Daemon Lifecycle
```rust
// Standard daemon initialization pattern
let daemon = create_daemon(config, build_info)?;
DAEMON_CONTROLLER.set_daemon(daemon);

// Event loop management
if DAEMON_CONTROLLER.is_active() {
    DAEMON_CONTROLLER.run_loop();
}

// Graceful shutdown
DAEMON_CONTROLLER.shutdown();
```
### Blob Access Pattern
```rust
// Standard blob read pattern
let mut bio = BlobIoDesc::new(blob_id, blob_address, blob_size, user_io);
let blob_device = factory.get_device(&blob_info)?;
blob_device.read(&mut bio)?;
```
## Documentation Standards
### Code Documentation
- Document all public APIs with `///` comments
- Include examples in documentation
- Document safety requirements for unsafe code
- Explain complex algorithms and data structures
### Architecture Documentation
- Maintain design documents in `docs/` directory
- Update documentation when adding new features
- Include diagrams for complex interactions
- Document configuration options comprehensively
### Release Notes
- Document breaking changes clearly
- Include migration guides for major versions
- Highlight performance improvements
- List new features and bug fixes
## Container and Cloud Native Patterns
### OCI Compatibility
- Maintain compatibility with OCI image spec
- Support standard container runtimes (runc, Kata)
- Implement proper layer handling and manifest generation
- Support multi-architecture images
### Kubernetes Integration
- Design for Kubernetes CRI integration
- Support containerd snapshotter pattern
- Handle pod lifecycle events appropriately
- Implement proper resource cleanup
### Cloud Storage Integration
- Support major cloud providers (AWS S3, Alibaba OSS)
- Implement proper credential management
- Handle network interruptions gracefully
- Support cross-region replication patterns
## Build and Release
### Build Configuration
- Use `Cargo.toml` workspace configuration
- Support cross-compilation for multiple architectures
- Implement proper feature flags for optional components
- Use consistent dependency versioning
### Release Process
- Tag releases with semantic versioning
- Generate release binaries for supported platforms
- Update documentation with release notes
- Validate release artifacts before publishing
Remember to follow these guidelines when contributing to or working with the Nydus codebase. The project emphasizes performance, security, and compatibility with the broader container ecosystem.

.github/workflows/Dockerfile.cross vendored Normal file

@ -0,0 +1,40 @@
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ARG RUST_VERSION=1.84.0
RUN apt-get update && apt-get install -y \
software-properties-common \
build-essential \
curl \
git \
libssl-dev \
pkg-config \
cmake \
gcc-riscv64-linux-gnu \
g++-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update && apt-get install -y \
gcc-14 \
g++-14 \
gcc-14-riscv64-linux-gnu \
g++-14-riscv64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /root
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
riscv64gc-unknown-linux-gnu
RUN mkdir -p ~/.cargo && echo '\
[target.riscv64gc-unknown-linux-gnu]\n\
linker = "riscv64-linux-gnu-gcc-14"' > ~/.cargo/config.toml
CMD ["/bin/bash"]

.github/workflows/convert.yml

@ -26,7 +26,7 @@ jobs:
cache-dependency-path: "**/*.sum"
- name: Build Contrib
run: |
-curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.54.2
+curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/local/bin v1.61.0
make -e DOCKER=false nydusify-release
- name: Upload Nydusify
uses: actions/upload-artifact@v4
@ -139,7 +139,7 @@ jobs:
--source localhost:5000/$I \
--target localhost:5000/$I:nydus-nightly-oci-ref
-sudo fsck.erofs -d1 output/nydus_bootstrap
+sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
@ -256,7 +256,7 @@ jobs:
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6
-sudo fsck.erofs -d1 output/nydus_bootstrap
+sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric
@ -321,7 +321,7 @@ jobs:
sudo DOCKER_CONFIG=$HOME/.docker nydusify check --source $I:latest \
--target localhost:5000/$I:nydus-nightly-v6-batch
-sudo fsck.erofs -d1 output/nydus_bootstrap
+sudo fsck.erofs -d1 ./output/target/nydus_bootstrap/image/image.boot
sudo rm -rf ./output
done
- name: Save Nydusify Metric

.github/workflows/miri.yml vendored Normal file

@ -0,0 +1,45 @@
name: Miri Test
on:
push:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
pull_request:
branches: ["**", "stable/**"]
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.svg', '**/docs/**' ]
schedule:
# Run daily sanity check at 03:00 clock UTC
- cron: "0 03 * * *"
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
nydus-unit-test-with-miri:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
shared-key: Linux-cargo-amd64
save-if: ${{ github.ref == 'refs/heads/master' }}
- name: Install cargo nextest
uses: taiki-e/install-action@nextest
- name: Fscache Setup
run: sudo bash misc/fscache/setup.sh
- name: Install Miri
run: |
rustup toolchain install nightly --component miri
rustup override set nightly
cargo miri setup
- name: Unit Test with Miri
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
RUSTUP_BIN=$(which rustup)
sudo -E RUSTUP=${RUSTUP_BIN} make miri-ut-nextest 2>&1 | tee miri-ut.log
grep -C 2 'Undefined Behavior' miri-ut.log

.github/workflows/release.yml

@ -26,12 +26,46 @@ jobs:
cache-on-failure: true
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
- uses: dsherret/rust-toolchain-file@v1
-- name: Build nydus-rs
+- name: Read Rust toolchain version
+id: set_toolchain_version
+run: |
+RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
+echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
+echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
+shell: bash
+- name: Set up Docker Buildx
+if: matrix.arch == 'riscv64'
+uses: docker/setup-buildx-action@v3
+- name: Build and push Docker image
+if: matrix.arch == 'riscv64'
+uses: docker/build-push-action@v6
+with:
+context: .
+file: ./.github/workflows/Dockerfile.cross
+push: false
+load: true
+tags: rust-cross-compile-riscv64:latest
+cache-from: type=gha
+cache-to: type=gha,mode=max
+build-args: |
+RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
+- name: Build nydus-rs Non-RISC-V
+if: matrix.arch != 'riscv64'
+run: |
+declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
+RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
+cargo install --locked --version 0.2.5 cross
+make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
+- name : Build Nydus-rs RISC-V
+if: matrix.arch == 'riscv64'
+run: |
+RUST_TARGET=riscv64gc-unknown-linux-gnu
+docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
+sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
+- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
-cargo install --locked --version 0.2.4 cross
-make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
sudo mv target/$RUST_TARGET/release/nydus-image .
sudo mv target/$RUST_TARGET/release/nydusctl .
@ -48,7 +82,7 @@ jobs:
configs
nydus-macos:
-runs-on: macos-12
+runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
@ -67,7 +101,7 @@ jobs:
else
RUST_TARGET="aarch64-apple-darwin"
fi
-cargo install --version 0.2.4 cross
+cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd nydusd
@ -205,3 +239,87 @@ jobs:
generate_release_notes: true
files: |
${{ env.tarballs }}
goreleaser:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
arch: [amd64, arm64]
os: [linux]
needs: [nydus-linux, contrib-linux]
permissions:
contents: write
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
submodules: recursive
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
- name: download artifacts
uses: actions/download-artifact@v4
with:
pattern: nydus-artifacts-${{ matrix.os }}-${{ matrix.arch }}*
merge-multiple: true
path: nydus-static
- name: prepare context
run: |
chmod +x nydus-static/*
export GOARCH=${{ matrix.arch }}
echo "GOARCH: $GOARCH"
sh ./goreleaser.sh
- name: Check GoReleaser config
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
with:
version: latest
args: check
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@90a3faa9d0182683851fbfa97ca1a2cb983bfca3
id: run-goreleaser
with:
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate subject
id: hash
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
hashes=$(echo $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
if test "$hashes" = ""; then # goreleaser < v1.13.0
checksum_file=$(echo "$ARTIFACTS" | jq -r '.[] | select (.type=="Checksum") | .path')
hashes=$(cat $checksum_file | base64 -w0)
fi
echo "hashes=$hashes" >> $GITHUB_OUTPUT
- name: Set tag output
id: tag
run: echo "tag_name=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
provenance:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/')
needs: [goreleaser]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true # upload to a new release
upload-tag-name: "${{ needs.release.outputs.tag_name }}"
draft-release: true

.github/workflows/smoke.yml

@ -57,7 +57,7 @@ jobs:
- name: Lint
uses: golangci/golangci-lint-action@v6
with:
-version: v1.61
+version: v1.64
working-directory: ${{ matrix.path }}
args: --timeout=10m --verbose
@ -76,12 +76,46 @@ jobs:
shared-key: ${{ runner.os }}-cargo-${{ matrix.arch }}
save-if: ${{ github.ref == 'refs/heads/master' }}
- uses: dsherret/rust-toolchain-file@v1
-- name: Build Nydus
+- name: Read Rust toolchain version
+id: set_toolchain_version
+run: |
+RUST_TOOLCHAIN_VERSION=$(grep -oP '(?<=channel = ")[^"]*' rust-toolchain.toml)
+echo "Rust toolchain version: $RUST_TOOLCHAIN_VERSION"
+echo "rust-version=$RUST_TOOLCHAIN_VERSION" >> $GITHUB_OUTPUT
+shell: bash
+- name: Set up Docker Buildx
+if: matrix.arch == 'riscv64'
+uses: docker/setup-buildx-action@v3
+- name: Build and push Docker image
+if: matrix.arch == 'riscv64'
+uses: docker/build-push-action@v6
+with:
+context: .
+file: ./.github/workflows/Dockerfile.cross
+push: false
+load: true
+tags: rust-cross-compile-riscv64:latest
+cache-from: type=gha
+cache-to: type=gha,mode=max
+build-args: |
+RUST_VERSION=${{ steps.set_toolchain_version.outputs.rust-version }}
+- name: Build Nydus Non-RISC-V
+if: matrix.arch != 'riscv64'
+run: |
+declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu")
+RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
+cargo install --locked --version 0.2.5 cross
+make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
+- name: Build Nydus RISC-V
+if: matrix.arch == 'riscv64'
+run: |
+RUST_TARGET=riscv64gc-unknown-linux-gnu
+docker run --rm -v ${{ github.workspace }}:/root/src rust-cross-compile-riscv64:latest \
+sh -c "cd /root/src && make -e RUST_TARGET_STATIC=$RUST_TARGET static-release"
+- name: Prepare to upload artifacts
run: |
declare -A rust_target_map=( ["amd64"]="x86_64-unknown-linux-musl" ["arm64"]="aarch64-unknown-linux-musl" ["ppc64le"]="powerpc64le-unknown-linux-gnu" ["riscv64"]="riscv64gc-unknown-linux-gnu")
RUST_TARGET=${rust_target_map[${{ matrix.arch }}]}
-cargo install --locked --version 0.2.4 cross
-make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
sudo mv target/$RUST_TARGET/release/nydusd .
sudo mv target/$RUST_TARGET/release/nydus-image .
- name: Upload Nydus Binaries
@ -94,7 +128,7 @@ jobs:
nydusd
nydusd-build-macos:
-runs-on: macos-12
+runs-on: macos-13
strategy:
matrix:
arch: [amd64, arm64]
@ -114,7 +148,7 @@ jobs:
else
RUST_TARGET="aarch64-apple-darwin"
fi
-cargo install --version 0.2.4 cross
+cargo install --version 0.2.5 cross
rustup target add ${RUST_TARGET}
make -e RUST_TARGET_STATIC=$RUST_TARGET -e CARGO=cross static-release
@ -159,6 +193,21 @@ jobs:
with:
go-version-file: 'go.work'
cache-dependency-path: "**/*.sum"
+- name: Free Disk Space
+uses: jlumbroso/free-disk-space@main
+with:
+# this might remove tools that are actually needed,
+# if set to "true" but frees about 6 GB
+tool-cache: false
+# all of these default to true, but feel free to set to
+# "false" if necessary for your workflow
+android: true
+dotnet: true
+haskell: true
+large-packages: true
+docker-images: true
+swap-storage: true
- name: Integration Test
run: |
sudo mkdir -p /usr/bin/nydus-latest /home/runner/work/workdir
@ -179,7 +228,7 @@ jobs:
export NYDUS_NYDUSIFY_$version_export=/usr/bin/nydus-$version/nydusify
done
-curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.54.2
+curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin v1.64.8
sudo -E make smoke-only
nydus-unit-test:
@ -201,7 +250,8 @@ jobs:
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
-sudo -E CARGO=${CARGO_BIN} make ut-nextest
+RUSTUP_BIN=$(which rustup)
+sudo -E RUSTUP=${RUSTUP_BIN} make ut-nextest
contrib-unit-test-coverage:
runs-on: ubuntu-latest
@ -243,7 +293,8 @@ jobs:
run: |
CARGO_HOME=${HOME}/.cargo
CARGO_BIN=$(which cargo)
-sudo -E CARGO=${CARGO_BIN} make coverage-codecov
+RUSTUP_BIN=$(which rustup)
+sudo -E RUSTUP=${RUSTUP_BIN} make coverage-codecov
- name: Upload nydus coverage file
uses: actions/upload-artifact@v4
with:
@ -278,7 +329,7 @@ jobs:
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
-- uses: EmbarkStudios/cargo-deny-action@v1
+- uses: EmbarkStudios/cargo-deny-action@v2
performance-test:
runs-on: ubuntu-latest
@ -332,4 +383,4 @@ jobs:
- name: Takeover Test
run: |
export NEW_NYDUSD_BINARY_PATH=target/release/nydusd
sudo -E make smoke-takeover

.github/workflows/stale.yaml vendored Normal file

@ -0,0 +1,31 @@
name: Close stale issues and PRs
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
id: stale
with:
delete-branch: true
days-before-close: 7
days-before-stale: 60
days-before-pr-close: 7
days-before-pr-stale: 60
stale-issue-label: "stale"
exempt-issue-labels: bug,wip
exempt-pr-labels: bug,wip
exempt-all-milestones: true
stale-issue-message: 'This issue is stale because it has been open 60 days with no activity.'
close-issue-message: 'This issue was closed because it has been stalled for 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open 60 days with no activity.'
close-pr-message: 'This PR was closed because it has been stalled for 7 days with no activity.'

.gitignore vendored

@ -7,3 +7,8 @@
__pycache__
.DS_Store
go.work.sum
+dist/
+nydus-static/
+.goreleaser.yml
+metadata.db
+tests/texture/zran/233c72f2b6b698c07021c4da367cfe2dff4f049efbaa885ca0ff760ea297865a

Cargo.lock generated

File diff suppressed because it is too large

Cargo.toml

@ -53,39 +53,37 @@ tar = "0.4.40"
tokio = { version = "1.35.1", features = ["macros"] }
# Build static linked openssl library
-openssl = { version = "0.10.55", features = ["vendored"] }
+openssl = { version = '0.10.72', features = ["vendored"] }
-# pin openssl-src to bring in fix for https://rustsec.org/advisories/RUSTSEC-2022-0032
-#openssl-src = { version = "111.22" }
-nydus-api = { version = "0.3.0", path = "api", features = [
+nydus-api = { version = "0.4.0", path = "api", features = [
"error-backtrace",
"handler",
] }
-nydus-builder = { version = "0.1.0", path = "builder" }
+nydus-builder = { version = "0.2.0", path = "builder" }
-nydus-rafs = { version = "0.3.1", path = "rafs" }
+nydus-rafs = { version = "0.4.0", path = "rafs" }
-nydus-service = { version = "0.3.0", path = "service", features = [
+nydus-service = { version = "0.4.0", path = "service", features = [
"block-device",
] }
-nydus-storage = { version = "0.6.3", path = "storage", features = [
+nydus-storage = { version = "0.7.0", path = "storage", features = [
"prefetch-rate-limit",
] }
-nydus-utils = { version = "0.4.2", path = "utils" }
+nydus-utils = { version = "0.5.0", path = "utils" }
-vhost = { version = "0.6.0", features = ["vhost-user-slave"], optional = true }
+vhost = { version = "0.11.0", features = ["vhost-user"], optional = true }
-vhost-user-backend = { version = "0.8.0", optional = true }
+vhost-user-backend = { version = "0.15.0", optional = true }
virtio-bindings = { version = "0.1", features = [
"virtio-v5_0_0",
], optional = true }
-virtio-queue = { version = "0.7.0", optional = true }
+virtio-queue = { version = "0.12.0", optional = true }
-vm-memory = { version = "0.10.0", features = ["backend-mmap"], optional = true }
+vm-memory = { version = "0.14.1", features = ["backend-mmap","backend-atomic"], optional = true }
-vmm-sys-util = { version = "0.11.0", optional = true }
+vmm-sys-util = { version = "0.12.1", optional = true }
[build-dependencies]
time = { version = "0.3.14", features = ["formatting"] }
[dev-dependencies]
xattr = "1.0.1"
-vmm-sys-util = "0.11.0"
+vmm-sys-util = "0.12.1"
[features]
default = [
@ -95,6 +93,7 @@ default = [
"backend-s3",
"backend-http-proxy",
"backend-localdisk",
+"dedup",
]
virtiofs = [
"nydus-service/virtiofs",
@ -116,6 +115,8 @@ backend-oss = ["nydus-storage/backend-oss"]
backend-registry = ["nydus-storage/backend-registry"]
backend-s3 = ["nydus-storage/backend-s3"]
+dedup = ["nydus-storage/dedup"]
[workspace]
members = [
"api",

MAINTAINERS.md Normal file

@ -0,0 +1,15 @@
# Maintainers
<!-- markdownlint-disable -->
| GitHub ID | Name | Email | Company |
| :-------------------------------------------: | :---------: | :-----------------------------: | :-----------: |
| [imeoer](https://github.com/imeoer) | Yan Song | imeoer@gmail.com | Ant Group |
| [bergwolf](https://github.com/bergwolf) | Peng Tao | bergwolf@hyper.sh | Ant Group |
| [jiangliu](https://github.com/jiangliu) | Jiang Liu | gerry@linux.alibaba.com | Alibaba Group |
| [liubogithub](https://github.com/liubogithub) | Liu Bo | liub.liubo@gmail.com | Alibaba Group |
| [luodw](https://github.com/luodw) | daowen luo | luodaowen.backend@bytedance.com | ByteDance |
| [changweige](https://github.com/changweige) | Changwei Ge | gechangwei@live.cn | ByteDance |
| [hsiangkao](https://github.com/hsiangkao) | Gao Xiang | hsiangkao@linux.alibaba.com | Alibaba Group |
<!-- markdownlint-restore -->

Makefile

@ -18,7 +18,7 @@ CARGO ?= $(shell which cargo)
RUSTUP ?= $(shell which rustup)
CARGO_BUILD_GEARS = -v ~/.ssh/id_rsa:/root/.ssh/id_rsa -v ~/.cargo/git:/root/.cargo/git -v ~/.cargo/registry:/root/.cargo/registry
SUDO = $(shell which sudo)
CARGO_COMMON ?=
EXCLUDE_PACKAGES =
UNAME_M := $(shell uname -m)
@ -108,7 +108,11 @@ ut: .release_version
# you need install cargo nextest first from: https://nexte.st/book/pre-built-binaries.html
ut-nextest: .release_version
-TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${CARGO} nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) --test-threads 8
+TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run stable cargo nextest run --no-fail-fast --filter-expr 'test(test) - test(integration)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
+# install miri first from https://github.com/rust-lang/miri/
+miri-ut-nextest: .release_version
+MIRIFLAGS=-Zmiri-disable-isolation TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) RUST_BACKTRACE=1 ${RUSTUP} run nightly cargo miri nextest run --no-fail-fast --filter-expr 'test(test) - test(integration) - test(deduplicate::tests) - test(inode_bitmap::tests::test_inode_bitmap)' --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS)
# install test dependencies
pre-coverage:
@ -121,7 +125,7 @@ coverage: pre-coverage
# write unit teset coverage to codecov.json, used for Github CI
coverage-codecov:
-TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${CARGO} llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
+TEST_WORKDIR_PREFIX=$(TEST_WORKDIR_PREFIX) ${RUSTUP} run stable cargo llvm-cov --codecov --output-path codecov.json --workspace $(EXCLUDE_PACKAGES) $(CARGO_COMMON) $(CARGO_BUILD_FLAGS) -- --skip integration --nocapture --test-threads=8
smoke-only:
make -C smoke test

README.md

@ -12,6 +12,7 @@
[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs) [![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)
[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss) [![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)
[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus) [![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/nydus?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/nydus)
[<img src="https://app.devin.ai/devin_v4.png" width="20" title="deepwiki">](https://deepwiki.com/dragonflyoss/nydus)
[![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule) [![Smoke Test](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/smoke.yml?query=event%3Aschedule)
[![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule) [![Image Conversion](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/nydus/actions/workflows/convert.yml?query=event%3Aschedule)
@ -63,7 +64,7 @@ The following Benchmarking results demonstrate that Nydus images significantly o
| ------------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | | ------------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ |
| Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, Github GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage service | ✅ | | Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, Github GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage service | ✅ |
| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support acceleration image conversion based on kinds of accelerator like Nydus and eStargz etc | ✅ | | Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support acceleration image conversion based on kinds of accelerator like Nydus and eStargz etc | ✅ |
| Distribution | [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ | | Distribution | [Dragonfly](https://github.com/dragonflyoss/dragonfly) | Improve the runtime performance of Nydus image even further with the Dragonfly P2P data distribution system | ✅ |
| Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from Dockerfile | ✅ | | Build | [Buildkit](https://github.com/nydusaccelerator/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from Dockerfile | ✅ |
| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run (requires nydus snapshotter) Nydus image | ✅ | | Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run (requires nydus snapshotter) Nydus image | ✅ |
| Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ | | Runtime | [Docker / Moby](https://github.com/dragonflyoss/nydus/blob/master/docs/docker-env-setup.md) | Run Nydus image in Docker container with containerd and nydus-snapshotter | ✅ |
@ -154,6 +155,8 @@ Using the key features of nydus as native in your project without preparing and
Please visit the [**Wiki**](https://github.com/dragonflyoss/nydus/wiki) or the [**docs**](./docs).
There is also a [Devin](https://devin.ai/)-generated document available at [**deepwiki**](https://deepwiki.com/dragonflyoss/nydus).
## Community
Nydus aims to form a **vendor-neutral open-source** image distribution solution for all communities.

View File

@ -1,6 +1,6 @@
[package] [package]
name = "nydus-api" name = "nydus-api"
version = "0.3.1" version = "0.4.0"
description = "APIs for Nydus Image Service" description = "APIs for Nydus Image Service"
authors = ["The Nydus Developers"] authors = ["The Nydus Developers"]
license = "Apache-2.0 OR BSD-3-Clause" license = "Apache-2.0 OR BSD-3-Clause"
@ -24,7 +24,7 @@ serde = { version = "1.0.110", features = ["rc", "serde_derive"] }
url = { version = "2.1.1", optional = true } url = { version = "2.1.1", optional = true }
[dev-dependencies] [dev-dependencies]
vmm-sys-util = { version = "0.11" } vmm-sys-util = { version = "0.12.1" }
[features] [features]
error-backtrace = ["backtrace"] error-backtrace = ["backtrace"]

View File

@ -25,6 +25,9 @@ pub struct ConfigV2 {
pub id: String, pub id: String,
/// Configuration information for storage backend. /// Configuration information for storage backend.
pub backend: Option<BackendConfigV2>, pub backend: Option<BackendConfigV2>,
/// Configuration for external storage backends (order insensitive).
#[serde(default)]
pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration information for local cache system. /// Configuration information for local cache system.
pub cache: Option<CacheConfigV2>, pub cache: Option<CacheConfigV2>,
/// Configuration information for RAFS filesystem. /// Configuration information for RAFS filesystem.
@ -42,6 +45,7 @@ impl Default for ConfigV2 {
version: 2, version: 2,
id: String::new(), id: String::new(),
backend: None, backend: None,
external_backends: Vec::new(),
cache: None, cache: None,
rafs: None, rafs: None,
overlay: None, overlay: None,
@ -57,6 +61,7 @@ impl ConfigV2 {
version: 2, version: 2,
id: id.to_string(), id: id.to_string(),
backend: None, backend: None,
external_backends: Vec::new(),
cache: None, cache: None,
rafs: None, rafs: None,
overlay: None, overlay: None,
@ -514,9 +519,6 @@ pub struct OssConfig {
/// Enable HTTP proxy for the read request. /// Enable HTTP proxy for the read request.
#[serde(default)] #[serde(default)]
pub proxy: ProxyConfig, pub proxy: ProxyConfig,
/// Enable mirrors for the read request.
#[serde(default)]
pub mirrors: Vec<MirrorConfig>,
} }
/// S3 configuration information to access blobs. /// S3 configuration information to access blobs.
@ -558,9 +560,6 @@ pub struct S3Config {
/// Enable HTTP proxy for the read request. /// Enable HTTP proxy for the read request.
#[serde(default)] #[serde(default)]
pub proxy: ProxyConfig, pub proxy: ProxyConfig,
/// Enable mirrors for the read request.
#[serde(default)]
pub mirrors: Vec<MirrorConfig>,
} }
/// Http proxy configuration information to access blobs. /// Http proxy configuration information to access blobs.
@ -587,9 +586,6 @@ pub struct HttpProxyConfig {
/// Enable HTTP proxy for the read request. /// Enable HTTP proxy for the read request.
#[serde(default)] #[serde(default)]
pub proxy: ProxyConfig, pub proxy: ProxyConfig,
/// Enable mirrors for the read request.
#[serde(default)]
pub mirrors: Vec<MirrorConfig>,
} }
/// Container registry configuration information to access blobs. /// Container registry configuration information to access blobs.
@ -630,9 +626,6 @@ pub struct RegistryConfig {
/// Enable HTTP proxy for the read request. /// Enable HTTP proxy for the read request.
#[serde(default)] #[serde(default)]
pub proxy: ProxyConfig, pub proxy: ProxyConfig,
/// Enable mirrors for the read request.
#[serde(default)]
pub mirrors: Vec<MirrorConfig>,
} }
/// Configuration information for blob cache manager. /// Configuration information for blob cache manager.
@ -925,41 +918,6 @@ impl Default for ProxyConfig {
} }
} }
/// Configuration for registry mirror.
#[derive(Clone, Debug, Deserialize, Eq, PartialEq, Serialize)]
pub struct MirrorConfig {
/// Mirror server URL, for example http://127.0.0.1:65001.
pub host: String,
/// Ping URL to check mirror server health.
#[serde(default)]
pub ping_url: String,
/// HTTP request headers to be passed to mirror server.
#[serde(default)]
pub headers: HashMap<String, String>,
/// Interval for mirror health checking, in seconds.
#[serde(default = "default_check_interval")]
pub health_check_interval: u64,
/// Maximum number of failures before marking a mirror as unusable.
#[serde(default = "default_failure_limit")]
pub failure_limit: u8,
/// Elapsed time to pause mirror health check when the request is inactive, in seconds.
#[serde(default = "default_check_pause_elapsed")]
pub health_check_pause_elapsed: u64,
}
impl Default for MirrorConfig {
fn default() -> Self {
Self {
host: String::new(),
headers: HashMap::new(),
health_check_interval: 5,
failure_limit: 5,
ping_url: String::new(),
health_check_pause_elapsed: 300,
}
}
}
/// Configuration information for a cached blob`. /// Configuration information for a cached blob`.
#[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)] #[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)]
pub struct BlobCacheEntryConfigV2 { pub struct BlobCacheEntryConfigV2 {
@ -971,6 +929,9 @@ pub struct BlobCacheEntryConfigV2 {
/// Configuration information for storage backend. /// Configuration information for storage backend.
#[serde(default)] #[serde(default)]
pub backend: BackendConfigV2, pub backend: BackendConfigV2,
/// Configuration for external storage backends (order insensitive).
#[serde(default)]
pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration information for local cache system. /// Configuration information for local cache system.
#[serde(default)] #[serde(default)]
pub cache: CacheConfigV2, pub cache: CacheConfigV2,
@ -1034,6 +995,7 @@ impl From<&BlobCacheEntryConfigV2> for ConfigV2 {
version: c.version, version: c.version,
id: c.id.clone(), id: c.id.clone(),
backend: Some(c.backend.clone()), backend: Some(c.backend.clone()),
external_backends: c.external_backends.clone(),
cache: Some(c.cache.clone()), cache: Some(c.cache.clone()),
rafs: None, rafs: None,
overlay: None, overlay: None,
@ -1203,10 +1165,6 @@ fn default_check_pause_elapsed() -> u64 {
300 300
} }
fn default_failure_limit() -> u8 {
5
}
fn default_work_dir() -> String { fn default_work_dir() -> String {
".".to_string() ".".to_string()
} }
@ -1302,13 +1260,26 @@ struct CacheConfig {
#[serde(default, rename = "config")] #[serde(default, rename = "config")]
pub cache_config: Value, pub cache_config: Value,
/// Whether to validate data read from the cache. /// Whether to validate data read from the cache.
#[serde(skip_serializing, skip_deserializing)] #[serde(default, rename = "validate")]
pub cache_validate: bool, pub cache_validate: bool,
/// Configuration for blob data prefetching. /// Configuration for blob data prefetching.
#[serde(skip_serializing, skip_deserializing)] #[serde(skip_serializing, skip_deserializing)]
pub prefetch_config: BlobPrefetchConfig, pub prefetch_config: BlobPrefetchConfig,
} }
/// Additional configuration information for an external backend; these items
/// will be merged into the configuration coming from the image.
#[derive(Clone, Debug, Default, Deserialize, Eq, PartialEq, Serialize)]
pub struct ExternalBackendConfig {
/// External backend identifier to merge.
pub patch: HashMap<String, String>,
/// External backend type.
#[serde(rename = "type")]
pub kind: String,
/// External backend config items to merge.
pub config: HashMap<String, String>,
}
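For illustration, a minimal sketch of how an `external_backends` section could deserialize into the struct above, written in the same style as the TOML tests later in this file; the backend type and the `patch`/`config` keys are placeholder values, not taken from the source:

#[test]
fn test_external_backends_sketch() {
    let content = r#"
        version = 2
        id = "external-demo"

        [[external_backends]]
        type = "registry"
        patch = { "meta_path" = "/models/meta" }
        config = { "endpoint" = "https://registry.example" }
    "#;
    // `type` maps to `ExternalBackendConfig::kind` via the serde rename above.
    let config: ConfigV2 = toml::from_str(content).unwrap();
    assert_eq!(config.external_backends.len(), 1);
    assert_eq!(config.external_backends[0].kind, "registry");
    assert_eq!(
        config.external_backends[0].patch.get("meta_path").map(String::as_str),
        Some("/models/meta")
    );
}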
impl TryFrom<&CacheConfig> for CacheConfigV2 { impl TryFrom<&CacheConfig> for CacheConfigV2 {
type Error = std::io::Error; type Error = std::io::Error;
@ -1350,6 +1321,9 @@ struct FactoryConfig {
pub id: String, pub id: String,
/// Configuration for storage backend. /// Configuration for storage backend.
pub backend: BackendConfig, pub backend: BackendConfig,
/// Configuration for external storage backends (order insensitive).
#[serde(default)]
pub external_backends: Vec<ExternalBackendConfig>,
/// Configuration for blob cache manager. /// Configuration for blob cache manager.
#[serde(default)] #[serde(default)]
pub cache: CacheConfig, pub cache: CacheConfig,
@ -1410,6 +1384,7 @@ impl TryFrom<RafsConfig> for ConfigV2 {
version: 2, version: 2,
id: v.device.id, id: v.device.id,
backend: Some(backend), backend: Some(backend),
external_backends: v.device.external_backends,
cache: Some(cache), cache: Some(cache),
rafs: Some(rafs), rafs: Some(rafs),
overlay: None, overlay: None,
@ -1500,6 +1475,9 @@ pub(crate) struct BlobCacheEntryConfig {
/// ///
/// Possible value: `LocalFsConfig`, `RegistryConfig`, `OssConfig`, `LocalDiskConfig`. /// Possible value: `LocalFsConfig`, `RegistryConfig`, `OssConfig`, `LocalDiskConfig`.
backend_config: Value, backend_config: Value,
/// Configuration for external storage backends (order insensitive).
#[serde(default)]
external_backends: Vec<ExternalBackendConfig>,
/// Type of blob cache, corresponding to `FactoryConfig::CacheConfig::cache_type`. /// Type of blob cache, corresponding to `FactoryConfig::CacheConfig::cache_type`.
/// ///
/// Possible value: "fscache", "filecache". /// Possible value: "fscache", "filecache".
@ -1535,6 +1513,7 @@ impl TryFrom<&BlobCacheEntryConfig> for BlobCacheEntryConfigV2 {
version: 2, version: 2,
id: v.id.clone(), id: v.id.clone(),
backend: (&backend_config).try_into()?, backend: (&backend_config).try_into()?,
external_backends: v.external_backends.clone(),
cache: (&cache_config).try_into()?, cache: (&cache_config).try_into()?,
metadata_path: v.metadata_path.clone(), metadata_path: v.metadata_path.clone(),
}) })
@ -1856,11 +1835,6 @@ mod tests {
fallback = true fallback = true
check_interval = 10 check_interval = 10
use_http = true use_http = true
[[backend.oss.mirrors]]
host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
health_check_interval = 10
failure_limit = 10
"#; "#;
let config: ConfigV2 = toml::from_str(content).unwrap(); let config: ConfigV2 = toml::from_str(content).unwrap();
assert_eq!(config.version, 2); assert_eq!(config.version, 2);
@ -1887,14 +1861,6 @@ mod tests {
assert_eq!(oss.proxy.check_interval, 10); assert_eq!(oss.proxy.check_interval, 10);
assert!(oss.proxy.fallback); assert!(oss.proxy.fallback);
assert!(oss.proxy.use_http); assert!(oss.proxy.use_http);
assert_eq!(oss.mirrors.len(), 1);
let mirror = &oss.mirrors[0];
assert_eq!(mirror.host, "http://127.0.0.1:65001");
assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
assert!(mirror.headers.is_empty());
assert_eq!(mirror.health_check_interval, 10);
assert_eq!(mirror.failure_limit, 10);
} }
#[test] #[test]
@ -1920,11 +1886,6 @@ mod tests {
fallback = true fallback = true
check_interval = 10 check_interval = 10
use_http = true use_http = true
[[backend.registry.mirrors]]
host = "http://127.0.0.1:65001"
ping_url = "http://127.0.0.1:65001/ping"
health_check_interval = 10
failure_limit = 10
"#; "#;
let config: ConfigV2 = toml::from_str(content).unwrap(); let config: ConfigV2 = toml::from_str(content).unwrap();
assert_eq!(config.version, 2); assert_eq!(config.version, 2);
@ -1953,14 +1914,6 @@ mod tests {
assert_eq!(registry.proxy.check_interval, 10); assert_eq!(registry.proxy.check_interval, 10);
assert!(registry.proxy.fallback); assert!(registry.proxy.fallback);
assert!(registry.proxy.use_http); assert!(registry.proxy.use_http);
assert_eq!(registry.mirrors.len(), 1);
let mirror = &registry.mirrors[0];
assert_eq!(mirror.host, "http://127.0.0.1:65001");
assert_eq!(mirror.ping_url, "http://127.0.0.1:65001/ping");
assert!(mirror.headers.is_empty());
assert_eq!(mirror.health_check_interval, 10);
assert_eq!(mirror.failure_limit, 10);
} }
#[test] #[test]
@ -2367,15 +2320,6 @@ mod tests {
assert!(res); assert!(res);
} }
#[test]
fn test_default_mirror_config() {
let cfg = MirrorConfig::default();
assert_eq!(cfg.host, "");
assert_eq!(cfg.health_check_interval, 5);
assert_eq!(cfg.failure_limit, 5);
assert_eq!(cfg.ping_url, "");
}
#[test] #[test]
fn test_config_v2_from_file() { fn test_config_v2_from_file() {
let content = r#"version=2 let content = r#"version=2
@ -2585,7 +2529,6 @@ mod tests {
#[test] #[test]
fn test_default_value() { fn test_default_value() {
assert!(default_true()); assert!(default_true());
assert_eq!(default_failure_limit(), 5);
assert_eq!(default_prefetch_batch_size(), 1024 * 1024); assert_eq!(default_prefetch_batch_size(), 1024 * 1024);
assert_eq!(default_prefetch_threads_count(), 8); assert_eq!(default_prefetch_threads_count(), 8);
} }

View File

@ -86,6 +86,8 @@ define_error_macro!(eother, std::io::Error::new(std::io::ErrorKind::Other, ""));
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use std::io::{Error, ErrorKind};
fn check_size(size: usize) -> std::io::Result<()> { fn check_size(size: usize) -> std::io::Result<()> {
if size > 0x1000 { if size > 0x1000 {
return Err(einval!()); return Err(einval!());
@ -101,4 +103,150 @@ mod tests {
std::io::Error::from_raw_os_error(libc::EINVAL).kind() std::io::Error::from_raw_os_error(libc::EINVAL).kind()
); );
} }
#[test]
fn test_make_error() {
let original_error = Error::new(ErrorKind::Other, "test error");
let debug_info = "debug information";
let file = "test.rs";
let line = 42;
let result_error = super::make_error(original_error, debug_info, file, line);
assert_eq!(result_error.kind(), ErrorKind::Other);
}
#[test]
fn test_libc_error_macros() {
// Test einval macro
let err = einval!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro
let err = enoent!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test ebadf macro
let err = ebadf!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EBADF).kind());
// Test eacces macro
let err = eacces!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EACCES).kind());
// Test enotdir macro
let err = enotdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOTDIR).kind());
// Test eisdir macro
let err = eisdir!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EISDIR).kind());
// Test ealready macro
let err = ealready!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EALREADY).kind());
// Test enosys macro
let err = enosys!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOSYS).kind());
// Test epipe macro
let err = epipe!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EPIPE).kind());
// Test eio macro
let err = eio!();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_libc_error_macros_with_context() {
let test_msg = "test context";
// Test einval macro with context
let err = einval!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// Test enoent macro with context
let err = enoent!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::ENOENT).kind());
// Test eio macro with context
let err = eio!(test_msg);
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
}
#[test]
fn test_custom_error_macros() {
// Test last_error macro
let err = last_error!();
// We can't predict the exact error, but we can check it's a valid error
assert!(!err.to_string().is_empty());
// Test eother macro
let err = eother!();
assert_eq!(err.kind(), ErrorKind::Other);
// Test eother macro with context
let err = eother!("custom context");
assert_eq!(err.kind(), ErrorKind::Other);
}
fn test_bail_einval_function() -> std::io::Result<()> {
bail_einval!("test error message");
}
fn test_bail_eio_function() -> std::io::Result<()> {
bail_eio!("test error message");
}
#[test]
fn test_bail_macros() {
// Test bail_einval macro
let result = test_bail_einval_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio macro
let result = test_bail_eio_function();
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
}
#[test]
fn test_bail_macros_with_formatting() {
fn test_bail_with_format(code: i32) -> std::io::Result<()> {
if code == 1 {
bail_einval!("error code: {}", code);
} else if code == 2 {
bail_eio!("I/O error with code: {}", code);
}
Ok(())
}
// Test bail_einval with formatting
let result = test_bail_with_format(1);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EINVAL).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test bail_eio with formatting
let result = test_bail_with_format(2);
assert!(result.is_err());
let err = result.unwrap_err();
assert_eq!(err.kind(), Error::from_raw_os_error(libc::EIO).kind());
// The error message format is controlled by the macro, so just check it's not empty
assert!(!err.to_string().is_empty());
// Test success case
let result = test_bail_with_format(3);
assert!(result.is_ok());
}
} }

View File

@ -140,7 +140,7 @@ impl EndpointHandler for MetricsFsFilesHandler {
(Method::Get, None) => { (Method::Get, None) => {
let id = extract_query_part(req, "id"); let id = extract_query_part(req, "id");
let latest_read_files = extract_query_part(req, "latest") let latest_read_files = extract_query_part(req, "latest")
.map_or(false, |b| b.parse::<bool>().unwrap_or(false)); .is_some_and(|b| b.parse::<bool>().unwrap_or(false));
let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files)); let r = kicker(ApiRequest::ExportFsFilesMetrics(id, latest_read_files));
Ok(convert_to_response(r, HttpError::FsFilesMetrics)) Ok(convert_to_response(r, HttpError::FsFilesMetrics))
} }
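For reference, `Option::is_some_and` behaves exactly like the previous `map_or(false, ...)` form here: `None` yields `false`, otherwise the closure decides. A standalone sketch with made-up query values:

fn parse_latest(latest: Option<String>) -> bool {
    latest.is_some_and(|b| b.parse::<bool>().unwrap_or(false))
}

fn main() {
    assert!(parse_latest(Some("true".to_string())));
    assert!(!parse_latest(Some("not-a-bool".to_string()))); // unparsable -> false
    assert!(!parse_latest(None)); // missing query parameter -> false
}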

View File

@ -43,9 +43,8 @@ pub fn extract_query_part(req: &Request, key: &str) -> Option<String> {
// right now, below way makes it easy to obtain query parts from uri. // right now, below way makes it easy to obtain query parts from uri.
let http_prefix = format!("http:{}", req.uri().get_abs_path()); let http_prefix = format!("http:{}", req.uri().get_abs_path());
let url = Url::parse(&http_prefix) let url = Url::parse(&http_prefix)
.map_err(|e| { .inspect_err(|e| {
error!("api: can't parse request {:?}", e); error!("api: can't parse request {:?}", e);
e
}) })
.ok()?; .ok()?;
@ -326,35 +325,30 @@ mod tests {
#[test] #[test]
fn test_http_api_routes_v1() { fn test_http_api_routes_v1() {
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon"));
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/events").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/events"));
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/backend").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/backend"));
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/start").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/start"));
assert!(HTTP_ROUTES.routes.get("/api/v1/daemon/exit").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/daemon/exit"));
assert!(HTTP_ROUTES assert!(HTTP_ROUTES
.routes .routes
.get("/api/v1/daemon/fuse/sendfd") .contains_key("/api/v1/daemon/fuse/sendfd"));
.is_some());
assert!(HTTP_ROUTES assert!(HTTP_ROUTES
.routes .routes
.get("/api/v1/daemon/fuse/takeover") .contains_key("/api/v1/daemon/fuse/takeover"));
.is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/mount"));
assert!(HTTP_ROUTES.routes.get("/api/v1/mount").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics"));
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/files"));
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/files").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/pattern"));
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/pattern").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/backend"));
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/backend").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/blobcache"));
assert!(HTTP_ROUTES assert!(HTTP_ROUTES.routes.contains_key("/api/v1/metrics/inflight"));
.routes
.get("/api/v1/metrics/blobcache")
.is_some());
assert!(HTTP_ROUTES.routes.get("/api/v1/metrics/inflight").is_some());
} }
#[test] #[test]
fn test_http_api_routes_v2() { fn test_http_api_routes_v2() {
assert!(HTTP_ROUTES.routes.get("/api/v2/daemon").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v2/daemon"));
assert!(HTTP_ROUTES.routes.get("/api/v2/blobs").is_some()); assert!(HTTP_ROUTES.routes.contains_key("/api/v2/blobs"));
} }
#[test] #[test]

View File

@ -1,6 +1,6 @@
[package] [package]
name = "nydus-builder" name = "nydus-builder"
version = "0.1.0" version = "0.2.0"
description = "Nydus Image Builder" description = "Nydus Image Builder"
authors = ["The Nydus Developers"] authors = ["The Nydus Developers"]
license = "Apache-2.0" license = "Apache-2.0"
@ -20,13 +20,15 @@ serde = { version = "1.0.110", features = ["serde_derive", "rc"] }
serde_json = "1.0.53" serde_json = "1.0.53"
sha2 = "0.10.2" sha2 = "0.10.2"
tar = "0.4.40" tar = "0.4.40"
vmm-sys-util = "0.11.0" vmm-sys-util = "0.12.1"
xattr = "1.0.1" xattr = "1.0.1"
parse-size = "1.1.0"
nydus-api = { version = "0.3", path = "../api" } nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.3", path = "../rafs" } nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.6", path = "../storage", features = ["backend-localfs"] } nydus-storage = { version = "0.7.0", path = "../storage", features = ["backend-localfs"] }
nydus-utils = { version = "0.4", path = "../utils" } nydus-utils = { version = "0.5.0", path = "../utils" }
gix-attributes = "0.25.0"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true

builder/src/attributes.rs Normal file
View File

@ -0,0 +1,189 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::{fs, path};
use anyhow::Result;
use gix_attributes::parse;
use gix_attributes::parse::Kind;
const KEY_TYPE: &str = "type";
const KEY_CRCS: &str = "crcs";
const VAL_EXTERNAL: &str = "external";
pub struct Parser {}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Item {
pub pattern: PathBuf,
pub attributes: HashMap<String, String>,
}
#[derive(Clone, Debug, Eq, PartialEq, Default)]
pub struct Attributes {
pub items: HashMap<PathBuf, HashMap<String, String>>,
pub crcs: HashMap<PathBuf, Vec<u32>>,
}
impl Attributes {
/// Parse nydus attributes from a file.
pub fn from<P: AsRef<Path>>(path: P) -> Result<Attributes> {
let content = fs::read(path)?;
let _items = parse(&content);
let mut items = HashMap::new();
let mut crcs = HashMap::new();
for _item in _items {
let _item = _item?;
if let Kind::Pattern(pattern) = _item.0 {
let mut path = PathBuf::from(pattern.text.to_string());
if !path.is_absolute() {
path = path::Path::new("/").join(path);
}
let mut current_path = path.clone();
let mut attributes = HashMap::new();
let mut _type = String::new();
let mut _crcs = vec![];
for line in _item.1 {
let line = line?;
let name = line.name.as_str();
let state = line.state.as_bstr().unwrap_or_default();
if name == KEY_TYPE {
_type = state.to_string();
}
if name == KEY_CRCS {
_crcs = state
.to_string()
.split(',')
.map(|s| {
let trimmed = s.trim();
let hex_str = if let Some(stripped) = trimmed.strip_prefix("0x") {
stripped
} else {
trimmed
};
u32::from_str_radix(hex_str, 16).map_err(|e| anyhow::anyhow!(e))
})
.collect::<Result<Vec<u32>, _>>()?;
}
attributes.insert(name.to_string(), state.to_string());
}
crcs.insert(path.clone(), _crcs);
items.insert(path, attributes);
// Mark all parent directories (up to the root) as external as well.
while let Some(parent) = current_path.parent() {
if parent == Path::new("/") {
break;
}
let mut attributes = HashMap::new();
if !items.contains_key(parent) {
attributes.insert(KEY_TYPE.to_string(), VAL_EXTERNAL.to_string());
items.insert(parent.to_path_buf(), attributes);
}
current_path = parent.to_path_buf();
}
}
}
Ok(Attributes { items, crcs })
}
fn check_external(&self, attributes: &HashMap<String, String>) -> bool {
attributes.get(KEY_TYPE) == Some(&VAL_EXTERNAL.to_string())
}
pub fn is_external<P: AsRef<Path>>(&self, path: P) -> bool {
if let Some(attributes) = self.items.get(path.as_ref()) {
return self.check_external(attributes);
}
false
}
pub fn is_prefix_external<P: AsRef<Path>>(&self, target: P) -> bool {
self.items
.iter()
.any(|item| item.0.starts_with(&target) && self.check_external(item.1))
}
pub fn get_value<P: AsRef<Path>, K: AsRef<str>>(&self, path: P, key: K) -> Option<String> {
if let Some(attributes) = self.items.get(path.as_ref()) {
return attributes.get(key.as_ref()).map(|s| s.to_string());
}
None
}
pub fn get_values<P: AsRef<Path>>(&self, path: P) -> Option<&HashMap<String, String>> {
self.items.get(path.as_ref())
}
pub fn get_crcs<P: AsRef<Path>>(&self, path: P) -> Option<&Vec<u32>> {
self.crcs.get(path.as_ref())
}
}
#[cfg(test)]
mod tests {
use std::{collections::HashMap, fs, path::PathBuf};
use super::{Attributes, Item};
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_attribute_parse() {
let file = TempFile::new().unwrap();
fs::write(
file.as_path(),
"/foo type=external crcs=0x1234,0x5678
/bar type=external crcs=0x1234,0x5678
/models/foo/bar type=external",
)
.unwrap();
let attributes = Attributes::from(file.as_path()).unwrap();
let _attributes_base: HashMap<String, String> =
[("type".to_string(), "external".to_string())]
.iter()
.cloned()
.collect();
let _attributes: HashMap<String, String> = [
("type".to_string(), "external".to_string()),
("crcs".to_string(), "0x1234,0x5678".to_string()),
]
.iter()
.cloned()
.collect();
let items_map: HashMap<PathBuf, HashMap<String, String>> = vec![
Item {
pattern: PathBuf::from("/foo"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/bar"),
attributes: _attributes.clone(),
},
Item {
pattern: PathBuf::from("/models"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo"),
attributes: _attributes_base.clone(),
},
Item {
pattern: PathBuf::from("/models/foo/bar"),
attributes: _attributes_base.clone(),
},
]
.into_iter()
.map(|item| (item.pattern, item.attributes))
.collect();
assert_eq!(attributes.items, items_map);
assert_eq!(attributes.get_crcs("/foo"), Some(&vec![0x1234, 0x5678]))
}
}
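A usage sketch for the parser above, mirroring the unit test; the attribute file content, paths, and CRC value are illustrative only, and the sketch assumes the module's own `Attributes` type plus the `vmm_sys_util` dev-dependency are in scope:

fn attributes_usage_sketch() {
    let file = TempFile::new().unwrap();
    std::fs::write(
        file.as_path(),
        "/models/weights.bin type=external crcs=0xdeadbeef",
    )
    .unwrap();

    let attrs = Attributes::from(file.as_path()).unwrap();
    // The file itself is external, and its parent directories are implicitly
    // marked external so prefix lookups succeed as well.
    assert!(attrs.is_external("/models/weights.bin"));
    assert!(attrs.is_external("/models"));
    assert!(attrs.is_prefix_external("/models"));
    assert_eq!(
        attrs.get_value("/models/weights.bin", "type").as_deref(),
        Some("external")
    );
    assert_eq!(attrs.get_crcs("/models/weights.bin"), Some(&vec![0xdeadbeef]));
}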

View File

@ -18,14 +18,15 @@ use super::core::node::{ChunkSource, NodeInfo};
use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree}; use super::{BlobManager, Bootstrap, BootstrapManager, BuildContext, BuildOutput, Tree};
use crate::core::node::Node; use crate::core::node::Node;
use crate::NodeChunk; use crate::NodeChunk;
use anyhow::Result; use crate::OsString;
use anyhow::{Ok, Result};
use nydus_rafs::metadata::chunk::ChunkWrapper; use nydus_rafs::metadata::chunk::ChunkWrapper;
use nydus_rafs::metadata::inode::InodeWrapper; use nydus_rafs::metadata::inode::InodeWrapper;
use nydus_rafs::metadata::layout::RafsXAttrs; use nydus_rafs::metadata::layout::RafsXAttrs;
use nydus_storage::meta::BlobChunkInfoV1Ondisk; use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress::Algorithm; use nydus_utils::compress::Algorithm;
use nydus_utils::digest::RafsDigest; use nydus_utils::digest::RafsDigest;
use std::ffi::OsString;
use std::mem::size_of; use std::mem::size_of;
use std::path::PathBuf; use std::path::PathBuf;
use std::str::FromStr; use std::str::FromStr;
@ -37,6 +38,7 @@ pub struct ChunkdictChunkInfo {
pub version: String, pub version: String,
pub chunk_blob_id: String, pub chunk_blob_id: String,
pub chunk_digest: String, pub chunk_digest: String,
pub chunk_crc32: u32,
pub chunk_compressed_size: u32, pub chunk_compressed_size: u32,
pub chunk_uncompressed_size: u32, pub chunk_uncompressed_size: u32,
pub chunk_compressed_offset: u64, pub chunk_compressed_offset: u64,
@ -87,7 +89,7 @@ impl Generator {
let storage = &mut bootstrap_mgr.bootstrap_storage; let storage = &mut bootstrap_mgr.bootstrap_storage;
bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?; bootstrap.dump(ctx, storage, &mut bootstrap_ctx, &blob_table)?;
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage) BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
} }
/// Validate tree. /// Validate tree.
@ -269,6 +271,7 @@ impl Generator {
chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size); chunk.set_uncompressed_size(chunk_info.chunk_uncompressed_size);
chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset); chunk.set_uncompressed_offset(chunk_info.chunk_uncompressed_offset);
chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest)); chunk.set_id(RafsDigest::from_string(&chunk_info.chunk_digest));
chunk.set_crc32(chunk_info.chunk_crc32);
node.chunks.push(NodeChunk { node.chunks.push(NodeChunk {
source: ChunkSource::Build, source: ChunkSource::Build,

View File

@ -21,6 +21,7 @@ use nydus_utils::{digest, try_round_up_4k};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use sha2::Digest; use sha2::Digest;
use crate::attributes::Attributes;
use crate::core::context::Artifact; use crate::core::context::Artifact;
use super::core::blob::Blob; use super::core::blob::Blob;
@ -48,22 +49,30 @@ pub struct Config {
/// available value: 0-99, 0 means disable /// available value: 0-99, 0 means disable
/// hint: it's better to disable this option when there are some shared blobs /// hint: it's better to disable this option when there are some shared blobs
/// for example: build-cache /// for example: build-cache
#[serde(default)] pub min_used_ratio: u8,
min_used_ratio: u8,
/// we compact blobs whose size are less than compact_blob_size /// we compact blobs whose size are less than compact_blob_size
#[serde(default = "default_compact_blob_size")] pub compact_blob_size: usize,
compact_blob_size: usize,
/// size of compacted blobs should not be larger than max_compact_size /// size of compacted blobs should not be larger than max_compact_size
#[serde(default = "default_max_compact_size")] pub max_compact_size: usize,
max_compact_size: usize,
/// if number of blobs >= layers_to_compact, do compact /// if number of blobs >= layers_to_compact, do compact
/// 0 means always try compact /// 0 means always try compact
#[serde(default)] pub layers_to_compact: usize,
layers_to_compact: usize,
/// local blobs dir, may haven't upload to backend yet /// local blobs dir, may haven't upload to backend yet
/// what's more, new blobs will output to this dir /// what's more, new blobs will output to this dir
/// name of blob file should be equal to blob_id /// name of blob file should be equal to blob_id
blobs_dir: String, pub blobs_dir: String,
}
impl Default for Config {
fn default() -> Self {
Self {
min_used_ratio: 0,
compact_blob_size: default_compact_blob_size(),
max_compact_size: default_max_compact_size(),
layers_to_compact: 0,
blobs_dir: String::new(),
}
}
} }
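A small construction sketch using the now-public fields together with the new `Default` implementation; the ratio and directory values are placeholders and the snippet assumes the module's `default_compact_blob_size` helper is in scope:

let cfg = Config {
    min_used_ratio: 50,
    blobs_dir: "/tmp/blobs".to_string(),
    ..Default::default()
};
// Unspecified fields keep their defaults, e.g. the blob size thresholds.
assert_eq!(cfg.compact_blob_size, default_compact_blob_size());
assert_eq!(cfg.layers_to_compact, 0);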
#[derive(Debug, Clone, Copy, Hash, PartialEq, Eq)] #[derive(Debug, Clone, Copy, Hash, PartialEq, Eq)]
@ -79,7 +88,7 @@ impl ChunkKey {
match c { match c {
ChunkWrapper::V5(_) => Self::Digest(*c.id()), ChunkWrapper::V5(_) => Self::Digest(*c.id()),
ChunkWrapper::V6(_) => Self::Offset(c.blob_index(), c.compressed_offset()), ChunkWrapper::V6(_) => Self::Offset(c.blob_index(), c.compressed_offset()),
ChunkWrapper::Ref(_) => unimplemented!("unsupport ChunkWrapper::Ref(c)"), ChunkWrapper::Ref(_) => Self::Digest(*c.id()),
} }
} }
} }
@ -285,7 +294,7 @@ impl BlobCompactor {
version, version,
states: vec![Default::default(); ori_blobs_number], states: vec![Default::default(); ori_blobs_number],
ori_blob_mgr, ori_blob_mgr,
new_blob_mgr: BlobManager::new(digester), new_blob_mgr: BlobManager::new(digester, false),
c2nodes: HashMap::new(), c2nodes: HashMap::new(),
b2nodes: HashMap::new(), b2nodes: HashMap::new(),
backend, backend,
@ -547,7 +556,8 @@ impl BlobCompactor {
info!("compactor: delete compacted blob {}", ori_blob_ids[idx]); info!("compactor: delete compacted blob {}", ori_blob_ids[idx]);
} }
State::Rebuild(cs) => { State::Rebuild(cs) => {
let blob_storage = ArtifactStorage::FileDir(PathBuf::from(dir)); let blob_storage =
ArtifactStorage::FileDir((PathBuf::from(dir), String::new()));
let mut blob_ctx = BlobContext::new( let mut blob_ctx = BlobContext::new(
String::from(""), String::from(""),
0, 0,
@ -557,6 +567,7 @@ impl BlobCompactor {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
blob_ctx.set_meta_info_enabled(self.is_v6()); blob_ctx.set_meta_info_enabled(self.is_v6());
let blob_idx = self.new_blob_mgr.alloc_index()?; let blob_idx = self.new_blob_mgr.alloc_index()?;
@ -609,14 +620,16 @@ impl BlobCompactor {
PathBuf::from(""), PathBuf::from(""),
Default::default(), Default::default(),
None, None,
None,
false, false,
Features::new(), Features::new(),
false, false,
Attributes::default(),
); );
let mut bootstrap_mgr = let mut bootstrap_mgr =
BootstrapManager::new(Some(ArtifactStorage::SingleFile(d_bootstrap)), None); BootstrapManager::new(Some(ArtifactStorage::SingleFile(d_bootstrap)), None);
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?; let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut ori_blob_mgr = BlobManager::new(rs.meta.get_digester()); let mut ori_blob_mgr = BlobManager::new(rs.meta.get_digester(), false);
ori_blob_mgr.extend_from_blob_table(&build_ctx, rs.superblock.get_blob_infos())?; ori_blob_mgr.extend_from_blob_table(&build_ctx, rs.superblock.get_blob_infos())?;
if let Some(dict) = chunk_dict { if let Some(dict) = chunk_dict {
ori_blob_mgr.set_chunk_dict(dict); ori_blob_mgr.set_chunk_dict(dict);
@ -655,7 +668,9 @@ impl BlobCompactor {
Ok(Some(BuildOutput::new( Ok(Some(BuildOutput::new(
&compactor.new_blob_mgr, &compactor.new_blob_mgr,
None,
&bootstrap_mgr.bootstrap_storage, &bootstrap_mgr.bootstrap_storage,
&None,
)?)) )?))
} }
} }
@ -701,8 +716,7 @@ mod tests {
pub uncompress_offset: u64, pub uncompress_offset: u64,
pub file_offset: u64, pub file_offset: u64,
pub index: u32, pub index: u32,
#[allow(unused)] pub crc32: u32,
pub reserved: u32,
} }
impl BlobChunkInfo for MockChunkInfo { impl BlobChunkInfo for MockChunkInfo {
@ -724,6 +738,18 @@ mod tests {
false false
} }
fn has_crc32(&self) -> bool {
self.flags.contains(BlobChunkFlags::HAS_CRC32)
}
fn crc32(&self) -> u32 {
if self.has_crc32() {
self.crc32
} else {
0
}
}
fn as_any(&self) -> &dyn Any { fn as_any(&self) -> &dyn Any {
self self
} }
@ -790,7 +816,6 @@ mod tests {
} }
#[test] #[test]
#[should_panic = "not implemented: unsupport ChunkWrapper::Ref(c)"]
fn test_chunk_key_from() { fn test_chunk_key_from() {
let cw = ChunkWrapper::new(RafsVersion::V5); let cw = ChunkWrapper::new(RafsVersion::V5);
matches!(ChunkKey::from(&cw), ChunkKey::Digest(_)); matches!(ChunkKey::from(&cw), ChunkKey::Digest(_));
@ -808,7 +833,7 @@ mod tests {
uncompress_offset: 0x1000, uncompress_offset: 0x1000,
file_offset: 0x1000, file_offset: 0x1000,
index: 1, index: 1,
reserved: 0, crc32: 0,
}) as Arc<dyn BlobChunkInfo>; }) as Arc<dyn BlobChunkInfo>;
let cw = ChunkWrapper::Ref(chunk); let cw = ChunkWrapper::Ref(chunk);
ChunkKey::from(&cw); ChunkKey::from(&cw);
@ -857,6 +882,7 @@ mod tests {
crypt::Algorithm::Aes256Xts, crypt::Algorithm::Aes256Xts,
Arc::new(cipher_object), Arc::new(cipher_object),
None, None,
false,
); );
let ori_blob_ids = ["1".to_owned(), "2".to_owned()]; let ori_blob_ids = ["1".to_owned(), "2".to_owned()];
let backend = Arc::new(MockBackend { let backend = Arc::new(MockBackend {
@ -969,7 +995,7 @@ mod tests {
HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config) HashChunkDict::from_commandline_arg(path, Arc::new(ConfigV2::default()), &rafs_config)
.unwrap(); .unwrap();
let mut ori_blob_mgr = BlobManager::new(digest::Algorithm::Sha256); let mut ori_blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
ori_blob_mgr.set_chunk_dict(dict); ori_blob_mgr.set_chunk_dict(dict);
let backend = Arc::new(MockBackend { let backend = Arc::new(MockBackend {
@ -984,6 +1010,7 @@ mod tests {
tmpfile.as_path().to_path_buf(), tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
)?; )?;
@ -1073,6 +1100,7 @@ mod tests {
tmpfile.as_path().to_path_buf(), tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
)?; )?;
@ -1084,6 +1112,7 @@ mod tests {
tmpfile2.as_path().to_path_buf(), tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
)?; )?;
@ -1100,9 +1129,9 @@ mod tests {
assert_eq!(compactor.b2nodes.len(), 2); assert_eq!(compactor.b2nodes.len(), 2);
let chunk_key1 = ChunkKey::from(&chunk1); let chunk_key1 = ChunkKey::from(&chunk1);
assert!(compactor.c2nodes.get(&chunk_key1).is_some()); assert!(compactor.c2nodes.contains_key(&chunk_key1));
assert_eq!(compactor.c2nodes.get(&chunk_key1).unwrap().len(), 1); assert_eq!(compactor.c2nodes.get(&chunk_key1).unwrap().len(), 1);
assert!(compactor.b2nodes.get(&chunk2.blob_index()).is_some()); assert!(compactor.b2nodes.contains_key(&chunk2.blob_index()));
assert_eq!( assert_eq!(
compactor.b2nodes.get(&chunk2.blob_index()).unwrap().len(), compactor.b2nodes.get(&chunk2.blob_index()).unwrap().len(),
2 2
@ -1131,9 +1160,11 @@ mod tests {
PathBuf::from(tmp_dir.as_path()), PathBuf::from(tmp_dir.as_path()),
Default::default(), Default::default(),
None, None,
None,
false, false,
Features::new(), Features::new(),
false, false,
Attributes::default(),
); );
let mut compactor = blob_compactor_load_and_dedup_chunks().unwrap(); let mut compactor = blob_compactor_load_and_dedup_chunks().unwrap();
@ -1147,6 +1178,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx2 = BlobContext::new( let blob_ctx2 = BlobContext::new(
"blob_id2".to_owned(), "blob_id2".to_owned(),
@ -1157,6 +1189,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx3 = BlobContext::new( let blob_ctx3 = BlobContext::new(
"blob_id3".to_owned(), "blob_id3".to_owned(),
@ -1167,6 +1200,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx4 = BlobContext::new( let blob_ctx4 = BlobContext::new(
"blob_id4".to_owned(), "blob_id4".to_owned(),
@ -1177,6 +1211,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx5 = BlobContext::new( let blob_ctx5 = BlobContext::new(
"blob_id5".to_owned(), "blob_id5".to_owned(),
@ -1187,6 +1222,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
compactor.ori_blob_mgr.add_blob(blob_ctx1); compactor.ori_blob_mgr.add_blob(blob_ctx1);
compactor.ori_blob_mgr.add_blob(blob_ctx2); compactor.ori_blob_mgr.add_blob(blob_ctx2);
@ -1228,9 +1264,11 @@ mod tests {
PathBuf::from(tmp_dir.as_path()), PathBuf::from(tmp_dir.as_path()),
Default::default(), Default::default(),
None, None,
None,
false, false,
Features::new(), Features::new(),
false, false,
Attributes::default(),
); );
let mut blob_ctx1 = BlobContext::new( let mut blob_ctx1 = BlobContext::new(
"blob_id1".to_owned(), "blob_id1".to_owned(),
@ -1241,6 +1279,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
blob_ctx1.compressed_blob_size = 2; blob_ctx1.compressed_blob_size = 2;
let mut blob_ctx2 = BlobContext::new( let mut blob_ctx2 = BlobContext::new(
@ -1252,6 +1291,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
blob_ctx2.compressed_blob_size = 0; blob_ctx2.compressed_blob_size = 0;
let blob_ctx3 = BlobContext::new( let blob_ctx3 = BlobContext::new(
@ -1263,6 +1303,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx4 = BlobContext::new( let blob_ctx4 = BlobContext::new(
"blob_id4".to_owned(), "blob_id4".to_owned(),
@ -1273,6 +1314,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
let blob_ctx5 = BlobContext::new( let blob_ctx5 = BlobContext::new(
"blob_id5".to_owned(), "blob_id5".to_owned(),
@ -1283,6 +1325,7 @@ mod tests {
build_ctx.cipher, build_ctx.cipher,
Default::default(), Default::default(),
None, None,
false,
); );
compactor.ori_blob_mgr.add_blob(blob_ctx1); compactor.ori_blob_mgr.add_blob(blob_ctx1);
compactor.ori_blob_mgr.add_blob(blob_ctx2); compactor.ori_blob_mgr.add_blob(blob_ctx2);

View File

@ -5,7 +5,7 @@
use std::borrow::Cow; use std::borrow::Cow;
use std::slice; use std::slice;
use anyhow::{Context, Result}; use anyhow::{bail, Context, Result};
use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE; use nydus_rafs::metadata::RAFS_MAX_CHUNK_SIZE;
use nydus_storage::device::BlobFeatures; use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{toc, BlobMetaChunkArray}; use nydus_storage::meta::{toc, BlobMetaChunkArray};
@ -18,6 +18,8 @@ use super::node::Node;
use crate::core::context::Artifact; use crate::core::context::Artifact;
use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature}; use crate::{BlobContext, BlobManager, BuildContext, ConversionType, Feature};
const VALID_BLOB_ID_LENGTH: usize = 64;
/// Generator for RAFS data blob. /// Generator for RAFS data blob.
pub(crate) struct Blob {} pub(crate) struct Blob {}
@ -94,7 +96,7 @@ impl Blob {
Ok(()) Ok(())
} }
fn finalize_blob_data( pub fn finalize_blob_data(
ctx: &BuildContext, ctx: &BuildContext,
blob_mgr: &mut BlobManager, blob_mgr: &mut BlobManager,
blob_writer: &mut dyn Artifact, blob_writer: &mut dyn Artifact,
@ -120,6 +122,9 @@ impl Blob {
&& (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc)) && (ctx.blob_inline_meta || ctx.features.is_enabled(Feature::BlobToc))
{ {
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() { if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
if blob_ctx.external {
return Ok(());
}
blob_ctx.write_tar_header( blob_ctx.write_tar_header(
blob_writer, blob_writer,
toc::TOC_ENTRY_BLOB_RAW, toc::TOC_ENTRY_BLOB_RAW,
@ -141,6 +146,20 @@ impl Blob {
} }
} }
// Check that every external blob carries a valid blob id.
if blob_mgr.external {
for (index, blob_ctx) in blob_mgr.get_blobs().iter().enumerate() {
if blob_ctx.blob_id.len() != VALID_BLOB_ID_LENGTH {
bail!(
"invalid blob id:{}, length:{}, index:{}",
blob_ctx.blob_id,
blob_ctx.blob_id.len(),
index
);
}
}
}
Ok(()) Ok(())
} }
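The length check above relies on blob ids being hex-encoded digests; a short sketch (with illustrative input data) of why 64 is the expected length:

use nydus_utils::digest::{self, RafsDigest};

// A RAFS blob id is normally the sha256 digest of the blob rendered as a hex
// string, which is exactly 64 characters -- the VALID_BLOB_ID_LENGTH above.
let blob_id = RafsDigest::from_buf(b"example blob data", digest::Algorithm::Sha256).to_string();
assert_eq!(blob_id.len(), VALID_BLOB_ID_LENGTH);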

View File

@ -75,7 +75,9 @@ impl Bootstrap {
let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256); let digest = RafsDigest::from_buf(&bootstrap_data, digest::Algorithm::Sha256);
let name = digest.to_string(); let name = digest.to_string();
bootstrap_ctx.writer.finalize(Some(name.clone()))?; bootstrap_ctx.writer.finalize(Some(name.clone()))?;
*bootstrap_storage = Some(ArtifactStorage::SingleFile(p.join(name))); let mut path = p.0.join(name);
path.set_extension(&p.1);
*bootstrap_storage = Some(ArtifactStorage::SingleFile(path));
Ok(()) Ok(())
} else { } else {
bootstrap_ctx.writer.finalize(Some(String::default())) bootstrap_ctx.writer.finalize(Some(String::default()))

View File

@ -19,7 +19,7 @@ use nydus_utils::digest::{self, RafsDigest};
use crate::Tree; use crate::Tree;
#[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)] #[derive(Debug, PartialEq, Eq, Hash, Ord, PartialOrd)]
pub struct DigestWithBlobIndex(pub RafsDigest, pub u32); pub struct DigestWithBlobIndex(pub RafsDigest, pub u32, pub Option<u32>);
/// Trait to manage chunk cache for chunk deduplication. /// Trait to manage chunk cache for chunk deduplication.
pub trait ChunkDict: Sync + Send + 'static { pub trait ChunkDict: Sync + Send + 'static {

View File

@ -13,11 +13,13 @@ use std::io::{BufWriter, Cursor, Read, Seek, Write};
use std::mem::size_of; use std::mem::size_of;
use std::os::unix::fs::FileTypeExt; use std::os::unix::fs::FileTypeExt;
use std::path::{Display, Path, PathBuf}; use std::path::{Display, Path, PathBuf};
use std::result::Result::Ok;
use std::str::FromStr; use std::str::FromStr;
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
use std::{fmt, fs}; use std::{fmt, fs};
use anyhow::{anyhow, Context, Error, Result}; use anyhow::{anyhow, Context, Error, Result};
use nydus_utils::crc32;
use nydus_utils::crypt::{self, Cipher, CipherContext}; use nydus_utils::crypt::{self, Cipher, CipherContext};
use sha2::{Digest, Sha256}; use sha2::{Digest, Sha256};
use tar::{EntryType, Header}; use tar::{EntryType, Header};
@ -44,6 +46,7 @@ use nydus_utils::digest::DigestData;
use nydus_utils::{compress, digest, div_round_up, round_down, try_round_up_4k, BufReaderInfo}; use nydus_utils::{compress, digest, div_round_up, round_down, try_round_up_4k, BufReaderInfo};
use super::node::ChunkSource; use super::node::ChunkSource;
use crate::attributes::Attributes;
use crate::core::tree::TreeNode; use crate::core::tree::TreeNode;
use crate::{ChunkDict, Feature, Features, HashChunkDict, Prefetch, PrefetchPolicy, WhiteoutSpec}; use crate::{ChunkDict, Feature, Features, HashChunkDict, Prefetch, PrefetchPolicy, WhiteoutSpec};
@ -138,7 +141,7 @@ pub enum ArtifactStorage {
// Won't rename user's specification // Won't rename user's specification
SingleFile(PathBuf), SingleFile(PathBuf),
// Will rename it from tmp file as user didn't specify a name. // Will rename it from tmp file as user didn't specify a name.
FileDir(PathBuf), FileDir((PathBuf, String)),
} }
impl ArtifactStorage { impl ArtifactStorage {
@ -146,7 +149,16 @@ impl ArtifactStorage {
pub fn display(&self) -> Display { pub fn display(&self) -> Display {
match self { match self {
ArtifactStorage::SingleFile(p) => p.display(), ArtifactStorage::SingleFile(p) => p.display(),
ArtifactStorage::FileDir(p) => p.display(), ArtifactStorage::FileDir(p) => p.0.display(),
}
}
pub fn add_suffix(&mut self, suffix: &str) {
match self {
ArtifactStorage::SingleFile(p) => {
p.set_extension(suffix);
}
ArtifactStorage::FileDir(p) => p.1 = String::from(suffix),
} }
} }
} }
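A brief sketch of how the reworked `ArtifactStorage` variants behave with `add_suffix`; the paths are placeholders:

use std::path::PathBuf;

// FileDir now carries (directory, suffix); add_suffix only records the suffix,
// which is applied later when the finalized artifact is renamed.
let mut dir_storage = ArtifactStorage::FileDir((PathBuf::from("/tmp/blobs"), String::new()));
dir_storage.add_suffix("external");
if let ArtifactStorage::FileDir((_, suffix)) = &dir_storage {
    assert_eq!(suffix, "external");
}

// SingleFile applies the suffix immediately as a file extension.
let mut single = ArtifactStorage::SingleFile(PathBuf::from("/tmp/bootstrap"));
single.add_suffix("external");
if let ArtifactStorage::SingleFile(p) = &single {
    assert_eq!(p, &PathBuf::from("/tmp/bootstrap.external"));
}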
@ -335,8 +347,8 @@ impl ArtifactWriter {
ArtifactStorage::FileDir(ref p) => { ArtifactStorage::FileDir(ref p) => {
// Better we can use open(2) O_TMPFILE, but for compatibility sake, we delay this job. // Better we can use open(2) O_TMPFILE, but for compatibility sake, we delay this job.
// TODO: Blob dir existence? // TODO: Blob dir existence?
let tmp = TempFile::new_in(p) let tmp = TempFile::new_in(&p.0)
.with_context(|| format!("failed to create temp file in {}", p.display()))?; .with_context(|| format!("failed to create temp file in {}", p.0.display()))?;
let tmp2 = tmp.as_file().try_clone()?; let tmp2 = tmp.as_file().try_clone()?;
let reader = OpenOptions::new() let reader = OpenOptions::new()
.read(true) .read(true)
@ -368,7 +380,10 @@ impl Artifact for ArtifactWriter {
if let Some(n) = name { if let Some(n) = name {
if let ArtifactStorage::FileDir(s) = &self.storage { if let ArtifactStorage::FileDir(s) = &self.storage {
let path = Path::new(s).join(n); let mut path = Path::new(&s.0).join(n);
if !s.1.is_empty() {
path.set_extension(&s.1);
}
if !path.exists() { if !path.exists() {
if let Some(tmp_file) = &self.tmp_file { if let Some(tmp_file) = &self.tmp_file {
rename(tmp_file.as_path(), &path).with_context(|| { rename(tmp_file.as_path(), &path).with_context(|| {
@ -459,6 +474,7 @@ impl BlobCacheGenerator {
} }
} }
#[derive(Clone)]
/// BlobContext is used to hold the blob information of a layer during build. /// BlobContext is used to hold the blob information of a layer during build.
pub struct BlobContext { pub struct BlobContext {
/// Blob id (user specified or sha256(blob)). /// Blob id (user specified or sha256(blob)).
@ -509,6 +525,9 @@ pub struct BlobContext {
/// Cipher to encrypt the RAFS blobs. /// Cipher to encrypt the RAFS blobs.
pub cipher_object: Arc<Cipher>, pub cipher_object: Arc<Cipher>,
pub cipher_ctx: Option<CipherContext>, pub cipher_ctx: Option<CipherContext>,
/// Whether the blob is from an external storage backend.
pub external: bool,
} }
impl BlobContext { impl BlobContext {
@ -523,6 +542,7 @@ impl BlobContext {
cipher: crypt::Algorithm, cipher: crypt::Algorithm,
cipher_object: Arc<Cipher>, cipher_object: Arc<Cipher>,
cipher_ctx: Option<CipherContext>, cipher_ctx: Option<CipherContext>,
external: bool,
) -> Self { ) -> Self {
let blob_meta_info = if features.contains(BlobFeatures::CHUNK_INFO_V2) { let blob_meta_info = if features.contains(BlobFeatures::CHUNK_INFO_V2) {
BlobMetaChunkArray::new_v2() BlobMetaChunkArray::new_v2()
@ -559,6 +579,8 @@ impl BlobContext {
entry_list: toc::TocEntryList::new(), entry_list: toc::TocEntryList::new(),
cipher_object, cipher_object,
cipher_ctx, cipher_ctx,
external,
}; };
blob_ctx blob_ctx
@ -600,6 +622,9 @@ impl BlobContext {
blob_ctx blob_ctx
.blob_meta_header .blob_meta_header
.set_is_chunkdict_generated(features.contains(BlobFeatures::IS_CHUNKDICT_GENERATED)); .set_is_chunkdict_generated(features.contains(BlobFeatures::IS_CHUNKDICT_GENERATED));
blob_ctx
.blob_meta_header
.set_external(features.contains(BlobFeatures::EXTERNAL));
blob_ctx blob_ctx
} }
@ -699,6 +724,7 @@ impl BlobContext {
cipher, cipher,
cipher_object, cipher_object,
cipher_ctx, cipher_ctx,
false,
); );
blob_ctx.blob_prefetch_size = blob.prefetch_size(); blob_ctx.blob_prefetch_size = blob.prefetch_size();
blob_ctx.chunk_count = blob.chunk_count(); blob_ctx.chunk_count = blob.chunk_count();
@ -782,6 +808,10 @@ impl BlobContext {
info.set_uncompressed_offset(chunk.uncompressed_offset()); info.set_uncompressed_offset(chunk.uncompressed_offset());
self.blob_meta_info.add_v2_info(info); self.blob_meta_info.add_v2_info(info);
} else { } else {
let mut data: u64 = 0;
if chunk.has_crc32() {
data = chunk.crc32() as u64;
}
self.blob_meta_info.add_v2( self.blob_meta_info.add_v2(
chunk.compressed_offset(), chunk.compressed_offset(),
chunk.compressed_size(), chunk.compressed_size(),
@ -789,8 +819,9 @@ impl BlobContext {
chunk.uncompressed_size(), chunk.uncompressed_size(),
chunk.is_compressed(), chunk.is_compressed(),
chunk.is_encrypted(), chunk.is_encrypted(),
chunk.has_crc32(),
chunk.is_batch(), chunk.is_batch(),
0, data,
); );
} }
self.blob_chunk_digest.push(chunk.id().data); self.blob_chunk_digest.push(chunk.id().data);
@ -817,7 +848,7 @@ impl BlobContext {
} }
/// Get blob id if the blob has some chunks. /// Get blob id if the blob has some chunks.
pub fn blob_id(&mut self) -> Option<String> { pub fn blob_id(&self) -> Option<String> {
if self.uncompressed_blob_size > 0 { if self.uncompressed_blob_size > 0 {
Some(self.blob_id.to_string()) Some(self.blob_id.to_string())
} else { } else {
@ -885,20 +916,28 @@ pub struct BlobManager {
/// Used for chunk data de-duplication between layers (with `--parent-bootstrap`) /// Used for chunk data de-duplication between layers (with `--parent-bootstrap`)
/// or within layer (with `--inline-bootstrap`). /// or within layer (with `--inline-bootstrap`).
pub(crate) layered_chunk_dict: HashChunkDict, pub(crate) layered_chunk_dict: HashChunkDict,
// Whether the managed blobs are from an external storage backend.
pub external: bool,
} }
impl BlobManager { impl BlobManager {
/// Create a new instance of [BlobManager]. /// Create a new instance of [BlobManager].
pub fn new(digester: digest::Algorithm) -> Self { pub fn new(digester: digest::Algorithm, external: bool) -> Self {
Self { Self {
blobs: Vec::new(), blobs: Vec::new(),
current_blob_index: None, current_blob_index: None,
global_chunk_dict: Arc::new(()), global_chunk_dict: Arc::new(()),
layered_chunk_dict: HashChunkDict::new(digester), layered_chunk_dict: HashChunkDict::new(digester),
external,
} }
} }
fn new_blob_ctx(ctx: &BuildContext) -> Result<BlobContext> { /// Set current blob index
pub fn set_current_blob_index(&mut self, index: usize) {
self.current_blob_index = Some(index as u32)
}
pub fn new_blob_ctx(&self, ctx: &BuildContext) -> Result<BlobContext> {
let (cipher_object, cipher_ctx) = match ctx.cipher { let (cipher_object, cipher_ctx) = match ctx.cipher {
crypt::Algorithm::None => (Default::default(), None), crypt::Algorithm::None => (Default::default(), None),
crypt::Algorithm::Aes128Xts => { crypt::Algorithm::Aes128Xts => {
@ -917,15 +956,22 @@ impl BlobManager {
))) )))
} }
}; };
let mut blob_features = ctx.blob_features;
let mut compressor = ctx.compressor;
if self.external {
blob_features.insert(BlobFeatures::EXTERNAL);
compressor = compress::Algorithm::None;
}
let mut blob_ctx = BlobContext::new( let mut blob_ctx = BlobContext::new(
ctx.blob_id.clone(), ctx.blob_id.clone(),
ctx.blob_offset, ctx.blob_offset,
ctx.blob_features, blob_features,
ctx.compressor, compressor,
ctx.digester, ctx.digester,
ctx.cipher, ctx.cipher,
Arc::new(cipher_object), Arc::new(cipher_object),
cipher_ctx, cipher_ctx,
self.external,
); );
blob_ctx.set_chunk_size(ctx.chunk_size); blob_ctx.set_chunk_size(ctx.chunk_size);
blob_ctx.set_meta_info_enabled( blob_ctx.set_meta_info_enabled(
@ -941,7 +987,7 @@ impl BlobManager {
ctx: &BuildContext, ctx: &BuildContext,
) -> Result<(u32, &mut BlobContext)> { ) -> Result<(u32, &mut BlobContext)> {
if self.current_blob_index.is_none() { if self.current_blob_index.is_none() {
let blob_ctx = Self::new_blob_ctx(ctx)?; let blob_ctx = self.new_blob_ctx(ctx)?;
self.current_blob_index = Some(self.alloc_index()?); self.current_blob_index = Some(self.alloc_index()?);
self.add_blob(blob_ctx); self.add_blob(blob_ctx);
} }
@ -949,6 +995,21 @@ impl BlobManager {
Ok(self.get_current_blob().unwrap()) Ok(self.get_current_blob().unwrap())
} }
pub fn get_or_create_blob_by_idx(
&mut self,
ctx: &BuildContext,
blob_idx: u32,
) -> Result<(u32, &mut BlobContext)> {
let blob_idx = blob_idx as usize;
if blob_idx >= self.blobs.len() {
for _ in self.blobs.len()..=blob_idx {
let blob_ctx = self.new_blob_ctx(ctx)?;
self.add_blob(blob_ctx);
}
}
Ok((blob_idx as u32, &mut self.blobs[blob_idx as usize]))
}
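A usage sketch for the index-based lookup above; `BuildContext::default()` and the index value are placeholders for illustration:

fn blob_by_idx_sketch() -> Result<()> {
    let ctx = BuildContext::default();
    // An external blob manager: newly created contexts inherit the external flag.
    let mut mgr = BlobManager::new(digest::Algorithm::Sha256, true);
    // Requesting index 2 creates blob contexts for indices 0, 1 and 2 on demand.
    let (idx, blob) = mgr.get_or_create_blob_by_idx(&ctx, 2)?;
    assert_eq!(idx, 2);
    assert!(blob.external);
    Ok(())
}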
/// Get the current blob object. /// Get the current blob object.
pub fn get_current_blob(&mut self) -> Option<(u32, &mut BlobContext)> { pub fn get_current_blob(&mut self) -> Option<(u32, &mut BlobContext)> {
if let Some(idx) = self.current_blob_index { if let Some(idx) = self.current_blob_index {
@ -964,8 +1025,9 @@ impl BlobManager {
ctx: &BuildContext, ctx: &BuildContext,
id: &str, id: &str,
) -> Result<(u32, &mut BlobContext)> { ) -> Result<(u32, &mut BlobContext)> {
let blob_mgr = Self::new(ctx.digester, false);
if self.get_blob_idx_by_id(id).is_none() { if self.get_blob_idx_by_id(id).is_none() {
let blob_ctx = Self::new_blob_ctx(ctx)?; let blob_ctx = blob_mgr.new_blob_ctx(ctx)?;
self.current_blob_index = Some(self.alloc_index()?); self.current_blob_index = Some(self.alloc_index()?);
self.add_blob(blob_ctx); self.add_blob(blob_ctx);
} else { } else {
@ -1253,6 +1315,7 @@ impl BootstrapContext {
} }
/// BootstrapManager is used to hold the parent bootstrap reader and create new bootstrap context. /// BootstrapManager is used to hold the parent bootstrap reader and create new bootstrap context.
#[derive(Clone)]
pub struct BootstrapManager { pub struct BootstrapManager {
pub(crate) f_parent_path: Option<PathBuf>, pub(crate) f_parent_path: Option<PathBuf>,
pub(crate) bootstrap_storage: Option<ArtifactStorage>, pub(crate) bootstrap_storage: Option<ArtifactStorage>,
@ -1289,6 +1352,7 @@ pub struct BuildContext {
pub digester: digest::Algorithm, pub digester: digest::Algorithm,
/// Blob encryption algorithm flag. /// Blob encryption algorithm flag.
pub cipher: crypt::Algorithm, pub cipher: crypt::Algorithm,
pub crc32_algorithm: crc32::Algorithm,
/// Save host uid gid in each inode. /// Save host uid gid in each inode.
pub explicit_uidgid: bool, pub explicit_uidgid: bool,
/// whiteout spec: overlayfs or oci /// whiteout spec: overlayfs or oci
@ -1314,6 +1378,7 @@ pub struct BuildContext {
/// Storage writing blob to single file or a directory. /// Storage writing blob to single file or a directory.
pub blob_storage: Option<ArtifactStorage>, pub blob_storage: Option<ArtifactStorage>,
pub external_blob_storage: Option<ArtifactStorage>,
pub blob_zran_generator: Option<Mutex<ZranContextGenerator<File>>>, pub blob_zran_generator: Option<Mutex<ZranContextGenerator<File>>>,
pub blob_batch_generator: Option<Mutex<BatchContextGenerator>>, pub blob_batch_generator: Option<Mutex<BatchContextGenerator>>,
pub blob_tar_reader: Option<BufReaderInfo<File>>, pub blob_tar_reader: Option<BufReaderInfo<File>>,
@ -1327,6 +1392,8 @@ pub struct BuildContext {
/// Whether is chunkdict. /// Whether is chunkdict.
pub is_chunkdict_generated: bool, pub is_chunkdict_generated: bool,
/// Nydus attributes for different build behavior.
pub attributes: Attributes,
} }
impl BuildContext { impl BuildContext {
@ -1343,9 +1410,11 @@ impl BuildContext {
source_path: PathBuf, source_path: PathBuf,
prefetch: Prefetch, prefetch: Prefetch,
blob_storage: Option<ArtifactStorage>, blob_storage: Option<ArtifactStorage>,
external_blob_storage: Option<ArtifactStorage>,
blob_inline_meta: bool, blob_inline_meta: bool,
features: Features, features: Features,
encrypt: bool, encrypt: bool,
attributes: Attributes,
) -> Self { ) -> Self {
// It's a flag for images built with new nydus-image 2.2 and newer. // It's a flag for images built with new nydus-image 2.2 and newer.
let mut blob_features = BlobFeatures::CAP_TAR_TOC; let mut blob_features = BlobFeatures::CAP_TAR_TOC;
@ -1366,6 +1435,8 @@ impl BuildContext {
} else { } else {
crypt::Algorithm::None crypt::Algorithm::None
}; };
let crc32_algorithm = crc32::Algorithm::Crc32Iscsi;
BuildContext { BuildContext {
blob_id, blob_id,
aligned_chunk, aligned_chunk,
@ -1373,6 +1444,7 @@ impl BuildContext {
compressor, compressor,
digester, digester,
cipher, cipher,
crc32_algorithm,
explicit_uidgid, explicit_uidgid,
whiteout_spec, whiteout_spec,
@ -1385,6 +1457,7 @@ impl BuildContext {
prefetch, prefetch,
blob_storage, blob_storage,
external_blob_storage,
blob_zran_generator: None, blob_zran_generator: None,
blob_batch_generator: None, blob_batch_generator: None,
blob_tar_reader: None, blob_tar_reader: None,
@ -1396,6 +1469,8 @@ impl BuildContext {
configuration: Arc::new(ConfigV2::default()), configuration: Arc::new(ConfigV2::default()),
blob_cache_generator: None, blob_cache_generator: None,
is_chunkdict_generated: false, is_chunkdict_generated: false,
attributes,
} }
} }
@ -1429,6 +1504,7 @@ impl Default for BuildContext {
compressor: compress::Algorithm::default(), compressor: compress::Algorithm::default(),
digester: digest::Algorithm::default(), digester: digest::Algorithm::default(),
cipher: crypt::Algorithm::None, cipher: crypt::Algorithm::None,
crc32_algorithm: crc32::Algorithm::default(),
explicit_uidgid: true, explicit_uidgid: true,
whiteout_spec: WhiteoutSpec::default(), whiteout_spec: WhiteoutSpec::default(),
@ -1441,6 +1517,7 @@ impl Default for BuildContext {
prefetch: Prefetch::default(), prefetch: Prefetch::default(),
blob_storage: None, blob_storage: None,
external_blob_storage: None,
blob_zran_generator: None, blob_zran_generator: None,
blob_batch_generator: None, blob_batch_generator: None,
blob_tar_reader: None, blob_tar_reader: None,
@ -1451,6 +1528,8 @@ impl Default for BuildContext {
configuration: Arc::new(ConfigV2::default()), configuration: Arc::new(ConfigV2::default()),
blob_cache_generator: None, blob_cache_generator: None,
is_chunkdict_generated: false, is_chunkdict_generated: false,
attributes: Attributes::default(),
} }
} }
} }
@ -1462,8 +1541,12 @@ pub struct BuildOutput {
pub blobs: Vec<String>, pub blobs: Vec<String>,
/// The size of output blob in this build. /// The size of output blob in this build.
pub blob_size: Option<u64>, pub blob_size: Option<u64>,
/// External blob ids in the blob table of external bootstrap.
pub external_blobs: Vec<String>,
/// File path for the metadata blob. /// File path for the metadata blob.
pub bootstrap_path: Option<String>, pub bootstrap_path: Option<String>,
/// File path for the external metadata blob.
pub external_bootstrap_path: Option<String>,
} }
impl fmt::Display for BuildOutput { impl fmt::Display for BuildOutput {
@ -1478,7 +1561,17 @@ impl fmt::Display for BuildOutput {
"data blob size: 0x{:x}", "data blob size: 0x{:x}",
self.blob_size.unwrap_or_default() self.blob_size.unwrap_or_default()
)?; )?;
write!(f, "data blobs: {:?}", self.blobs)?; if self.external_blobs.is_empty() {
write!(f, "data blobs: {:?}", self.blobs)?;
} else {
writeln!(f, "data blobs: {:?}", self.blobs)?;
writeln!(
f,
"external meta blob path: {}",
self.external_bootstrap_path.as_deref().unwrap_or("<none>")
)?;
write!(f, "external data blobs: {:?}", self.external_blobs)?;
}
Ok(()) Ok(())
} }
} }
@ -1487,20 +1580,28 @@ impl BuildOutput {
/// Create a new instance of [BuildOutput]. /// Create a new instance of [BuildOutput].
pub fn new( pub fn new(
blob_mgr: &BlobManager, blob_mgr: &BlobManager,
external_blob_mgr: Option<&BlobManager>,
bootstrap_storage: &Option<ArtifactStorage>, bootstrap_storage: &Option<ArtifactStorage>,
external_bootstrap_storage: &Option<ArtifactStorage>,
) -> Result<BuildOutput> { ) -> Result<BuildOutput> {
let blobs = blob_mgr.get_blob_ids(); let blobs = blob_mgr.get_blob_ids();
let blob_size = blob_mgr.get_last_blob().map(|b| b.compressed_blob_size); let blob_size = blob_mgr.get_last_blob().map(|b| b.compressed_blob_size);
let bootstrap_path = if let Some(ArtifactStorage::SingleFile(p)) = bootstrap_storage { let bootstrap_path = bootstrap_storage
Some(p.display().to_string()) .as_ref()
} else { .map(|stor| stor.display().to_string());
None let external_bootstrap_path = external_bootstrap_storage
}; .as_ref()
.map(|stor| stor.display().to_string());
let external_blobs = external_blob_mgr
.map(|mgr| mgr.get_blob_ids())
.unwrap_or_default();
Ok(Self { Ok(Self {
blobs, blobs,
external_blobs,
blob_size, blob_size,
bootstrap_path, bootstrap_path,
external_bootstrap_path,
}) })
} }
} }
@ -1553,6 +1654,7 @@ mod tests {
registry: None, registry: None,
http_proxy: None, http_proxy: None,
}), }),
external_backends: Vec::new(),
id: "id".to_owned(), id: "id".to_owned(),
cache: None, cache: None,
rafs: None, rafs: None,
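Note on the new get_or_create_blob_by_idx above: it grows the blob list until the requested index exists, so external chunks can reference blob slots that have not been allocated sequentially. A minimal, self-contained sketch of that idea, using stand-in types rather than the real BlobContext/BlobManager:

// A minimal sketch, assuming a simplified BlobCtx stand-in for BlobContext.
#[derive(Debug, Default)]
struct BlobCtx {
    blob_id: String,
}

#[derive(Default)]
struct BlobList {
    blobs: Vec<BlobCtx>,
}

impl BlobList {
    // Grow the list until `idx` exists, then return it mutably,
    // mirroring the index-based lookup above.
    fn get_or_create_by_idx(&mut self, idx: u32) -> &mut BlobCtx {
        let idx = idx as usize;
        while self.blobs.len() <= idx {
            self.blobs.push(BlobCtx::default());
        }
        &mut self.blobs[idx]
    }
}

fn main() {
    let mut mgr = BlobList::default();
    mgr.get_or_create_by_idx(2).blob_id = "external-blob-2".into();
    // Indexes 0 and 1 were created implicitly so index 2 is addressable.
    assert_eq!(mgr.blobs.len(), 3);
    println!("{:?}", mgr.blobs);
}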

View File

@ -25,8 +25,9 @@ use nydus_rafs::metadata::{Inode, RafsVersion};
use nydus_storage::device::BlobFeatures; use nydus_storage::device::BlobFeatures;
use nydus_storage::meta::{BlobChunkInfoV2Ondisk, BlobMetaChunkInfo}; use nydus_storage::meta::{BlobChunkInfoV2Ondisk, BlobMetaChunkInfo};
use nydus_utils::digest::{DigestHasher, RafsDigest}; use nydus_utils::digest::{DigestHasher, RafsDigest};
use nydus_utils::{compress, crypt}; use nydus_utils::{compress, crc32, crypt};
use nydus_utils::{div_round_up, event_tracer, root_tracer, try_round_up_4k, ByteSize}; use nydus_utils::{div_round_up, event_tracer, root_tracer, try_round_up_4k, ByteSize};
use parse_size::parse_size;
use sha2::digest::Digest; use sha2::digest::Digest;
use crate::{BlobContext, BlobManager, BuildContext, ChunkDict, ConversionType, Overlay}; use crate::{BlobContext, BlobManager, BuildContext, ChunkDict, ConversionType, Overlay};
@ -34,7 +35,7 @@ use crate::{BlobContext, BlobManager, BuildContext, ChunkDict, ConversionType, O
use super::context::Artifact; use super::context::Artifact;
/// Filesystem root path for Unix OSs. /// Filesystem root path for Unix OSs.
const ROOT_PATH_NAME: &[u8] = &[b'/']; const ROOT_PATH_NAME: &[u8] = b"/";
/// Source of chunk data: chunk dictionary, parent filesystem or builder. /// Source of chunk data: chunk dictionary, parent filesystem or builder.
#[derive(Clone, Hash, PartialEq, Eq)] #[derive(Clone, Hash, PartialEq, Eq)]
@ -275,6 +276,88 @@ impl Node {
None None
}; };
if blob_mgr.external {
let external_values = ctx.attributes.get_values(self.target()).unwrap();
let external_blob_index = external_values
.get("blob_index")
.and_then(|v| v.parse::<u32>().ok())
.ok_or_else(|| anyhow!("failed to parse blob_index"))?;
let external_blob_id = external_values
.get("blob_id")
.ok_or_else(|| anyhow!("failed to parse blob_id"))?;
let external_chunk_size = external_values
.get("chunk_size")
.and_then(|v| parse_size(v).ok())
.ok_or_else(|| anyhow!("failed to parse chunk_size"))?;
let mut external_compressed_offset = external_values
.get("chunk_0_compressed_offset")
.and_then(|v| v.parse::<u64>().ok())
.ok_or_else(|| anyhow!("failed to parse chunk_0_compressed_offset"))?;
let external_compressed_size = external_values
.get("compressed_size")
.and_then(|v| v.parse::<u64>().ok())
.ok_or_else(|| anyhow!("failed to parse compressed_size"))?;
let (_, external_blob_ctx) =
blob_mgr.get_or_create_blob_by_idx(ctx, external_blob_index)?;
external_blob_ctx.blob_id = external_blob_id.to_string();
external_blob_ctx.compressed_blob_size = external_compressed_size;
external_blob_ctx.uncompressed_blob_size = external_compressed_size;
let chunk_count = self
.chunk_count(external_chunk_size as u64)
.with_context(|| {
format!("failed to get chunk count for {}", self.path().display())
})?;
self.inode.set_child_count(chunk_count);
info!(
"target {:?}, file_size {}, blob_index {}, blob_id {}, chunk_size {}, chunk_count {}",
self.target(),
self.inode.size(),
external_blob_index,
external_blob_id,
external_chunk_size,
chunk_count
);
for i in 0..self.inode.child_count() {
let mut chunk = self.inode.create_chunk();
let file_offset = i as u64 * external_chunk_size as u64;
let compressed_size = if i == self.inode.child_count() - 1 {
self.inode.size() - (external_chunk_size * i as u64)
} else {
external_chunk_size
} as u32;
chunk.set_blob_index(external_blob_index);
chunk.set_index(external_blob_ctx.alloc_chunk_index()?);
chunk.set_compressed_offset(external_compressed_offset);
chunk.set_compressed_size(compressed_size);
chunk.set_uncompressed_offset(external_compressed_offset);
chunk.set_uncompressed_size(compressed_size);
chunk.set_compressed(false);
chunk.set_file_offset(file_offset);
external_compressed_offset += compressed_size as u64;
external_blob_ctx.chunk_size = external_chunk_size as u32;
if ctx.crc32_algorithm != crc32::Algorithm::None {
self.set_external_chunk_crc32(ctx, &mut chunk, i)?
}
if let Some(h) = inode_hasher.as_mut() {
h.digest_update(chunk.id().as_ref());
}
self.chunks.push(NodeChunk {
source: ChunkSource::Build,
inner: Arc::new(chunk),
});
}
if let Some(h) = inode_hasher {
self.inode.set_digest(h.digest_finalize());
}
return Ok(0);
}
// `child_count` of regular file is reused as `chunk_count`. // `child_count` of regular file is reused as `chunk_count`.
for i in 0..self.inode.child_count() { for i in 0..self.inode.child_count() {
let chunk_size = ctx.chunk_size; let chunk_size = ctx.chunk_size;
@ -286,13 +369,14 @@ impl Node {
}; };
let chunk_data = &mut data_buf[0..uncompressed_size as usize]; let chunk_data = &mut data_buf[0..uncompressed_size as usize];
let (mut chunk, mut chunk_info) = self.read_file_chunk(ctx, reader, chunk_data)?; let (mut chunk, mut chunk_info) =
self.read_file_chunk(ctx, reader, chunk_data, blob_mgr.external)?;
if let Some(h) = inode_hasher.as_mut() { if let Some(h) = inode_hasher.as_mut() {
h.digest_update(chunk.id().as_ref()); h.digest_update(chunk.id().as_ref());
} }
// No need to perform chunk deduplication for tar-tarfs case. // No need to perform chunk deduplication for tar-tarfs/external blob case.
if ctx.conversion_type != ConversionType::TarToTarfs { if ctx.conversion_type != ConversionType::TarToTarfs && !blob_mgr.external {
chunk = match self.deduplicate_chunk( chunk = match self.deduplicate_chunk(
ctx, ctx,
blob_mgr, blob_mgr,
@ -347,20 +431,43 @@ impl Node {
Ok(blob_size) Ok(blob_size)
} }
fn set_external_chunk_crc32(
&self,
ctx: &BuildContext,
chunk: &mut ChunkWrapper,
i: u32,
) -> Result<()> {
if let Some(crcs) = ctx.attributes.get_crcs(self.target()) {
if (i as usize) >= crcs.len() {
return Err(anyhow!(
"invalid crc index {} for file {}",
i,
self.target().display()
));
}
chunk.set_has_crc32(true);
chunk.set_crc32(crcs[i as usize]);
}
Ok(())
}
fn read_file_chunk<R: Read>( fn read_file_chunk<R: Read>(
&self, &self,
ctx: &BuildContext, ctx: &BuildContext,
reader: &mut R, reader: &mut R,
buf: &mut [u8], buf: &mut [u8],
external: bool,
) -> Result<(ChunkWrapper, Option<BlobChunkInfoV2Ondisk>)> { ) -> Result<(ChunkWrapper, Option<BlobChunkInfoV2Ondisk>)> {
let mut chunk = self.inode.create_chunk(); let mut chunk = self.inode.create_chunk();
let mut chunk_info = None; let mut chunk_info = None;
if let Some(ref zran) = ctx.blob_zran_generator { if let Some(ref zran) = ctx.blob_zran_generator {
let mut zran = zran.lock().unwrap(); let mut zran = zran.lock().unwrap();
zran.start_chunk(ctx.chunk_size as u64)?; zran.start_chunk(ctx.chunk_size as u64)?;
reader if !external {
.read_exact(buf) reader
.with_context(|| format!("failed to read node file {:?}", self.path()))?; .read_exact(buf)
.with_context(|| format!("failed to read node file {:?}", self.path()))?;
}
let info = zran.finish_chunk()?; let info = zran.finish_chunk()?;
chunk.set_compressed_offset(info.compressed_offset()); chunk.set_compressed_offset(info.compressed_offset());
chunk.set_compressed_size(info.compressed_size()); chunk.set_compressed_size(info.compressed_size());
@ -372,21 +479,27 @@ impl Node {
chunk.set_compressed_offset(pos); chunk.set_compressed_offset(pos);
chunk.set_compressed_size(buf.len() as u32); chunk.set_compressed_size(buf.len() as u32);
chunk.set_compressed(false); chunk.set_compressed(false);
reader if !external {
.read_exact(buf) reader
.with_context(|| format!("failed to read node file {:?}", self.path()))?; .read_exact(buf)
} else { .with_context(|| format!("failed to read node file {:?}", self.path()))?;
}
} else if !external {
reader reader
.read_exact(buf) .read_exact(buf)
.with_context(|| format!("failed to read node file {:?}", self.path()))?; .with_context(|| format!("failed to read node file {:?}", self.path()))?;
} }
// For tar-tarfs case, no need to compute chunk id. // For tar-tarfs case, no need to compute chunk id.
if ctx.conversion_type != ConversionType::TarToTarfs { if ctx.conversion_type != ConversionType::TarToTarfs && !external {
chunk.set_id(RafsDigest::from_buf(buf, ctx.digester)); chunk.set_id(RafsDigest::from_buf(buf, ctx.digester));
if ctx.crc32_algorithm != crc32::Algorithm::None {
chunk.set_has_crc32(true);
chunk.set_crc32(crc32::Crc32::new(ctx.crc32_algorithm).from_buf(buf));
}
} }
if ctx.cipher != crypt::Algorithm::None { if ctx.cipher != crypt::Algorithm::None && !external {
chunk.set_encrypted(true); chunk.set_encrypted(true);
} }
@ -495,12 +608,12 @@ impl Node {
} }
pub fn write_chunk_data( pub fn write_chunk_data(
ctx: &BuildContext, _ctx: &BuildContext,
blob_ctx: &mut BlobContext, blob_ctx: &mut BlobContext,
blob_writer: &mut dyn Artifact, blob_writer: &mut dyn Artifact,
chunk_data: &[u8], chunk_data: &[u8],
) -> Result<(u64, u32, bool)> { ) -> Result<(u64, u32, bool)> {
let (compressed, is_compressed) = compress::compress(chunk_data, ctx.compressor) let (compressed, is_compressed) = compress::compress(chunk_data, blob_ctx.blob_compressor)
.with_context(|| "failed to compress node file".to_string())?; .with_context(|| "failed to compress node file".to_string())?;
let encrypted = crypt::encrypt_with_context( let encrypted = crypt::encrypt_with_context(
&compressed, &compressed,
@ -510,10 +623,14 @@ impl Node {
)?; )?;
let compressed_size = encrypted.len() as u32; let compressed_size = encrypted.len() as u32;
let pre_compressed_offset = blob_ctx.current_compressed_offset; let pre_compressed_offset = blob_ctx.current_compressed_offset;
blob_writer if !blob_ctx.external {
.write_all(&encrypted) // For the external blob, both compressor and encrypter should
.context("failed to write blob")?; // be none, and we don't write data into blob file.
blob_ctx.blob_hash.update(&encrypted); blob_writer
.write_all(&encrypted)
.context("failed to write blob")?;
blob_ctx.blob_hash.update(&encrypted);
}
blob_ctx.current_compressed_offset += compressed_size as u64; blob_ctx.current_compressed_offset += compressed_size as u64;
blob_ctx.compressed_blob_size += compressed_size as u64; blob_ctx.compressed_blob_size += compressed_size as u64;
@ -588,6 +705,7 @@ impl Node {
// build node object from a filesystem object. // build node object from a filesystem object.
impl Node { impl Node {
#[allow(clippy::too_many_arguments)]
/// Create a new instance of [Node] from a filesystem object. /// Create a new instance of [Node] from a filesystem object.
pub fn from_fs_object( pub fn from_fs_object(
version: RafsVersion, version: RafsVersion,
@ -595,6 +713,7 @@ impl Node {
path: PathBuf, path: PathBuf,
overlay: Overlay, overlay: Overlay,
chunk_size: u32, chunk_size: u32,
file_size: u64,
explicit_uidgid: bool, explicit_uidgid: bool,
v6_force_extended_inode: bool, v6_force_extended_inode: bool,
) -> Result<Node> { ) -> Result<Node> {
@ -627,7 +746,7 @@ impl Node {
v6_dirents: Vec::new(), v6_dirents: Vec::new(),
}; };
node.build_inode(chunk_size) node.build_inode(chunk_size, file_size)
.context("failed to build Node from fs object")?; .context("failed to build Node from fs object")?;
if version.is_v6() { if version.is_v6() {
node.v6_set_inode_compact(); node.v6_set_inode_compact();
@ -667,7 +786,7 @@ impl Node {
Ok(()) Ok(())
} }
fn build_inode_stat(&mut self) -> Result<()> { fn build_inode_stat(&mut self, file_size: u64) -> Result<()> {
let meta = self let meta = self
.meta() .meta()
.with_context(|| format!("failed to get metadata of {}", self.path().display()))?; .with_context(|| format!("failed to get metadata of {}", self.path().display()))?;
@ -702,7 +821,13 @@ impl Node {
// directory entries, so let's ignore the value provided by source filesystem and // directory entries, so let's ignore the value provided by source filesystem and
// calculate it later by ourself. // calculate it later by ourself.
if !self.is_dir() { if !self.is_dir() {
self.inode.set_size(meta.st_size()); // If the file size is not 0, and the meta size is 0, it means the file is an
// external dummy file. We need to set the size to file_size.
if file_size != 0 && meta.st_size() == 0 {
self.inode.set_size(file_size);
} else {
self.inode.set_size(meta.st_size());
}
self.v5_set_inode_blocks(); self.v5_set_inode_blocks();
} }
self.info = Arc::new(info); self.info = Arc::new(info);
@ -710,7 +835,7 @@ impl Node {
Ok(()) Ok(())
} }
fn build_inode(&mut self, chunk_size: u32) -> Result<()> { fn build_inode(&mut self, chunk_size: u32, file_size: u64) -> Result<()> {
let size = self.name().byte_size(); let size = self.name().byte_size();
if size > u16::MAX as usize { if size > u16::MAX as usize {
bail!("file name length 0x{:x} is too big", size,); bail!("file name length 0x{:x} is too big", size,);
@ -720,7 +845,7 @@ impl Node {
// NOTE: Always retrieve xattr before attr so that we can know the size of xattr pairs. // NOTE: Always retrieve xattr before attr so that we can know the size of xattr pairs.
self.build_inode_xattr() self.build_inode_xattr()
.with_context(|| format!("failed to get xattr for {}", self.path().display()))?; .with_context(|| format!("failed to get xattr for {}", self.path().display()))?;
self.build_inode_stat() self.build_inode_stat(file_size)
.with_context(|| format!("failed to build inode {}", self.path().display()))?; .with_context(|| format!("failed to build inode {}", self.path().display()))?;
if self.is_reg() { if self.is_reg() {
@ -895,12 +1020,12 @@ impl Node {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use std::io::BufReader; use std::{collections::HashMap, io::BufReader};
use nydus_utils::{digest, BufReaderInfo}; use nydus_utils::{digest, BufReaderInfo};
use vmm_sys_util::tempfile::TempFile; use vmm_sys_util::tempfile::TempFile;
use crate::{ArtifactWriter, BlobCacheGenerator, HashChunkDict}; use crate::{attributes::Attributes, ArtifactWriter, BlobCacheGenerator, HashChunkDict};
use super::*; use super::*;
@ -972,7 +1097,7 @@ mod tests {
.unwrap(), .unwrap(),
); );
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256); let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut chunk_dict = HashChunkDict::new(digest::Algorithm::Sha256); let mut chunk_dict = HashChunkDict::new(digest::Algorithm::Sha256);
let mut chunk_wrapper = ChunkWrapper::new(RafsVersion::V5); let mut chunk_wrapper = ChunkWrapper::new(RafsVersion::V5);
chunk_wrapper.set_id(RafsDigest { chunk_wrapper.set_id(RafsDigest {
@ -1108,4 +1233,43 @@ mod tests {
node.remove_xattr(OsStr::new("system.posix_acl_default.key")); node.remove_xattr(OsStr::new("system.posix_acl_default.key"));
assert!(!node.inode.has_xattr()); assert!(!node.inode.has_xattr());
} }
#[test]
fn test_set_external_chunk_crc32() {
let mut ctx = BuildContext {
crc32_algorithm: crc32::Algorithm::Crc32Iscsi,
attributes: Attributes {
crcs: HashMap::new(),
..Default::default()
},
..Default::default()
};
let target = PathBuf::from("/test_file");
ctx.attributes
.crcs
.insert(target.clone(), vec![0x12345678, 0x87654321]);
let node = Node::new(
InodeWrapper::new(RafsVersion::V5),
NodeInfo {
path: target.clone(),
target: target.clone(),
..Default::default()
},
1,
);
let mut chunk = node.inode.create_chunk();
print!("target: {}", node.target().display());
let result = node.set_external_chunk_crc32(&ctx, &mut chunk, 1);
assert!(result.is_ok());
assert_eq!(chunk.crc32(), 0x87654321);
assert!(chunk.has_crc32());
// test invalid crc index
let result = node.set_external_chunk_crc32(&ctx, &mut chunk, 2);
assert!(result.is_err());
let err = result.unwrap_err().to_string();
assert!(err.contains("invalid crc index 2 for file /test_file"));
}
} }
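The external-file branch above derives the chunk layout purely from attributes (blob_index, chunk_size, file size, starting compressed offset) instead of reading file data. A small sketch of the layout rule it applies, with the last chunk taking the remainder; this is an illustration only, not the builder's actual types:

// Sketch of the layout rule: fixed-size chunks, last chunk takes the remainder.
fn external_chunk_layout(file_size: u64, chunk_size: u64) -> Vec<(u64, u64)> {
    // (file_offset, size) pairs; offsets advance by the size just emitted.
    let count = (file_size + chunk_size - 1) / chunk_size;
    let mut chunks = Vec::with_capacity(count as usize);
    let mut offset = 0u64;
    for i in 0..count {
        let size = if i == count - 1 {
            file_size - chunk_size * i
        } else {
            chunk_size
        };
        chunks.push((offset, size));
        offset += size;
    }
    chunks
}

fn main() {
    // A 10 MiB file with 4 MiB chunks splits into 4 MiB, 4 MiB, 2 MiB.
    let chunks = external_chunk_layout(10 << 20, 4 << 20);
    assert_eq!(chunks.len(), 3);
    assert_eq!(chunks[2], (8 << 20, 2 << 20));
    println!("{:?}", chunks);
}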

View File

@ -71,6 +71,16 @@ pub enum WhiteoutSpec {
None, None,
} }
impl fmt::Display for WhiteoutSpec {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
WhiteoutSpec::Oci => write!(f, "oci"),
WhiteoutSpec::Overlayfs => write!(f, "overlayfs"),
WhiteoutSpec::None => write!(f, "none"),
}
}
}
impl Default for WhiteoutSpec { impl Default for WhiteoutSpec {
fn default() -> Self { fn default() -> Self {
Self::Oci Self::Oci

View File

@ -174,6 +174,30 @@ impl Tree {
Some(tree) Some(tree)
} }
/// Get the mutable tree node corresponding to the path.
pub fn get_node_mut(&mut self, path: &Path) -> Option<&mut Tree> {
let target_vec = Node::generate_target_vec(path);
assert!(!target_vec.is_empty());
let mut tree = self;
let last_idx = target_vec.len() - 1;
for name in &target_vec[1..last_idx] {
match tree.get_child_idx(name.as_bytes()) {
Some(idx) => tree = &mut tree.children[idx],
None => return None,
}
}
if let Some(last_name) = target_vec.last() {
match tree.get_child_idx(last_name.as_bytes()) {
Some(idx) => Some(&mut tree.children[idx]),
None => None,
}
} else {
Some(tree)
}
}
/// Merge the upper layer tree into the lower layer tree, applying whiteout rules. /// Merge the upper layer tree into the lower layer tree, applying whiteout rules.
pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> { pub fn merge_overaly(&mut self, ctx: &BuildContext, upper: Tree) -> Result<()> {
assert_eq!(self.name, "/".as_bytes()); assert_eq!(self.name, "/".as_bytes());
@ -399,6 +423,7 @@ mod tests {
tmpfile.as_path().to_path_buf(), tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
) )
@ -415,6 +440,7 @@ mod tests {
tmpfile.as_path().to_path_buf(), tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
) )
@ -434,6 +460,7 @@ mod tests {
tmpfile.as_path().to_path_buf(), tmpfile.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
) )
@ -447,6 +474,7 @@ mod tests {
tmpfile2.as_path().to_path_buf(), tmpfile2.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
) )
@ -461,6 +489,7 @@ mod tests {
tmpfile3.as_path().to_path_buf(), tmpfile3.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
true, true,
false, false,
) )
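get_node_mut mirrors the existing lookup but returns a mutable reference by walking path components. A simplified, runnable sketch of the same descent pattern over a stand-in Tree type (string path components instead of OsString targets):

// Simplified stand-in for the real Tree; the descent matches get_node_mut above.
struct Tree {
    name: String,
    children: Vec<Tree>,
}

impl Tree {
    fn get_child_idx(&self, name: &str) -> Option<usize> {
        self.children.iter().position(|c| c.name == name)
    }

    fn get_node_mut(&mut self, path: &str) -> Option<&mut Tree> {
        let mut tree = self;
        for name in path.split('/').filter(|c| !c.is_empty()) {
            match tree.get_child_idx(name) {
                Some(idx) => tree = &mut tree.children[idx],
                None => return None,
            }
        }
        Some(tree)
    }
}

fn main() {
    let mut root = Tree {
        name: "/".into(),
        children: vec![Tree {
            name: "a".into(),
            children: vec![Tree { name: "b".into(), children: Vec::new() }],
        }],
    };
    root.get_node_mut("a/b").unwrap().name = "b2".into();
    assert!(root.get_node_mut("a/b").is_none());
    assert!(root.get_node_mut("a/b2").is_some());
}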

View File

@ -21,7 +21,7 @@ use nydus_rafs::metadata::layout::v6::{
}; };
use nydus_rafs::metadata::RafsStore; use nydus_rafs::metadata::RafsStore;
use nydus_rafs::RafsIoWrite; use nydus_rafs::RafsIoWrite;
use nydus_storage::device::BlobFeatures; use nydus_storage::device::{BlobFeatures, BlobInfo};
use nydus_utils::{root_tracer, round_down, round_up, timing_tracer}; use nydus_utils::{root_tracer, round_down, round_up, timing_tracer};
use super::chunk_dict::DigestWithBlobIndex; use super::chunk_dict::DigestWithBlobIndex;
@ -41,6 +41,7 @@ impl Node {
orig_meta_addr: u64, orig_meta_addr: u64,
meta_addr: u64, meta_addr: u64,
chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>, chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>,
blobs: &[Arc<BlobInfo>],
) -> Result<()> { ) -> Result<()> {
let xattr_inline_count = self.info.xattrs.count_v6(); let xattr_inline_count = self.info.xattrs.count_v6();
ensure!( ensure!(
@ -70,7 +71,7 @@ impl Node {
if self.is_dir() { if self.is_dir() {
self.v6_dump_dir(ctx, f_bootstrap, meta_addr, meta_offset, &mut inode)?; self.v6_dump_dir(ctx, f_bootstrap, meta_addr, meta_offset, &mut inode)?;
} else if self.is_reg() { } else if self.is_reg() {
self.v6_dump_file(ctx, f_bootstrap, chunk_cache, &mut inode)?; self.v6_dump_file(ctx, f_bootstrap, chunk_cache, &mut inode, &blobs)?;
} else if self.is_symlink() { } else if self.is_symlink() {
self.v6_dump_symlink(ctx, f_bootstrap, &mut inode)?; self.v6_dump_symlink(ctx, f_bootstrap, &mut inode)?;
} else { } else {
@ -177,10 +178,9 @@ impl Node {
} else { } else {
// Avoid sorting again if "." and ".." are at the head after sorting due to that // Avoid sorting again if "." and ".." are at the head after sorting due to that
// `tree.children` has already been sorted. // `tree.children` has already been sorted.
d_size = (".".as_bytes().len() d_size =
+ size_of::<RafsV6Dirent>() (".".len() + size_of::<RafsV6Dirent>() + "..".len() + size_of::<RafsV6Dirent>())
+ "..".as_bytes().len() as u64;
+ size_of::<RafsV6Dirent>()) as u64;
for child in tree.children.iter() { for child in tree.children.iter() {
let len = child.name().len() + size_of::<RafsV6Dirent>(); let len = child.name().len() + size_of::<RafsV6Dirent>();
// erofs disk format requires dirent to be aligned to block size. // erofs disk format requires dirent to be aligned to block size.
@ -453,6 +453,7 @@ impl Node {
f_bootstrap: &mut dyn RafsIoWrite, f_bootstrap: &mut dyn RafsIoWrite,
chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>, chunk_cache: &mut BTreeMap<DigestWithBlobIndex, Arc<ChunkWrapper>>,
inode: &mut Box<dyn RafsV6OndiskInode>, inode: &mut Box<dyn RafsV6OndiskInode>,
blobs: &[Arc<BlobInfo>],
) -> Result<()> { ) -> Result<()> {
let mut is_continuous = true; let mut is_continuous = true;
let mut prev = None; let mut prev = None;
@ -474,8 +475,15 @@ impl Node {
v6_chunk.set_block_addr(blk_addr); v6_chunk.set_block_addr(blk_addr);
chunks.extend(v6_chunk.as_ref()); chunks.extend(v6_chunk.as_ref());
let external =
blobs[chunk.inner.blob_index() as usize].has_feature(BlobFeatures::EXTERNAL);
let chunk_index = if external {
Some(chunk.inner.index())
} else {
None
};
chunk_cache.insert( chunk_cache.insert(
DigestWithBlobIndex(*chunk.inner.id(), chunk.inner.blob_index() + 1), DigestWithBlobIndex(*chunk.inner.id(), chunk.inner.blob_index() + 1, chunk_index),
chunk.inner.clone(), chunk.inner.clone(),
); );
if let Some((prev_idx, prev_pos)) = prev { if let Some((prev_idx, prev_pos)) = prev {
@ -710,6 +718,7 @@ impl Bootstrap {
orig_meta_addr, orig_meta_addr,
meta_addr, meta_addr,
&mut chunk_cache, &mut chunk_cache,
&blobs,
) )
}) })
}, },
@ -911,6 +920,7 @@ mod tests {
pa_aa.as_path().to_path_buf(), pa_aa.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
false, false,
false, false,
) )
@ -938,6 +948,7 @@ mod tests {
pa.as_path().to_path_buf(), pa.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
false, false,
false, false,
) )
@ -1034,6 +1045,7 @@ mod tests {
pa_reg.as_path().to_path_buf(), pa_reg.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
false, false,
false, false,
) )
@ -1047,6 +1059,7 @@ mod tests {
pa_pyc.as_path().to_path_buf(), pa_pyc.as_path().to_path_buf(),
Overlay::UpperAddition, Overlay::UpperAddition,
RAFS_DEFAULT_CHUNK_SIZE as u32, RAFS_DEFAULT_CHUNK_SIZE as u32,
0,
false, false,
false, false,
) )
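The chunk cache key gains an optional chunk index for chunks that live in an EXTERNAL blob, presumably because such chunks skip per-chunk digest computation, so the digest plus blob index alone would no longer keep entries distinct. A toy illustration of why the extra key component matters (digest shortened to 4 bytes, semantics assumed):

use std::collections::BTreeMap;

fn main() {
    // Key layout: (digest, blob_index + 1, Option<chunk_index>); the digest is
    // shortened here purely for illustration.
    let mut chunk_cache: BTreeMap<([u8; 4], u32, Option<u32>), &str> = BTreeMap::new();
    let digest = [0u8; 4]; // external chunks can share a placeholder digest
    chunk_cache.insert((digest, 1, Some(0)), "external chunk 0");
    chunk_cache.insert((digest, 1, Some(1)), "external chunk 1");
    // Without the trailing chunk index the two inserts would collapse into one entry.
    assert_eq!(chunk_cache.len(), 2);
}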

View File

@ -5,14 +5,15 @@
use std::fs; use std::fs;
use std::fs::DirEntry; use std::fs::DirEntry;
use anyhow::{Context, Result}; use anyhow::{anyhow, Context, Result};
use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer}; use nydus_utils::{event_tracer, lazy_drop, root_tracer, timing_tracer};
use crate::core::context::{Artifact, NoopArtifactWriter}; use crate::core::context::{Artifact, NoopArtifactWriter};
use crate::core::prefetch;
use super::core::blob::Blob; use super::core::blob::Blob;
use super::core::context::{ use super::core::context::{
ArtifactWriter, BlobManager, BootstrapContext, BootstrapManager, BuildContext, BuildOutput, ArtifactWriter, BlobManager, BootstrapManager, BuildContext, BuildOutput,
}; };
use super::core::node::Node; use super::core::node::Node;
use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode}; use super::{build_bootstrap, dump_bootstrap, finalize_blob, Builder, Overlay, Tree, TreeNode};
@ -29,14 +30,14 @@ impl FilesystemTreeBuilder {
fn load_children( fn load_children(
&self, &self,
ctx: &mut BuildContext, ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
parent: &TreeNode, parent: &TreeNode,
layer_idx: u16, layer_idx: u16,
) -> Result<Vec<Tree>> { ) -> Result<(Vec<Tree>, Vec<Tree>)> {
let mut result = Vec::new(); let mut trees = Vec::new();
let mut external_trees = Vec::new();
let parent = parent.borrow(); let parent = parent.borrow();
if !parent.is_dir() { if !parent.is_dir() {
return Ok(result); return Ok((trees.clone(), external_trees));
} }
let children = fs::read_dir(parent.path()) let children = fs::read_dir(parent.path())
@ -46,12 +47,26 @@ impl FilesystemTreeBuilder {
event_tracer!("load_from_directory", +children.len()); event_tracer!("load_from_directory", +children.len());
for child in children { for child in children {
let path = child.path(); let path = child.path();
let target = Node::generate_target(&path, &ctx.source_path);
let mut file_size: u64 = 0;
if ctx.attributes.is_external(&target) {
if let Some(value) = ctx.attributes.get_value(&target, "file_size") {
file_size = value.parse::<u64>().ok().ok_or_else(|| {
anyhow!(
"failed to parse file_size for external file {}",
&target.display()
)
})?;
}
}
let mut child = Node::from_fs_object( let mut child = Node::from_fs_object(
ctx.fs_version, ctx.fs_version,
ctx.source_path.clone(), ctx.source_path.clone(),
path.clone(), path.clone(),
Overlay::UpperAddition, Overlay::UpperAddition,
ctx.chunk_size, ctx.chunk_size,
file_size,
parent.info.explicit_uidgid, parent.info.explicit_uidgid,
true, true,
) )
@ -60,24 +75,41 @@ impl FilesystemTreeBuilder {
// as per OCI spec, whiteout file should not be present within final image // as per OCI spec, whiteout file should not be present within final image
// or filesystem, only existed in layers. // or filesystem, only existed in layers.
if !bootstrap_ctx.layered if layer_idx == 0
&& child.whiteout_type(ctx.whiteout_spec).is_some() && child.whiteout_type(ctx.whiteout_spec).is_some()
&& !child.is_overlayfs_opaque(ctx.whiteout_spec) && !child.is_overlayfs_opaque(ctx.whiteout_spec)
{ {
continue; continue;
} }
let mut child = Tree::new(child); let (mut child, mut external_child) = (Tree::new(child.clone()), Tree::new(child));
child.children = self.load_children(ctx, bootstrap_ctx, &child.node, layer_idx)?; let (child_children, external_children) =
self.load_children(ctx, &child.node, layer_idx)?;
child.children = child_children;
external_child.children = external_children;
child child
.borrow_mut_node() .borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &child.children); .v5_set_dir_size(ctx.fs_version, &child.children);
result.push(child); external_child
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_child.children);
if ctx.attributes.is_external(&target) {
external_trees.push(external_child);
} else {
// TODO: need to implement type=ignore for nydus attributes;
// ignore the tree as a workaround for now.
trees.push(child.clone());
if ctx.attributes.is_prefix_external(target) {
external_trees.push(external_child);
}
};
} }
result.sort_unstable_by(|a, b| a.name().cmp(b.name())); trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
external_trees.sort_unstable_by(|a, b| a.name().cmp(b.name()));
Ok(result) Ok((trees, external_trees))
} }
} }
@ -90,57 +122,46 @@ impl DirectoryBuilder {
} }
/// Build node tree from a filesystem directory /// Build node tree from a filesystem directory
fn build_tree( fn build_tree(&mut self, ctx: &mut BuildContext, layer_idx: u16) -> Result<(Tree, Tree)> {
&mut self,
ctx: &mut BuildContext,
bootstrap_ctx: &mut BootstrapContext,
layer_idx: u16,
) -> Result<Tree> {
let node = Node::from_fs_object( let node = Node::from_fs_object(
ctx.fs_version, ctx.fs_version,
ctx.source_path.clone(), ctx.source_path.clone(),
ctx.source_path.clone(), ctx.source_path.clone(),
Overlay::UpperAddition, Overlay::UpperAddition,
ctx.chunk_size, ctx.chunk_size,
0,
ctx.explicit_uidgid, ctx.explicit_uidgid,
true, true,
)?; )?;
let mut tree = Tree::new(node); let mut tree = Tree::new(node.clone());
let mut external_tree = Tree::new(node);
let tree_builder = FilesystemTreeBuilder::new(); let tree_builder = FilesystemTreeBuilder::new();
tree.children = timing_tracer!( let (tree_children, external_tree_children) = timing_tracer!(
{ tree_builder.load_children(ctx, bootstrap_ctx, &tree.node, layer_idx) }, { tree_builder.load_children(ctx, &tree.node, layer_idx) },
"load_from_directory" "load_from_directory"
)?; )?;
tree.children = tree_children;
external_tree.children = external_tree_children;
tree.borrow_mut_node() tree.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &tree.children); .v5_set_dir_size(ctx.fs_version, &tree.children);
external_tree
.borrow_mut_node()
.v5_set_dir_size(ctx.fs_version, &external_tree.children);
Ok(tree) Ok((tree, external_tree))
} }
}
impl Builder for DirectoryBuilder { fn one_build(
fn build(
&mut self, &mut self,
ctx: &mut BuildContext, ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager, bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager, blob_mgr: &mut BlobManager,
blob_writer: &mut Box<dyn Artifact>,
tree: Tree,
) -> Result<BuildOutput> { ) -> Result<BuildOutput> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let layer_idx = u16::from(bootstrap_ctx.layered);
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
// Scan source directory to build upper layer tree.
let tree = timing_tracer!(
{ self.build_tree(ctx, &mut bootstrap_ctx, layer_idx) },
"build_tree"
)?;
// Build bootstrap // Build bootstrap
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = timing_tracer!( let mut bootstrap = timing_tracer!(
{ build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) }, { build_bootstrap(ctx, bootstrap_mgr, &mut bootstrap_ctx, blob_mgr, tree) },
"build_bootstrap" "build_bootstrap"
@ -192,6 +213,55 @@ impl Builder for DirectoryBuilder {
lazy_drop(bootstrap_ctx); lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage) BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
}
impl Builder for DirectoryBuilder {
fn build(
&mut self,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_mgr: &mut BlobManager,
) -> Result<BuildOutput> {
let layer_idx = u16::from(bootstrap_mgr.f_parent_path.is_some());
// Scan source directory to build upper layer tree.
let (tree, external_tree) =
timing_tracer!({ self.build_tree(ctx, layer_idx) }, "build_tree")?;
// Build for tree
let mut blob_writer: Box<dyn Artifact> = if let Some(blob_stor) = ctx.blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let mut output = self.one_build(ctx, bootstrap_mgr, blob_mgr, &mut blob_writer, tree)?;
// Build for external tree
ctx.prefetch = prefetch::Prefetch::new(prefetch::PrefetchPolicy::None)?;
let mut external_blob_mgr = BlobManager::new(ctx.digester, true);
let mut external_bootstrap_mgr = bootstrap_mgr.clone();
if let Some(stor) = external_bootstrap_mgr.bootstrap_storage.as_mut() {
stor.add_suffix("external")
}
let mut external_blob_writer: Box<dyn Artifact> =
if let Some(blob_stor) = ctx.external_blob_storage.clone() {
Box::new(ArtifactWriter::new(blob_stor)?)
} else {
Box::<NoopArtifactWriter>::default()
};
let external_output = self.one_build(
ctx,
&mut external_bootstrap_mgr,
&mut external_blob_mgr,
&mut external_blob_writer,
external_tree,
)?;
output.external_bootstrap_path = external_output.bootstrap_path;
output.external_blobs = external_output.blobs;
Ok(output)
} }
} }
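load_children now produces two trees per directory scan: the regular tree, and an external tree driven by nydus attributes, with directories on the path to an external file mirrored into the external tree so it stays connected. A rough sketch of that partitioning rule, reducing the attribute lookup to a plain set; the helper names here are illustrative only:

use std::collections::HashSet;

// `external` stands in for ctx.attributes lookups; names are illustrative only.
fn split(entries: &[&str], external: &HashSet<&str>) -> (Vec<String>, Vec<String>) {
    let mut trees = Vec::new();
    let mut external_trees = Vec::new();
    for e in entries {
        if external.contains(*e) {
            // Marked external: only the external tree gets it.
            external_trees.push(e.to_string());
        } else {
            trees.push(e.to_string());
            // Keep ancestors of external files in the external tree as well,
            // so the path down to them stays connected.
            if external.iter().any(|x| x.starts_with(&format!("{}/", e))) {
                external_trees.push(e.to_string());
            }
        }
    }
    (trees, external_trees)
}

fn main() {
    let external = HashSet::from(["/models/weights.bin"]);
    let (trees, ext) = split(&["/bin", "/models", "/models/weights.bin"], &external);
    assert_eq!(trees, vec!["/bin", "/models"]);
    assert_eq!(ext, vec!["/models", "/models/weights.bin"]);
}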

View File

@ -27,6 +27,7 @@ pub use self::chunkdict_generator::ChunkdictBlobInfo;
pub use self::chunkdict_generator::ChunkdictChunkInfo; pub use self::chunkdict_generator::ChunkdictChunkInfo;
pub use self::chunkdict_generator::Generator; pub use self::chunkdict_generator::Generator;
pub use self::compact::BlobCompactor; pub use self::compact::BlobCompactor;
pub use self::compact::Config as CompactConfig;
pub use self::core::bootstrap::Bootstrap; pub use self::core::bootstrap::Bootstrap;
pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict}; pub use self::core::chunk_dict::{parse_chunk_dict_arg, ChunkDict, HashChunkDict};
pub use self::core::context::{ pub use self::core::context::{
@ -40,14 +41,18 @@ pub use self::core::prefetch::{Prefetch, PrefetchPolicy};
pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode}; pub use self::core::tree::{MetadataTreeBuilder, Tree, TreeNode};
pub use self::directory::DirectoryBuilder; pub use self::directory::DirectoryBuilder;
pub use self::merge::Merger; pub use self::merge::Merger;
pub use self::optimize_prefetch::update_ctx_from_bootstrap;
pub use self::optimize_prefetch::OptimizePrefetch;
pub use self::stargz::StargzBuilder; pub use self::stargz::StargzBuilder;
pub use self::tarball::TarballBuilder; pub use self::tarball::TarballBuilder;
pub mod attributes;
mod chunkdict_generator; mod chunkdict_generator;
mod compact; mod compact;
mod core; mod core;
mod directory; mod directory;
mod merge; mod merge;
mod optimize_prefetch;
mod stargz; mod stargz;
mod tarball; mod tarball;
@ -116,9 +121,14 @@ fn dump_bootstrap(
if ctx.blob_inline_meta { if ctx.blob_inline_meta {
assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs); assert_ne!(ctx.conversion_type, ConversionType::TarToTarfs);
// Ensure the blob object is created in case of no chunks generated for the blob. // Ensure the blob object is created in case of no chunks generated for the blob.
let (_, blob_ctx) = blob_mgr let blob_ctx = if blob_mgr.external {
.get_or_create_current_blob(ctx) &mut blob_mgr.new_blob_ctx(ctx)?
.map_err(|_e| anyhow!("failed to get current blob object"))?; } else {
let (_, blob_ctx) = blob_mgr
.get_or_create_current_blob(ctx)
.map_err(|_e| anyhow!("failed to get current blob object"))?;
blob_ctx
};
let bootstrap_offset = blob_writer.pos()?; let bootstrap_offset = blob_writer.pos()?;
let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?; let uncompressed_bootstrap = bootstrap_ctx.writer.as_bytes()?;
let uncompressed_size = uncompressed_bootstrap.len(); let uncompressed_size = uncompressed_bootstrap.len();
@ -248,7 +258,6 @@ fn finalize_blob(
blob_cache.finalize(&blob_ctx.blob_id)?; blob_cache.finalize(&blob_ctx.blob_id)?;
} }
} }
Ok(()) Ok(())
} }

View File

@ -129,7 +129,7 @@ impl Merger {
} }
let mut tree: Option<Tree> = None; let mut tree: Option<Tree> = None;
let mut blob_mgr = BlobManager::new(ctx.digester); let mut blob_mgr = BlobManager::new(ctx.digester, false);
let mut blob_idx_map = HashMap::new(); let mut blob_idx_map = HashMap::new();
let mut parent_layers = 0; let mut parent_layers = 0;
@ -304,15 +304,40 @@ impl Merger {
ctx.chunk_size = chunk_size; ctx.chunk_size = chunk_size;
} }
// After merging all trees, we need to re-calculate the blob index of
// referenced blobs, as the upper tree might have deleted some files
// or directories by opaques, and some blobs are dereferenced.
let mut used_blobs = HashMap::new(); // HashMap<blob_id, new_blob_index>
let mut used_blob_mgr = BlobManager::new(ctx.digester, false);
let origin_blobs = blob_mgr.get_blobs();
tree.walk_bfs(true, &mut |n| {
let mut node = n.borrow_mut_node();
for chunk in &mut node.chunks {
let origin_blob_index = chunk.inner.blob_index() as usize;
let blob_ctx = origin_blobs[origin_blob_index].clone();
let origin_blob_id = blob_ctx.blob_id();
let new_blob_index = if let Some(new_blob_index) = used_blobs.get(&origin_blob_id) {
*new_blob_index
} else {
let new_blob_index = used_blob_mgr.len();
used_blobs.insert(origin_blob_id, new_blob_index);
used_blob_mgr.add_blob(blob_ctx);
new_blob_index
};
chunk.set_blob_index(new_blob_index as u32);
}
Ok(())
})?;
let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?; let mut bootstrap_ctx = BootstrapContext::new(Some(target.clone()), false)?;
let mut bootstrap = Bootstrap::new(tree)?; let mut bootstrap = Bootstrap::new(tree)?;
bootstrap.build(ctx, &mut bootstrap_ctx)?; bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table = blob_mgr.to_blob_table(ctx)?; let blob_table = used_blob_mgr.to_blob_table(ctx)?;
let mut bootstrap_storage = Some(target.clone()); let mut bootstrap_storage = Some(target.clone());
bootstrap bootstrap
.dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table) .dump(ctx, &mut bootstrap_storage, &mut bootstrap_ctx, &blob_table)
.context(format!("dump bootstrap to {:?}", target.display()))?; .context(format!("dump bootstrap to {:?}", target.display()))?;
BuildOutput::new(&blob_mgr, &bootstrap_storage) BuildOutput::new(&used_blob_mgr, None, &bootstrap_storage, &None)
} }
} }
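The merger now walks every chunk after overlay merging and rebuilds a dense blob table containing only blobs that are still referenced. A compact sketch of that re-indexing step, with plain strings standing in for BlobContext:

use std::collections::HashMap;

// Chunks carry a blob index into `blob_ids`; plain strings stand in for BlobContext.
fn reindex(chunk_blob_indexes: &mut [u32], blob_ids: &[&str]) -> Vec<String> {
    let mut seen: HashMap<String, u32> = HashMap::new();
    let mut used: Vec<String> = Vec::new();
    for idx in chunk_blob_indexes.iter_mut() {
        let id = blob_ids[*idx as usize].to_string();
        let new_idx = *seen.entry(id.clone()).or_insert_with(|| {
            used.push(id);
            (used.len() - 1) as u32
        });
        *idx = new_idx;
    }
    used
}

fn main() {
    // Blob 1 lost all references (e.g. its files were removed by whiteouts).
    let mut chunks = vec![0, 2, 2, 0];
    let used = reindex(&mut chunks, &["blob-a", "blob-b", "blob-c"]);
    assert_eq!(used, vec!["blob-a", "blob-c"]);
    assert_eq!(chunks, vec![0, 1, 1, 0]);
}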

View File

@ -0,0 +1,302 @@
use crate::anyhow;
use crate::core::blob::Blob;
use crate::finalize_blob;
use crate::Artifact;
use crate::ArtifactWriter;
use crate::BlobContext;
use crate::BlobManager;
use crate::Bootstrap;
use crate::BootstrapManager;
use crate::BuildContext;
use crate::BuildOutput;
use crate::ChunkSource;
use crate::ConversionType;
use crate::NodeChunk;
use crate::Path;
use crate::PathBuf;
use crate::Tree;
use crate::TreeNode;
use anyhow::Context;
use anyhow::{Ok, Result};
use nydus_api::ConfigV2;
use nydus_rafs::metadata::layout::RafsBlobTable;
use nydus_rafs::metadata::RafsSuper;
use nydus_rafs::metadata::RafsVersion;
use nydus_storage::device::BlobInfo;
use nydus_storage::meta::BatchContextGenerator;
use nydus_storage::meta::BlobChunkInfoV1Ondisk;
use nydus_utils::compress;
use sha2::Digest;
use std::fs::File;
use std::io::{Read, Seek, Write};
use std::mem::size_of;
use std::sync::Arc;
pub struct OptimizePrefetch {}
struct PrefetchBlobState {
blob_info: BlobInfo,
blob_ctx: BlobContext,
blob_writer: Box<dyn Artifact>,
}
impl PrefetchBlobState {
fn new(ctx: &BuildContext, blob_layer_num: u32, blobs_dir_path: &Path) -> Result<Self> {
let mut blob_info = BlobInfo::new(
blob_layer_num,
String::from("prefetch-blob"),
0,
0,
ctx.chunk_size,
u32::MAX,
ctx.blob_features,
);
blob_info.set_compressor(ctx.compressor);
blob_info.set_separated_with_prefetch_files_feature(true);
let mut blob_ctx = BlobContext::from(ctx, &blob_info, ChunkSource::Build)?;
blob_ctx.blob_meta_info_enabled = true;
let blob_writer = ArtifactWriter::new(crate::ArtifactStorage::FileDir((
blobs_dir_path.to_path_buf(),
String::new(),
)))
.map(|writer| Box::new(writer) as Box<dyn Artifact>)?;
Ok(Self {
blob_info,
blob_ctx,
blob_writer,
})
}
}
impl OptimizePrefetch {
/// Generate a new bootstrap for prefetch.
pub fn generate_prefetch(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
blobs_dir_path: PathBuf,
prefetch_nodes: Vec<TreeNode>,
) -> Result<BuildOutput> {
// create a new blob for prefetch layer
let blob_layer_num = match blob_table {
RafsBlobTable::V5(table) => table.get_all().len(),
RafsBlobTable::V6(table) => table.get_all().len(),
};
let mut blob_state = PrefetchBlobState::new(&ctx, blob_layer_num as u32, &blobs_dir_path)?;
let mut batch = BatchContextGenerator::new(0)?;
for node in &prefetch_nodes {
Self::process_prefetch_node(
tree,
&node,
&mut blob_state,
&mut batch,
blob_table,
&blobs_dir_path,
)?;
}
let blob_mgr = Self::dump_blob(ctx, blob_table, &mut blob_state)?;
debug!("prefetch blob id: {}", ctx.blob_id);
Self::build_dump_bootstrap(tree, ctx, bootstrap_mgr, blob_table)?;
BuildOutput::new(&blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
}
fn build_dump_bootstrap(
tree: &mut Tree,
ctx: &mut BuildContext,
bootstrap_mgr: &mut BootstrapManager,
blob_table: &mut RafsBlobTable,
) -> Result<()> {
let mut bootstrap_ctx = bootstrap_mgr.create_ctx()?;
let mut bootstrap = Bootstrap::new(tree.clone())?;
// Build bootstrap
bootstrap.build(ctx, &mut bootstrap_ctx)?;
let blob_table_withprefetch = match blob_table {
RafsBlobTable::V5(table) => RafsBlobTable::V5(table.clone()),
RafsBlobTable::V6(table) => RafsBlobTable::V6(table.clone()),
};
bootstrap.dump(
ctx,
&mut bootstrap_mgr.bootstrap_storage,
&mut bootstrap_ctx,
&blob_table_withprefetch,
)?;
Ok(())
}
fn dump_blob(
ctx: &mut BuildContext,
blob_table: &mut RafsBlobTable,
blob_state: &mut PrefetchBlobState,
) -> Result<BlobManager> {
match blob_table {
RafsBlobTable::V5(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
RafsBlobTable::V6(table) => {
table.entries.push(blob_state.blob_info.clone().into());
}
}
let mut blob_mgr = BlobManager::new(ctx.digester, false);
blob_mgr.add_blob(blob_state.blob_ctx.clone());
blob_mgr.set_current_blob_index(0);
Blob::finalize_blob_data(&ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
if let Some((_, blob_ctx)) = blob_mgr.get_current_blob() {
Blob::dump_meta_data(&ctx, blob_ctx, blob_state.blob_writer.as_mut()).unwrap();
};
ctx.blob_id = String::from("");
blob_mgr.get_current_blob().unwrap().1.blob_id = String::from("");
finalize_blob(ctx, &mut blob_mgr, blob_state.blob_writer.as_mut())?;
ctx.blob_id = blob_mgr
.get_current_blob()
.ok_or(anyhow!("failed to get current blob"))?
.1
.blob_id
.clone();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
// Verify and update prefetch blob
assert!(
entries
.iter()
.filter(|blob| blob.blob_id() == "prefetch-blob")
.count()
== 1,
"Expected exactly one prefetch-blob"
);
// Rewrite prefetch blob id
match blob_table {
RafsBlobTable::V5(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
RafsBlobTable::V6(table) => {
rewrite_blob_id(&mut table.entries, "prefetch-blob", ctx.blob_id.clone())
}
}
Ok(blob_mgr)
}
fn process_prefetch_node(
tree: &mut Tree,
node: &TreeNode,
prefetch_state: &mut PrefetchBlobState,
batch: &mut BatchContextGenerator,
blob_table: &RafsBlobTable,
blobs_dir_path: &Path,
) -> Result<()> {
let tree_node = tree
.get_node_mut(&node.borrow().path())
.ok_or(anyhow!("failed to get node"))?
.node
.as_ref();
let entries = match blob_table {
RafsBlobTable::V5(table) => table.get_all(),
RafsBlobTable::V6(table) => table.get_all(),
};
let blob_id = tree_node
.borrow()
.chunks
.first()
.and_then(|chunk| entries.get(chunk.inner.blob_index() as usize).cloned())
.map(|entry| entry.blob_id())
.ok_or(anyhow!("failed to get blob id"))?;
let mut blob_file = Arc::new(File::open(blobs_dir_path.join(blob_id))?);
tree_node.borrow_mut().layer_idx = prefetch_state.blob_info.blob_index() as u16;
let mut child = tree_node.borrow_mut();
let chunks: &mut Vec<NodeChunk> = child.chunks.as_mut();
let blob_ctx = &mut prefetch_state.blob_ctx;
let blob_info = &mut prefetch_state.blob_info;
let encrypted = blob_ctx.blob_compressor != compress::Algorithm::None;
for chunk in chunks {
let inner = Arc::make_mut(&mut chunk.inner);
let mut buf = vec![0u8; inner.compressed_size() as usize];
blob_file.seek(std::io::SeekFrom::Start(inner.compressed_offset()))?;
blob_file.read_exact(&mut buf)?;
prefetch_state.blob_writer.write_all(&buf)?;
let info = batch.generate_chunk_info(
blob_ctx.current_compressed_offset,
blob_ctx.current_uncompressed_offset,
inner.uncompressed_size(),
encrypted,
)?;
inner.set_blob_index(blob_info.blob_index());
if blob_ctx.chunk_count == u32::MAX {
blob_ctx.chunk_count = 0;
}
inner.set_index(blob_ctx.chunk_count);
blob_ctx.chunk_count += 1;
inner.set_compressed_offset(blob_ctx.current_compressed_offset);
inner.set_uncompressed_offset(blob_ctx.current_uncompressed_offset);
let aligned_d_size: u64 = nydus_utils::try_round_up_4k(inner.uncompressed_size())
.ok_or_else(|| anyhow!("invalid size"))?;
blob_ctx.compressed_blob_size += inner.compressed_size() as u64;
blob_ctx.uncompressed_blob_size += aligned_d_size;
blob_ctx.current_compressed_offset += inner.compressed_size() as u64;
blob_ctx.current_uncompressed_offset += aligned_d_size;
blob_ctx.add_chunk_meta_info(&inner, Some(info))?;
blob_ctx.blob_hash.update(&buf);
blob_info.set_meta_ci_compressed_size(
(blob_info.meta_ci_compressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
blob_info.set_meta_ci_uncompressed_size(
(blob_info.meta_ci_uncompressed_size() + size_of::<BlobChunkInfoV1Ondisk>() as u64)
as usize,
);
}
Ok(())
}
}
fn rewrite_blob_id(entries: &mut [Arc<BlobInfo>], blob_id: &str, new_blob_id: String) {
entries
.iter_mut()
.filter(|blob| blob.blob_id() == blob_id)
.for_each(|blob| {
let mut info = (**blob).clone();
info.set_blob_id(new_blob_id.clone());
*blob = Arc::new(info);
});
}
pub fn update_ctx_from_bootstrap(
ctx: &mut BuildContext,
config: Arc<ConfigV2>,
bootstrap_path: &Path,
) -> Result<RafsSuper> {
let (sb, _) = RafsSuper::load_from_file(bootstrap_path, config, false)?;
ctx.blob_features = sb
.superblock
.get_blob_infos()
.first()
.ok_or_else(|| anyhow!("No blob info found in superblock"))?
.features();
let config = sb.meta.get_config();
if config.is_tarfs_mode {
ctx.conversion_type = ConversionType::TarToRafs;
}
ctx.fs_version =
RafsVersion::try_from(sb.meta.version).context("Failed to get RAFS version")?;
ctx.compressor = config.compressor;
Ok(sb)
}
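process_prefetch_node copies each prefetched chunk out of its original blob file and appends it to the new prefetch blob, rewriting the chunk's blob index and offsets as it goes. A minimal in-memory sketch of that re-homing loop (byte vectors instead of blob files, no chunk-info metadata):

// Blobs are in-memory byte vectors here instead of files on disk.
#[derive(Debug)]
struct Chunk {
    blob_index: usize,
    offset: u64,
    size: u64,
}

fn rehome_chunks(chunks: &mut [Chunk], blobs: &[Vec<u8>], prefetch_blob_index: usize) -> Vec<u8> {
    let mut prefetch_blob = Vec::new();
    for c in chunks.iter_mut() {
        // Copy the chunk bytes from the source blob into the prefetch blob.
        let start = c.offset as usize;
        let data = &blobs[c.blob_index][start..start + c.size as usize];
        let new_offset = prefetch_blob.len() as u64;
        prefetch_blob.extend_from_slice(data);
        // Point the chunk at its new home.
        c.blob_index = prefetch_blob_index;
        c.offset = new_offset;
    }
    prefetch_blob
}

fn main() {
    let blobs = vec![vec![1u8, 2, 3, 4], vec![9u8, 9, 9]];
    let mut chunks = vec![
        Chunk { blob_index: 1, offset: 1, size: 2 },
        Chunk { blob_index: 0, offset: 0, size: 4 },
    ];
    let prefetch = rehome_chunks(&mut chunks, &blobs, 2);
    assert_eq!(prefetch, vec![9, 9, 1, 2, 3, 4]);
    assert_eq!(chunks[1].offset, 2);
    println!("{:?}", chunks);
}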

View File

@ -58,10 +58,10 @@ struct TocEntry {
/// - block: block device /// - block: block device
/// - fifo: fifo /// - fifo: fifo
/// - chunk: a chunk of regular file data As described in the above section, /// - chunk: a chunk of regular file data As described in the above section,
/// a regular file can be divided into several chunks. TOCEntry MUST be created for each chunk. /// a regular file can be divided into several chunks. TOCEntry MUST be created for each chunk.
/// TOCEntry of the first chunk of that file MUST be typed as reg. TOCEntry of each chunk after /// TOCEntry of the first chunk of that file MUST be typed as reg. TOCEntry of each chunk after
/// 2nd MUST be typed as chunk. chunk TOCEntry MUST set offset, chunkOffset and chunkSize /// 2nd MUST be typed as chunk. chunk TOCEntry MUST set offset, chunkOffset and chunkSize
/// properties. /// properties.
#[serde(rename = "type")] #[serde(rename = "type")]
pub toc_type: String, pub toc_type: String,
@ -456,7 +456,7 @@ impl StargzBuilder {
uncompressed_offset: self.uncompressed_offset, uncompressed_offset: self.uncompressed_offset,
file_offset: entry.chunk_offset as u64, file_offset: entry.chunk_offset as u64,
index: 0, index: 0,
reserved: 0, crc32: 0,
}); });
let chunk = NodeChunk { let chunk = NodeChunk {
source: ChunkSource::Build, source: ChunkSource::Build,
@ -904,14 +904,16 @@ impl Builder for StargzBuilder {
lazy_drop(bootstrap_ctx); lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage) BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
} }
} }
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
use crate::{ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec}; use crate::{
attributes::Attributes, ArtifactStorage, ConversionType, Features, Prefetch, WhiteoutSpec,
};
#[test] #[test]
fn test_build_stargz_toc() { fn test_build_stargz_toc() {
@ -932,16 +934,20 @@ mod tests {
ConversionType::EStargzIndexToRef, ConversionType::EStargzIndexToRef,
source_path, source_path,
prefetch, prefetch,
Some(ArtifactStorage::FileDir(tmp_dir.clone())), Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false, false,
Features::new(), Features::new(),
false, false,
Attributes::default(),
); );
ctx.fs_version = RafsVersion::V6; ctx.fs_version = RafsVersion::V6;
ctx.conversion_type = ConversionType::EStargzToRafs; ctx.conversion_type = ConversionType::EStargzToRafs;
let mut bootstrap_mgr = let mut bootstrap_mgr = BootstrapManager::new(
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir.clone())), None); Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256); None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = StargzBuilder::new(0x1000000, &ctx); let mut builder = StargzBuilder::new(0x1000000, &ctx);
let builder = builder.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr); let builder = builder.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr);

View File

@ -8,11 +8,11 @@
//! //!
//! The tarball data is arrange as a sequence of tar headers with associated file data interleaved. //! The tarball data is arrange as a sequence of tar headers with associated file data interleaved.
//! - (tar header) (tar header) (file data) (tar header) (file data) (tar header) //! - (tar header) (tar header) (file data) (tar header) (file data) (tar header)
//! And to support read tarball data from FIFO, we could only go over the tarball stream once. //! And to support read tarball data from FIFO, we could only go over the tarball stream once.
//! So the workflow is as: //! So the workflow is as:
//! - for each tar header from the stream //! - for each tar header from the stream
//! -- generate RAFS filesystem node from the tar header //! -- generate RAFS filesystem node from the tar header
//! -- optionally dump file data associated with the tar header into RAFS data blob //! -- optionally dump file data associated with the tar header into RAFS data blob
//! - arrange all generated RAFS nodes into a RAFS filesystem tree //! - arrange all generated RAFS nodes into a RAFS filesystem tree
//! - dump the RAFS filesystem tree into RAFS metadata blob //! - dump the RAFS filesystem tree into RAFS metadata blob
use std::ffi::{OsStr, OsString}; use std::ffi::{OsStr, OsString};
@ -659,13 +659,14 @@ impl Builder for TarballBuilder {
lazy_drop(bootstrap_ctx); lazy_drop(bootstrap_ctx);
BuildOutput::new(blob_mgr, &bootstrap_mgr.bootstrap_storage) BuildOutput::new(blob_mgr, None, &bootstrap_mgr.bootstrap_storage, &None)
} }
} }
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
use crate::attributes::Attributes;
use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec}; use crate::{ArtifactStorage, Features, Prefetch, WhiteoutSpec};
use nydus_utils::{compress, digest}; use nydus_utils::{compress, digest};
@ -687,14 +688,18 @@ mod tests {
ConversionType::TarToTarfs, ConversionType::TarToTarfs,
source_path, source_path,
prefetch, prefetch,
Some(ArtifactStorage::FileDir(tmp_dir.clone())), Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false, false,
Features::new(), Features::new(),
false, false,
Attributes::default(),
); );
let mut bootstrap_mgr = let mut bootstrap_mgr = BootstrapManager::new(
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None); Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256); None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs); let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr) .build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)
@ -719,14 +724,18 @@ mod tests {
ConversionType::TarToTarfs, ConversionType::TarToTarfs,
source_path, source_path,
prefetch, prefetch,
Some(ArtifactStorage::FileDir(tmp_dir.clone())), Some(ArtifactStorage::FileDir((tmp_dir.clone(), String::new()))),
None,
false, false,
Features::new(), Features::new(),
true, true,
Attributes::default(),
); );
let mut bootstrap_mgr = let mut bootstrap_mgr = BootstrapManager::new(
BootstrapManager::new(Some(ArtifactStorage::FileDir(tmp_dir)), None); Some(ArtifactStorage::FileDir((tmp_dir, String::new()))),
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256); None,
);
let mut blob_mgr = BlobManager::new(digest::Algorithm::Sha256, false);
let mut builder = TarballBuilder::new(ConversionType::TarToTarfs); let mut builder = TarballBuilder::new(ConversionType::TarToTarfs);
builder builder
.build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr) .build(&mut ctx, &mut bootstrap_mgr, &mut blob_mgr)

View File

@ -16,9 +16,9 @@ crate-type = ["cdylib", "staticlib"]
libc = "0.2.137" libc = "0.2.137"
log = "0.4.17" log = "0.4.17"
fuse-backend-rs = "^0.12.0" fuse-backend-rs = "^0.12.0"
nydus-api = { version = "0.3", path = "../api" } nydus-api = { version = "0.4.0", path = "../api" }
nydus-rafs = { version = "0.3.1", path = "../rafs" } nydus-rafs = { version = "0.4.0", path = "../rafs" }
nydus-storage = { version = "0.6.3", path = "../storage" } nydus-storage = { version = "0.7.0", path = "../storage" }
[features] [features]
baekend-s3 = ["nydus-storage/backend-s3"] baekend-s3 = ["nydus-storage/backend-s3"]

View File

@ -0,0 +1,8 @@
package main
import "fmt"
// This is a dummy program to work around goreleaser not being able to pre-build the binary.
func main() {
fmt.Println("Hello, World!")
}

View File

@ -15,4 +15,7 @@ linters:
- errcheck - errcheck
run: run:
deadline: 4m timeout: 5m
issues:
exclude-dirs:
- misc

View File

@ -2,7 +2,7 @@ GIT_COMMIT := $(shell git rev-parse --verify HEAD --short=7)
BUILD_TIME := $(shell date -u +%Y%m%d.%H%M) BUILD_TIME := $(shell date -u +%Y%m%d.%H%M)
PACKAGES ?= $(shell go list ./... | grep -v /vendor/) PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH) GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= https://goproxy.io GOPROXY ?=
ifdef GOPROXY ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY} PROXY := GOPROXY=${GOPROXY}

View File

@ -8,7 +8,7 @@ import (
"syscall" "syscall"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/urfave/cli/v2" cli "github.com/urfave/cli/v2"
"golang.org/x/sys/unix" "golang.org/x/sys/unix"
) )

View File

@ -15,6 +15,7 @@ linters:
- errcheck - errcheck
run: run:
deadline: 4m timeout: 5m
issues.exclude-dirs: issues:
- misc exclude-dirs:
- misc

View File

@ -1,6 +1,6 @@
PACKAGES ?= $(shell go list ./... | grep -v /vendor/) PACKAGES ?= $(shell go list ./... | grep -v /vendor/)
GOARCH ?= $(shell go env GOARCH) GOARCH ?= $(shell go env GOARCH)
GOPROXY ?= https://goproxy.io GOPROXY ?=
ifdef GOPROXY ifdef GOPROXY
PROXY := GOPROXY=${GOPROXY} PROXY := GOPROXY=${GOPROXY}

View File

@ -16,6 +16,8 @@ import (
"runtime" "runtime"
"strings" "strings"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/optimizer"
"github.com/containerd/containerd/reference/docker" "github.com/containerd/containerd/reference/docker"
"github.com/distribution/reference" "github.com/distribution/reference"
"github.com/dustin/go-humanize" "github.com/dustin/go-humanize"
@ -79,7 +81,7 @@ func getBackendConfig(c *cli.Context, prefix string, required bool) (string, str
return "", "", nil return "", "", nil
} }
possibleBackendTypes := []string{"oss", "s3"} possibleBackendTypes := []string{"oss", "s3", "localfs"}
if !isPossibleValue(possibleBackendTypes, backendType) { if !isPossibleValue(possibleBackendTypes, backendType) {
return "", "", fmt.Errorf("--%sbackend-type should be one of %v", prefix, possibleBackendTypes) return "", "", fmt.Errorf("--%sbackend-type should be one of %v", prefix, possibleBackendTypes)
} }
@ -89,7 +91,7 @@ func getBackendConfig(c *cli.Context, prefix string, required bool) (string, str
) )
if err != nil { if err != nil {
return "", "", err return "", "", err
} else if (backendType == "oss" || backendType == "s3") && strings.TrimSpace(backendConfig) == "" { } else if (backendType == "oss" || backendType == "s3" || backendType == "localfs") && strings.TrimSpace(backendConfig) == "" {
return "", "", errors.Errorf("backend configuration is empty, please specify option '--%sbackend-config'", prefix) return "", "", errors.Errorf("backend configuration is empty, please specify option '--%sbackend-config'", prefix)
} }
@ -191,22 +193,7 @@ func main() {
} }
// global options // global options
app.Flags = []cli.Flag{ app.Flags = getGlobalFlags()
&cli.BoolFlag{
Name: "debug",
Aliases: []string{"D"},
Required: false,
Value: false,
Usage: "Enable debug log level, overwrites the 'log-level' option",
EnvVars: []string{"DEBUG_LOG_LEVEL"}},
&cli.StringFlag{
Name: "log-level",
Aliases: []string{"l"},
Value: "info",
Usage: "Set log level (panic, fatal, error, warn, info, debug, trace)",
EnvVars: []string{"LOG_LEVEL"},
},
}
app.Commands = []*cli.Command{ app.Commands = []*cli.Command{
{ {
@ -225,6 +212,18 @@ func main() {
Usage: "Target (Nydus) image reference", Usage: "Target (Nydus) image reference",
EnvVars: []string{"TARGET"}, EnvVars: []string{"TARGET"},
}, },
&cli.StringFlag{
Name: "source-backend-type",
Value: "",
Usage: "Type of storage backend, possible values: 'oss', 's3'",
EnvVars: []string{"BACKEND_TYPE"},
},
&cli.StringFlag{
Name: "source-backend-config",
Value: "",
Usage: "Json configuration string for storage backend",
EnvVars: []string{"BACKEND_CONFIG"},
},
&cli.StringFlag{ &cli.StringFlag{
Name: "target-suffix", Name: "target-suffix",
Required: false, Required: false,
@ -399,7 +398,7 @@ func main() {
&cli.StringFlag{ &cli.StringFlag{
Name: "fs-chunk-size", Name: "fs-chunk-size",
Value: "0x100000", Value: "0x100000",
Usage: "size of nydus image data chunk, must be power of two and between 0x1000-0x100000, [default: 0x100000]", Usage: "size of nydus image data chunk, must be power of two and between 0x1000-0x10000000, [default: 0x4000000]",
EnvVars: []string{"FS_CHUNK_SIZE"}, EnvVars: []string{"FS_CHUNK_SIZE"},
Aliases: []string{"chunk-size"}, Aliases: []string{"chunk-size"},
}, },
@ -427,6 +426,24 @@ func main() {
Usage: "File path to save the metrics collected during conversion in JSON format, for example: './output.json'", Usage: "File path to save the metrics collected during conversion in JSON format, for example: './output.json'",
EnvVars: []string{"OUTPUT_JSON"}, EnvVars: []string{"OUTPUT_JSON"},
}, },
&cli.BoolFlag{
Name: "plain-http",
Value: false,
Usage: "Enable plain http for Nydus image push",
EnvVars: []string{"PLAIN_HTTP"},
},
&cli.IntFlag{
Name: "push-retry-count",
Value: 3,
Usage: "Number of retries when pushing to registry fails",
EnvVars: []string{"PUSH_RETRY_COUNT"},
},
&cli.StringFlag{
Name: "push-retry-delay",
Value: "5s",
Usage: "Delay between push retries (e.g. 5s, 1m, 1h)",
EnvVars: []string{"PUSH_RETRY_DELAY"},
},
}, },
Action: func(c *cli.Context) error { Action: func(c *cli.Context) error {
setupLogLevel(c) setupLogLevel(c)
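The new `--push-retry-count` and `--push-retry-delay` options retry a failed push a bounded number of times with a fixed delay between attempts. A minimal sketch of how such a retry loop could look, assuming a hypothetical push callback and already-parsed flag values; the real converter simply passes these values through `converter.Opt`.

```go
// Hypothetical retry helper illustrating how push-retry-count and
// push-retry-delay could be applied around a failing push operation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// pushWithRetry retries push up to retryCount additional times, sleeping
// retryDelay between attempts, and returns the last error on failure.
func pushWithRetry(push func() error, retryCount int, retryDelay time.Duration) error {
	var lastErr error
	for attempt := 0; attempt <= retryCount; attempt++ {
		if lastErr = push(); lastErr == nil {
			return nil
		}
		if attempt < retryCount {
			fmt.Printf("push failed (%v), retrying in %s\n", lastErr, retryDelay)
			time.Sleep(retryDelay)
		}
	}
	return lastErr
}

func main() {
	delay, _ := time.ParseDuration("5s") // matches the "5s" style of --push-retry-delay
	err := pushWithRetry(func() error {
		return errors.New("registry temporarily unavailable") // stand-in for a real push
	}, 3, delay)
	fmt.Println("final result:", err)
}
```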
@ -492,10 +509,12 @@ func main() {
WorkDir: c.String("work-dir"), WorkDir: c.String("work-dir"),
NydusImagePath: c.String("nydus-image"), NydusImagePath: c.String("nydus-image"),
Source: c.String("source"), SourceBackendType: c.String("source-backend-type"),
Target: targetRef, SourceBackendConfig: c.String("source-backend-config"),
SourceInsecure: c.Bool("source-insecure"), Source: c.String("source"),
TargetInsecure: c.Bool("target-insecure"), Target: targetRef,
SourceInsecure: c.Bool("source-insecure"),
TargetInsecure: c.Bool("target-insecure"),
BackendType: backendType, BackendType: backendType,
BackendConfig: backendConfig, BackendConfig: backendConfig,
@ -523,7 +542,10 @@ func main() {
AllPlatforms: c.Bool("all-platforms"), AllPlatforms: c.Bool("all-platforms"),
Platforms: c.String("platform"), Platforms: c.String("platform"),
OutputJSON: c.String("output-json"), OutputJSON: c.String("output-json"),
WithPlainHTTP: c.Bool("plain-http"),
PushRetryCount: c.Int("push-retry-count"),
PushRetryDelay: c.String("push-retry-delay"),
} }
return converter.Convert(context.Background(), opt) return converter.Convert(context.Background(), opt)
@ -559,19 +581,39 @@ func main() {
}, },
&cli.StringFlag{ &cli.StringFlag{
Name: "backend-type", Name: "source-backend-type",
Value: "", Value: "",
Usage: "Type of storage backend, enable verification of file data in Nydus image if specified, possible values: 'oss', 's3'", Usage: "Type of storage backend, possible values: 'oss', 's3'",
EnvVars: []string{"BACKEND_TYPE"}, EnvVars: []string{"BACKEND_TYPE"},
}, },
&cli.StringFlag{ &cli.StringFlag{
Name: "backend-config", Name: "source-backend-config",
Value: "", Value: "",
Usage: "Json string for storage backend configuration", Usage: "Json configuration string for storage backend",
EnvVars: []string{"BACKEND_CONFIG"}, EnvVars: []string{"BACKEND_CONFIG"},
}, },
&cli.PathFlag{ &cli.PathFlag{
Name: "backend-config-file", Name: "source-backend-config-file",
Value: "",
TakesFile: true,
Usage: "Json configuration file for storage backend",
EnvVars: []string{"BACKEND_CONFIG_FILE"},
},
&cli.StringFlag{
Name: "target-backend-type",
Value: "",
Usage: "Type of storage backend, possible values: 'oss', 's3'",
EnvVars: []string{"BACKEND_TYPE"},
},
&cli.StringFlag{
Name: "target-backend-config",
Value: "",
Usage: "Json configuration string for storage backend",
EnvVars: []string{"BACKEND_CONFIG"},
},
&cli.PathFlag{
Name: "target-backend-config-file",
Value: "", Value: "",
TakesFile: true, TakesFile: true,
Usage: "Json configuration file for storage backend", Usage: "Json configuration file for storage backend",
@ -612,7 +654,12 @@ func main() {
Action: func(c *cli.Context) error { Action: func(c *cli.Context) error {
setupLogLevel(c) setupLogLevel(c)
backendType, backendConfig, err := getBackendConfig(c, "", false) sourceBackendType, sourceBackendConfig, err := getBackendConfig(c, "source-", false)
if err != nil {
return err
}
targetBackendType, targetBackendConfig, err := getBackendConfig(c, "target-", false)
if err != nil { if err != nil {
return err return err
} }
@ -623,16 +670,20 @@ func main() {
} }
checker, err := checker.New(checker.Opt{ checker, err := checker.New(checker.Opt{
WorkDir: c.String("work-dir"), WorkDir: c.String("work-dir"),
Source: c.String("source"),
Target: c.String("target"), Source: c.String("source"),
Target: c.String("target"),
SourceInsecure: c.Bool("source-insecure"),
TargetInsecure: c.Bool("target-insecure"),
SourceBackendType: sourceBackendType,
SourceBackendConfig: sourceBackendConfig,
TargetBackendType: targetBackendType,
TargetBackendConfig: targetBackendConfig,
MultiPlatform: c.Bool("multi-platform"), MultiPlatform: c.Bool("multi-platform"),
SourceInsecure: c.Bool("source-insecure"),
TargetInsecure: c.Bool("target-insecure"),
NydusImagePath: c.String("nydus-image"), NydusImagePath: c.String("nydus-image"),
NydusdPath: c.String("nydusd"), NydusdPath: c.String("nydusd"),
BackendType: backendType,
BackendConfig: backendConfig,
ExpectedArch: arch, ExpectedArch: arch,
}) })
if err != nil { if err != nil {
@ -1160,6 +1211,97 @@ func main() {
return copier.Copy(context.Background(), opt) return copier.Copy(context.Background(), opt)
}, },
}, },
{
Name: "optimize",
Usage: "Optimize a source nydus image and push to the target",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "source",
Required: true,
Usage: "Source (Nydus) image reference",
EnvVars: []string{"SOURCE"},
},
&cli.StringFlag{
Name: "target",
Required: true,
Usage: "Target (Nydus) image reference",
EnvVars: []string{"TARGET"},
},
&cli.BoolFlag{
Name: "source-insecure",
Required: false,
Usage: "Skip verifying server certs for HTTPS source registry",
EnvVars: []string{"SOURCE_INSECURE"},
},
&cli.BoolFlag{
Name: "target-insecure",
Required: false,
Usage: "Skip verifying server certs for HTTPS target registry",
EnvVars: []string{"TARGET_INSECURE"},
},
&cli.StringFlag{
Name: "policy",
Value: "separated-blob-with-prefetch-files",
Usage: "Specify the optimizing way",
EnvVars: []string{"OPTIMIZE_POLICY"},
},
&cli.StringFlag{
Name: "prefetch-files",
Required: false,
Usage: "File path to include prefetch files for optimization",
EnvVars: []string{"PREFETCH_FILES"},
},
&cli.StringFlag{
Name: "work-dir",
Value: "./tmp",
Usage: "Working directory for image optimization",
EnvVars: []string{"WORK_DIR"},
},
&cli.StringFlag{
Name: "nydus-image",
Value: "nydus-image",
Usage: "Path to the nydus-image binary, default to search in PATH",
EnvVars: []string{"NYDUS_IMAGE"},
},
&cli.StringFlag{
Name: "push-chunk-size",
Value: "0MB",
Usage: "Chunk size for pushing a blob layer in chunked",
},
},
Action: func(c *cli.Context) error {
setupLogLevel(c)
pushChunkSize, err := humanize.ParseBytes(c.String("push-chunk-size"))
if err != nil {
return errors.Wrap(err, "invalid --push-chunk-size option")
}
if pushChunkSize > 0 {
logrus.Infof("will push layer with chunk size %s", c.String("push-chunk-size"))
}
opt := optimizer.Opt{
WorkDir: c.String("work-dir"),
NydusImagePath: c.String("nydus-image"),
Source: c.String("source"),
Target: c.String("target"),
SourceInsecure: c.Bool("source-insecure"),
TargetInsecure: c.Bool("target-insecure"),
AllPlatforms: c.Bool("all-platforms"),
Platforms: c.String("platform"),
PushChunkSize: int64(pushChunkSize),
PrefetchFilesPath: c.String("prefetch-files"),
}
return optimizer.Optimize(context.Background(), opt)
},
},
{ {
Name: "commit", Name: "commit",
Usage: "Create and push a new nydus image from a container's changes that use a nydus image", Usage: "Create and push a new nydus image from a container's changes that use a nydus image",
@ -1192,7 +1334,7 @@ func main() {
&cli.StringFlag{ &cli.StringFlag{
Name: "container", Name: "container",
Required: true, Required: true,
Usage: "Target container id", Usage: "Target container ID (supports short ID, full ID)",
EnvVars: []string{"CONTAINER"}, EnvVars: []string{"CONTAINER"},
}, },
&cli.StringFlag{ &cli.StringFlag{
@ -1265,7 +1407,7 @@ func main() {
} }
cm, err := committer.NewCommitter(opt) cm, err := committer.NewCommitter(opt)
if err != nil { if err != nil {
return errors.Wrap(err, "create commiter") return errors.Wrap(err, "failed to create committer instance")
} }
return cm.Commit(c.Context, opt) return cm.Commit(c.Context, opt)
}, },
@ -1296,4 +1438,39 @@ func setupLogLevel(c *cli.Context) {
} }
logrus.SetLevel(logLevel) logrus.SetLevel(logLevel)
if c.String("log-file") != "" {
f, err := os.OpenFile(c.String("log-file"), os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
logrus.Errorf("failed to open log file: %+v", err)
return
}
logrus.SetOutput(f)
}
}
func getGlobalFlags() []cli.Flag {
return []cli.Flag{
&cli.BoolFlag{
Name: "debug",
Aliases: []string{"D"},
Required: false,
Value: false,
Usage: "Enable debug log level, overwrites the 'log-level' option",
EnvVars: []string{"DEBUG_LOG_LEVEL"},
},
&cli.StringFlag{
Name: "log-level",
Aliases: []string{"l"},
Value: "info",
Usage: "Set log level (panic, fatal, error, warn, info, debug, trace)",
EnvVars: []string{"LOG_LEVEL"},
},
&cli.StringFlag{
Name: "log-file",
Required: false,
Usage: "Write logs to a file",
EnvVars: []string{"LOG_FILE"},
},
}
} }

View File

@ -8,9 +8,13 @@ package main
import ( import (
"encoding/json" "encoding/json"
"flag" "flag"
"fmt"
"os" "os"
"testing" "testing"
"github.com/agiledragon/gomonkey/v2"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"github.com/urfave/cli/v2" "github.com/urfave/cli/v2"
) )
@ -80,54 +84,13 @@ func TestParseBackendConfig(t *testing.T) {
} }
func TestGetBackendConfig(t *testing.T) { func TestGetBackendConfig(t *testing.T) {
app := &cli.App{ tests := []struct {
Flags: []cli.Flag{ backendType string
&cli.StringFlag{ backendConfig string
Name: "prefixbackend-type", }{
Value: "", {
}, backendType: "oss",
&cli.StringFlag{ backendConfig: `
Name: "prefixbackend-config",
Value: "",
},
&cli.StringFlag{
Name: "prefixbackend-config-file",
Value: "",
},
},
}
ctx := cli.NewContext(app, nil, nil)
backendType, backendConfig, err := getBackendConfig(ctx, "prefix", false)
require.NoError(t, err)
require.Empty(t, backendType)
require.Empty(t, backendConfig)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.Error(t, err)
require.Contains(t, err.Error(), "backend type is empty, please specify option")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
flagSet := flag.NewFlagSet("test1", flag.PanicOnError)
flagSet.String("prefixbackend-type", "errType", "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.Error(t, err)
require.Contains(t, err.Error(), "backend-type should be one of")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
flagSet = flag.NewFlagSet("test2", flag.PanicOnError)
flagSet.String("prefixbackend-type", "oss", "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.Error(t, err)
require.Contains(t, err.Error(), "backend configuration is empty, please specify option")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
configJSON := `
{ {
"bucket_name": "test", "bucket_name": "test",
"endpoint": "region.oss.com", "endpoint": "region.oss.com",
@ -135,45 +98,106 @@ func TestGetBackendConfig(t *testing.T) {
"access_key_secret": "testSK", "access_key_secret": "testSK",
"meta_prefix": "meta", "meta_prefix": "meta",
"blob_prefix": "blob" "blob_prefix": "blob"
}` }`,
require.True(t, json.Valid([]byte(configJSON))) },
{
backendType: "localfs",
backendConfig: `
{
"dir": "/path/to/blobs"
}`,
},
}
flagSet = flag.NewFlagSet("test3", flag.PanicOnError) for _, test := range tests {
flagSet.String("prefixbackend-type", "oss", "") t.Run(fmt.Sprintf("backend config %s", test.backendType), func(t *testing.T) {
flagSet.String("prefixbackend-config", configJSON, "") app := &cli.App{
ctx = cli.NewContext(app, flagSet, nil) Flags: []cli.Flag{
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true) &cli.StringFlag{
require.NoError(t, err) Name: "prefixbackend-type",
require.Equal(t, "oss", backendType) Value: "",
require.Equal(t, configJSON, backendConfig) },
&cli.StringFlag{
Name: "prefixbackend-config",
Value: "",
},
&cli.StringFlag{
Name: "prefixbackend-config-file",
Value: "",
},
},
}
ctx := cli.NewContext(app, nil, nil)
file, err := os.CreateTemp("", "nydusify-backend-config-test.json") backendType, backendConfig, err := getBackendConfig(ctx, "prefix", false)
require.NoError(t, err) require.NoError(t, err)
defer os.RemoveAll(file.Name()) require.Empty(t, backendType)
require.Empty(t, backendConfig)
_, err = file.WriteString(configJSON) backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.NoError(t, err) require.Error(t, err)
file.Sync() require.Contains(t, err.Error(), "backend type is empty, please specify option")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
flagSet = flag.NewFlagSet("test4", flag.PanicOnError) flagSet := flag.NewFlagSet("test1", flag.PanicOnError)
flagSet.String("prefixbackend-type", "oss", "") flagSet.String("prefixbackend-type", "errType", "")
flagSet.String("prefixbackend-config-file", file.Name(), "") ctx = cli.NewContext(app, flagSet, nil)
ctx = cli.NewContext(app, flagSet, nil) backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true) require.Error(t, err)
require.NoError(t, err) require.Contains(t, err.Error(), "backend-type should be one of")
require.Equal(t, "oss", backendType) require.Empty(t, backendType)
require.Equal(t, configJSON, backendConfig) require.Empty(t, backendConfig)
flagSet = flag.NewFlagSet("test5", flag.PanicOnError) flagSet = flag.NewFlagSet("test2", flag.PanicOnError)
flagSet.String("prefixbackend-type", "oss", "") flagSet.String("prefixbackend-type", test.backendType, "")
flagSet.String("prefixbackend-config", configJSON, "") ctx = cli.NewContext(app, flagSet, nil)
flagSet.String("prefixbackend-config-file", file.Name(), "") backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
ctx = cli.NewContext(app, flagSet, nil) require.Error(t, err)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true) require.Contains(t, err.Error(), "backend configuration is empty, please specify option")
require.Error(t, err) require.Empty(t, backendType)
require.Contains(t, err.Error(), "--backend-config conflicts with --backend-config-file") require.Empty(t, backendConfig)
require.Empty(t, backendType)
require.Empty(t, backendConfig) require.True(t, json.Valid([]byte(test.backendConfig)))
flagSet = flag.NewFlagSet("test3", flag.PanicOnError)
flagSet.String("prefixbackend-type", test.backendType, "")
flagSet.String("prefixbackend-config", test.backendConfig, "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.NoError(t, err)
require.Equal(t, test.backendType, backendType)
require.Equal(t, test.backendConfig, backendConfig)
file, err := os.CreateTemp("", "nydusify-backend-config-test.json")
require.NoError(t, err)
defer os.RemoveAll(file.Name())
_, err = file.WriteString(test.backendConfig)
require.NoError(t, err)
file.Sync()
flagSet = flag.NewFlagSet("test4", flag.PanicOnError)
flagSet.String("prefixbackend-type", test.backendType, "")
flagSet.String("prefixbackend-config-file", file.Name(), "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.NoError(t, err)
require.Equal(t, test.backendType, backendType)
require.Equal(t, test.backendConfig, backendConfig)
flagSet = flag.NewFlagSet("test5", flag.PanicOnError)
flagSet.String("prefixbackend-type", test.backendType, "")
flagSet.String("prefixbackend-config", test.backendConfig, "")
flagSet.String("prefixbackend-config-file", file.Name(), "")
ctx = cli.NewContext(app, flagSet, nil)
backendType, backendConfig, err = getBackendConfig(ctx, "prefix", true)
require.Error(t, err)
require.Contains(t, err.Error(), "--backend-config conflicts with --backend-config-file")
require.Empty(t, backendType)
require.Empty(t, backendConfig)
})
}
} }
func TestGetTargetReference(t *testing.T) { func TestGetTargetReference(t *testing.T) {
@ -320,3 +344,50 @@ func TestGetPrefetchPatterns(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, "/", patterns) require.Equal(t, "/", patterns)
} }
func TestGetGlobalFlags(t *testing.T) {
flags := getGlobalFlags()
require.Equal(t, 3, len(flags))
}
func TestSetupLogLevelWithLogFile(t *testing.T) {
logFilePath := "test_log_file.log"
defer os.Remove(logFilePath)
c := &cli.Context{}
patches := gomonkey.ApplyMethodSeq(c, "String", []gomonkey.OutputCell{
{Values: []interface{}{"info"}, Times: 1},
{Values: []interface{}{"test_log_file.log"}, Times: 2},
})
defer patches.Reset()
setupLogLevel(c)
file, err := os.Open(logFilePath)
assert.NoError(t, err)
assert.NotNil(t, file)
file.Close()
logrusOutput := logrus.StandardLogger().Out
assert.NotNil(t, logrusOutput)
logrus.Info("This is a test log message")
content, err := os.ReadFile(logFilePath)
assert.NoError(t, err)
assert.Contains(t, string(content), "This is a test log message")
}
func TestSetupLogLevelWithInvalidLogFile(t *testing.T) {
c := &cli.Context{}
patches := gomonkey.ApplyMethodSeq(c, "String", []gomonkey.OutputCell{
{Values: []interface{}{"info"}, Times: 1},
{Values: []interface{}{"test/test_log_file.log"}, Times: 2},
})
defer patches.Reset()
setupLogLevel(c)
logrusOutput := logrus.StandardLogger().Out
assert.NotNil(t, logrusOutput)
}

View File

@ -1,6 +1,8 @@
module github.com/dragonflyoss/nydus/contrib/nydusify module github.com/dragonflyoss/nydus/contrib/nydusify
go 1.21 go 1.23.1
toolchain go1.23.6
require ( require (
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
@ -37,8 +39,11 @@ require (
require ( require (
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2 // indirect github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2 // indirect
github.com/BraveY/snapshotter-converter v0.0.5 // indirect
github.com/CloudNativeAI/model-spec v0.0.2 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/Microsoft/hcsshim v0.11.5 // indirect github.com/Microsoft/hcsshim v0.11.5 // indirect
github.com/agiledragon/gomonkey/v2 v2.13.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 // indirect github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 // indirect github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 // indirect github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 // indirect
@ -123,4 +128,4 @@ require (
gopkg.in/yaml.v3 v3.0.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect
) )
replace github.com/containerd/containerd => github.com/nydusaccelerator/containerd v0.0.0-20240605070649-62e0d4d66f9f replace github.com/containerd/containerd => github.com/nydusaccelerator/containerd v1.7.18-nydus.10

View File

@ -3,11 +3,21 @@ github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8= github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2 h1:dIScnXFlF784X79oi7MzVT6GWqr/W1uUt0pB5CsDs9M= github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2 h1:dIScnXFlF784X79oi7MzVT6GWqr/W1uUt0pB5CsDs9M=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2/go.mod h1:gCLVsLfv1egrcZu+GoJATN5ts75F2s62ih/457eWzOw= github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20231105174938-2b5cbb29f3e2/go.mod h1:gCLVsLfv1egrcZu+GoJATN5ts75F2s62ih/457eWzOw=
github.com/BraveY/snapshotter-converter v0.0.5 h1:h3zAB31u16EOkshS2J9Nx40RiWSjH6zd5baOSmjLCOg=
github.com/BraveY/snapshotter-converter v0.0.5/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
github.com/BraveY/snapshotter-converter v0.0.6-0.20250409034316-66511579fa6d h1:00wAtig4otPLOMJN+CZHvG4MWm+g4NMY6j0K7eYEFNk=
github.com/BraveY/snapshotter-converter v0.0.6-0.20250409034316-66511579fa6d/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
github.com/BraveY/snapshotter-converter v0.0.6-0.20250409042404-e997e14906b7 h1:c9aFn0vSkXe1nrGe5mONSRs18/BXJKEiSiHvZyaXlBE=
github.com/BraveY/snapshotter-converter v0.0.6-0.20250409042404-e997e14906b7/go.mod h1:nOVwsdXqdeltxr12x0t0JIbYDD+cdmdBx0HA2pYpxQY=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/CloudNativeAI/model-spec v0.0.2 h1:uCO86kMk8wwadn8vKs0wT4petig5crByTIngdO3L2cQ=
github.com/CloudNativeAI/model-spec v0.0.2/go.mod h1:3U/4zubBfbUkW59ATSg41HnkYyKrKUcKFH/cVdoPQnk=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY= github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU= github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/Microsoft/hcsshim v0.11.5 h1:haEcLNpj9Ka1gd3B3tAEs9CpE0c+1IhoL59w/exYU38= github.com/Microsoft/hcsshim v0.11.5 h1:haEcLNpj9Ka1gd3B3tAEs9CpE0c+1IhoL59w/exYU38=
github.com/Microsoft/hcsshim v0.11.5/go.mod h1:MV8xMfmECjl5HdO7U/3/hFVnkmSBjAjmA09d4bExKcU= github.com/Microsoft/hcsshim v0.11.5/go.mod h1:MV8xMfmECjl5HdO7U/3/hFVnkmSBjAjmA09d4bExKcU=
github.com/agiledragon/gomonkey/v2 v2.13.0 h1:B24Jg6wBI1iB8EFR1c+/aoTg7QN/Cum7YffG8KMIyYo=
github.com/agiledragon/gomonkey/v2 v2.13.0/go.mod h1:ap1AmDzcVOAz1YpeJ3TCzIgstoaWLA6jbbgxfB4w2iY=
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible h1:8psS8a+wKfiLt1iVDX79F7Y6wUM49Lcha2FMXt4UM8g= github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible h1:8psS8a+wKfiLt1iVDX79F7Y6wUM49Lcha2FMXt4UM8g=
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8= github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
github.com/aws/aws-sdk-go-v2 v1.24.1 h1:xAojnj+ktS95YZlDf0zxWBkbFtymPeDP+rvUQIH3uAU= github.com/aws/aws-sdk-go-v2 v1.24.1 h1:xAojnj+ktS95YZlDf0zxWBkbFtymPeDP+rvUQIH3uAU=
@ -146,6 +156,7 @@ github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I= github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-hclog v1.6.2 h1:NOtoftovWkDheyUM/8JW3QMiXyxJK3uHRK7wV04nD2I= github.com/hashicorp/go-hclog v1.6.2 h1:NOtoftovWkDheyUM/8JW3QMiXyxJK3uHRK7wV04nD2I=
@ -164,6 +175,7 @@ github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9Y
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.4 h1:Ej5ixsIri7BrIjBkRZLTo6ghwrEtHFk7ijlczPW4fZ4= github.com/klauspost/compress v1.17.4 h1:Ej5ixsIri7BrIjBkRZLTo6ghwrEtHFk7ijlczPW4fZ4=
@ -199,8 +211,8 @@ github.com/moby/sys/signal v0.7.0 h1:25RW3d5TnQEoKvRbEKUGay6DCQ46IxAVTT9CUMgmsSI
github.com/moby/sys/signal v0.7.0/go.mod h1:GQ6ObYZfqacOwTtlXvcmh9A26dVRul/hbOZn88Kg8Tg= github.com/moby/sys/signal v0.7.0/go.mod h1:GQ6ObYZfqacOwTtlXvcmh9A26dVRul/hbOZn88Kg8Tg=
github.com/moby/sys/user v0.1.0 h1:WmZ93f5Ux6het5iituh9x2zAG7NFY9Aqi49jjE1PaQg= github.com/moby/sys/user v0.1.0 h1:WmZ93f5Ux6het5iituh9x2zAG7NFY9Aqi49jjE1PaQg=
github.com/moby/sys/user v0.1.0/go.mod h1:fKJhFOnsCN6xZ5gSfbM6zaHGgDJMrqt9/reuj4T7MmU= github.com/moby/sys/user v0.1.0/go.mod h1:fKJhFOnsCN6xZ5gSfbM6zaHGgDJMrqt9/reuj4T7MmU=
github.com/nydusaccelerator/containerd v0.0.0-20240605070649-62e0d4d66f9f h1:jbWfZohlnnbKXcYykpfw0VT8baJpI90sWg0hxvD596g= github.com/nydusaccelerator/containerd v1.7.18-nydus.10 h1:ir28uQOPtYtFP+gry7sbiwaOHUISC1viPeogTDTff+Q=
github.com/nydusaccelerator/containerd v0.0.0-20240605070649-62e0d4d66f9f/go.mod h1:IYEk9/IO6wAPUz2bCMVUbsfXjzw5UNP5fLz4PsUygQ4= github.com/nydusaccelerator/containerd v1.7.18-nydus.10/go.mod h1:IYEk9/IO6wAPUz2bCMVUbsfXjzw5UNP5fLz4PsUygQ4=
github.com/oklog/run v1.1.0 h1:GEenZ1cK0+q0+wsJew9qUg/DyD8k3JzYsZAi5gYi2mA= github.com/oklog/run v1.1.0 h1:GEenZ1cK0+q0+wsJew9qUg/DyD8k3JzYsZAi5gYi2mA=
github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU= github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U= github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
@ -232,6 +244,8 @@ github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6 h1:pnnLyeX7o/5aX8qUQ69P/mLojDqwda8hFOCBTmP/6hw= github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6 h1:pnnLyeX7o/5aX8qUQ69P/mLojDqwda8hFOCBTmP/6hw=
github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6/go.mod h1:39R/xuhNgVhi+K0/zst4TLrJrVmbm6LVgl4A0+ZFS5M= github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6/go.mod h1:39R/xuhNgVhi+K0/zst4TLrJrVmbm6LVgl4A0+ZFS5M=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@ -352,6 +366,7 @@ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=

View File

@ -9,6 +9,7 @@ import (
"fmt" "fmt"
"io" "io"
"github.com/containerd/containerd/remotes"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
@ -27,6 +28,7 @@ type Backend interface {
Check(blobID string) (bool, error) Check(blobID string) (bool, error)
Type() Type Type() Type
Reader(blobID string) (io.ReadCloser, error) Reader(blobID string) (io.ReadCloser, error)
RangeReader(blobID string) (remotes.RangeReadCloser, error)
Size(blobID string) (int64, error) Size(blobID string) (int64, error)
} }

View File

@ -17,6 +17,7 @@ import (
"time" "time"
"github.com/aliyun/aliyun-oss-go-sdk/oss" "github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/containerd/containerd/remotes"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
@ -259,6 +260,20 @@ func (b *OSSBackend) Type() Type {
return OssBackend return OssBackend
} }
type RangeReader struct {
b *OSSBackend
blobID string
}
func (rr *RangeReader) Reader(offset int64, size int64) (io.ReadCloser, error) {
return rr.b.bucket.GetObject(rr.blobID, oss.Range(offset, offset+size-1))
}
func (b *OSSBackend) RangeReader(blobID string) (remotes.RangeReadCloser, error) {
blobID = b.objectPrefix + blobID
return &RangeReader{b: b, blobID: blobID}, nil
}
func (b *OSSBackend) Reader(blobID string) (io.ReadCloser, error) { func (b *OSSBackend) Reader(blobID string) (io.ReadCloser, error) {
blobID = b.objectPrefix + blobID blobID = b.objectPrefix + blobID
rc, err := b.bucket.GetObject(blobID) rc, err := b.bucket.GetObject(blobID)

View File

@ -5,6 +5,7 @@ import (
"io" "io"
"os" "os"
"github.com/containerd/containerd/remotes"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors" "github.com/pkg/errors"
@ -47,6 +48,10 @@ func (r *Registry) Type() Type {
return RegistryBackend return RegistryBackend
} }
func (r *Registry) RangeReader(_ string) (remotes.RangeReadCloser, error) {
panic("not implemented")
}
func (r *Registry) Reader(_ string) (io.ReadCloser, error) { func (r *Registry) Reader(_ string) (io.ReadCloser, error) {
panic("not implemented") panic("not implemented")
} }

View File

@ -22,6 +22,7 @@ import (
"github.com/aws/aws-sdk-go-v2/feature/s3/manager" "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
"github.com/aws/aws-sdk-go-v2/service/s3" "github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types" "github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/containerd/containerd/remotes"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
@ -160,6 +161,25 @@ func (b *S3Backend) blobObjectKey(blobID string) string {
return b.objectPrefix + blobID return b.objectPrefix + blobID
} }
type rangeReader struct {
b *S3Backend
objectKey string
}
func (rr *rangeReader) Reader(offset int64, size int64) (io.ReadCloser, error) {
output, err := rr.b.client.GetObject(context.TODO(), &s3.GetObjectInput{
Bucket: &rr.b.bucketName,
Key: &rr.objectKey,
Range: aws.String(fmt.Sprintf("bytes=%d-%d", offset, offset+size-1)),
})
return output.Body, err
}
func (b *S3Backend) RangeReader(blobID string) (remotes.RangeReadCloser, error) {
objectKey := b.blobObjectKey(blobID)
return &rangeReader{b: b, objectKey: objectKey}, nil
}
func (b *S3Backend) Reader(blobID string) (io.ReadCloser, error) { func (b *S3Backend) Reader(blobID string) (io.ReadCloser, error) {
objectKey := b.blobObjectKey(blobID) objectKey := b.blobObjectKey(blobID)
output, err := b.client.GetObject(context.TODO(), &s3.GetObjectInput{ output, err := b.client.GetObject(context.TODO(), &s3.GetObjectInput{

View File

@ -38,7 +38,12 @@ type CompactOption struct {
BackendType string BackendType string
BackendConfigPath string BackendConfigPath string
OutputJSONPath string OutputJSONPath string
CompactConfigPath string
MinUsedRatio string
CompactBlobSize string
MaxCompactSize string
LayersToCompact string
BlobsDir string
} }
type GenerateOption struct { type GenerateOption struct {
@ -82,7 +87,11 @@ func (builder *Builder) Compact(option CompactOption) error {
args := []string{ args := []string{
"compact", "compact",
"--bootstrap", option.BootstrapPath, "--bootstrap", option.BootstrapPath,
"--config", option.CompactConfigPath, "--blob-dir", option.BlobsDir,
"--min-used-ratio", option.MinUsedRatio,
"--compact-blob-size", option.CompactBlobSize,
"--max-compact-size", option.MaxCompactSize,
"--layers-to-compact", option.LayersToCompact,
"--backend-type", option.BackendType, "--backend-type", option.BackendType,
"--backend-config-file", option.BackendConfigPath, "--backend-config-file", option.BackendConfigPath,
"--log-level", "info", "--log-level", "info",

View File

@ -18,7 +18,7 @@ import (
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/containerd/containerd/images" "github.com/containerd/containerd/images"
digest "github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go" "github.com/opencontainers/image-spec/specs-go"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors" "github.com/pkg/errors"
@ -295,23 +295,23 @@ func (cache *Cache) layerToRecord(layer *ocispec.Descriptor) *Record {
return nil return nil
} }
func mergeRecord(old, new *Record) *Record { func mergeRecord(oldRec, newRec *Record) *Record {
if old == nil { if oldRec == nil {
old = &Record{ oldRec = &Record{
SourceChainID: new.SourceChainID, SourceChainID: newRec.SourceChainID,
} }
} }
if new.NydusBootstrapDesc != nil { if newRec.NydusBootstrapDesc != nil {
old.NydusBootstrapDesc = new.NydusBootstrapDesc oldRec.NydusBootstrapDesc = newRec.NydusBootstrapDesc
old.NydusBootstrapDiffID = new.NydusBootstrapDiffID oldRec.NydusBootstrapDiffID = newRec.NydusBootstrapDiffID
} }
if new.NydusBlobDesc != nil { if newRec.NydusBlobDesc != nil {
old.NydusBlobDesc = new.NydusBlobDesc oldRec.NydusBlobDesc = newRec.NydusBlobDesc
} }
return old return oldRec
} }
func (cache *Cache) importRecordsFromLayers(layers []ocispec.Descriptor) { func (cache *Cache) importRecordsFromLayers(layers []ocispec.Descriptor) {

View File

@ -13,43 +13,44 @@ import (
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/rule" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/rule"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
) )
// Opt defines Checker options. // Opt defines Checker options.
// Note: target is the Nydus image reference. // Note: target is the nydus image reference.
type Opt struct { type Opt struct {
WorkDir string WorkDir string
Source string
Target string Source string
SourceInsecure bool Target string
TargetInsecure bool SourceInsecure bool
TargetInsecure bool
SourceBackendType string
SourceBackendConfig string
TargetBackendType string
TargetBackendConfig string
MultiPlatform bool MultiPlatform bool
NydusImagePath string NydusImagePath string
NydusdPath string NydusdPath string
BackendType string
BackendConfig string
ExpectedArch string ExpectedArch string
} }
// Checker validates Nydus image manifest, bootstrap and mounts filesystem // Checker validates nydus image manifest, bootstrap and mounts filesystem
// by Nydusd to compare file metadata and data with OCI image. // by nydusd to compare file metadata and data between OCI / nydus image.
type Checker struct { type Checker struct {
Opt Opt
sourceParser *parser.Parser sourceParser *parser.Parser
targetParser *parser.Parser targetParser *parser.Parser
} }
// New creates Checker instance, target is the Nydus image reference. // New creates Checker instance, target is the nydus image reference.
func New(opt Opt) (*Checker, error) { func New(opt Opt) (*Checker, error) {
// TODO: support source and target resolver
targetRemote, err := provider.DefaultRemote(opt.Target, opt.TargetInsecure) targetRemote, err := provider.DefaultRemote(opt.Target, opt.TargetInsecure)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "Init target image parser") return nil, errors.Wrap(err, "init target image parser")
} }
targetParser, err := parser.New(targetRemote, opt.ExpectedArch) targetParser, err := parser.New(targetRemote, opt.ExpectedArch)
if err != nil { if err != nil {
@ -63,7 +64,7 @@ func New(opt Opt) (*Checker, error) {
return nil, errors.Wrap(err, "Init source image parser") return nil, errors.Wrap(err, "Init source image parser")
} }
sourceParser, err = parser.New(sourceRemote, opt.ExpectedArch) sourceParser, err = parser.New(sourceRemote, opt.ExpectedArch)
if sourceParser == nil { if err != nil {
return nil, errors.Wrap(err, "failed to create parser") return nil, errors.Wrap(err, "failed to create parser")
} }
} }
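The reworked `Opt` replaces the single backend pair with separate source and target backend settings. A minimal sketch of constructing a `Checker` with the new fields and running a check; the concrete reference and backend values are placeholders, not a recommended configuration.

```go
// Hypothetical wiring of the reworked checker.Opt; all values are placeholders.
package main

import (
	"context"
	"log"

	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker"
)

func main() {
	c, err := checker.New(checker.Opt{
		WorkDir:             "./tmp/check",
		Source:              "registry.example.com/app:oci",   // optional OCI source
		Target:              "registry.example.com/app:nydus", // nydus image to verify
		SourceInsecure:      false,
		TargetInsecure:      false,
		SourceBackendType:   "", // empty: blobs live in the registry
		SourceBackendConfig: "",
		TargetBackendType:   "oss", // e.g. "oss", "s3", or empty
		TargetBackendConfig: `{"bucket_name":"blobs","endpoint":"region.oss.example.com"}`,
		MultiPlatform:       false,
		NydusImagePath:      "nydus-image",
		NydusdPath:          "nydusd",
		ExpectedArch:        "amd64",
	})
	if err != nil {
		log.Fatal(err)
	}
	if err := c.Check(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```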
@ -77,7 +78,7 @@ func New(opt Opt) (*Checker, error) {
return checker, nil return checker, nil
} }
// Check checks Nydus image, and outputs image information to work // Check checks nydus image, and outputs image information to work
// directory, the check workflow is composed of various rules. // directory, the check workflow is composed of various rules.
func (checker *Checker) Check(ctx context.Context) error { func (checker *Checker) Check(ctx context.Context) error {
if err := checker.check(ctx); err != nil { if err := checker.check(ctx); err != nil {
@ -93,12 +94,13 @@ func (checker *Checker) Check(ctx context.Context) error {
return nil return nil
} }
// Check checks Nydus image, and outputs image information to work // Check checks nydus image, and outputs image information to work
// directory, the check workflow is composed of various rules. // directory, the check workflow is composed of various rules.
func (checker *Checker) check(ctx context.Context) error { func (checker *Checker) check(ctx context.Context) error {
logrus.WithField("image", checker.targetParser.Remote.Ref).Infof("parsing image")
targetParsed, err := checker.targetParser.Parse(ctx) targetParsed, err := checker.targetParser.Parse(ctx)
if err != nil { if err != nil {
return errors.Wrap(err, "parse Nydus image") return errors.Wrap(err, "parse nydus image")
} }
var sourceParsed *parser.Parsed var sourceParsed *parser.Parsed
@ -107,89 +109,66 @@ func (checker *Checker) check(ctx context.Context) error {
if err != nil { if err != nil {
return errors.Wrap(err, "parse source image") return errors.Wrap(err, "parse source image")
} }
} else {
sourceParsed = targetParsed
} }
if err := os.RemoveAll(checker.WorkDir); err != nil { if err := os.RemoveAll(checker.WorkDir); err != nil {
return errors.Wrap(err, "clean up work directory") return errors.Wrap(err, "clean up work directory")
} }
if err := os.MkdirAll(filepath.Join(checker.WorkDir, "fs"), 0755); err != nil { if sourceParsed != nil {
return errors.Wrap(err, "create work directory") if err := checker.Output(ctx, sourceParsed, filepath.Join(checker.WorkDir, "source")); err != nil {
} return errors.Wrapf(err, "output image information: %s", sourceParsed.Remote.Ref)
if err := checker.Output(ctx, sourceParsed, targetParsed, checker.WorkDir); err != nil {
return errors.Wrap(err, "output image information")
}
mode := "direct"
digestValidate := false
if targetParsed.NydusImage != nil {
nydusManifest := parser.FindNydusBootstrapDesc(&targetParsed.NydusImage.Manifest)
if nydusManifest != nil {
v := utils.GetNydusFsVersionOrDefault(nydusManifest.Annotations, utils.V5)
if v == utils.V5 {
// Digest validate is not currently supported for v6,
// but v5 supports it. In order to make the check more sufficient,
// this validate needs to be turned on for v5.
digestValidate = true
}
} }
} }
var sourceRemote *remote.Remote if targetParsed != nil {
if checker.sourceParser != nil { if err := checker.Output(ctx, targetParsed, filepath.Join(checker.WorkDir, "target")); err != nil {
sourceRemote = checker.sourceParser.Remote return errors.Wrapf(err, "output image information: %s", targetParsed.Remote.Ref)
}
} }
rules := []rule.Rule{ rules := []rule.Rule{
&rule.ManifestRule{ &rule.ManifestRule{
SourceParsed: sourceParsed, SourceParsed: sourceParsed,
TargetParsed: targetParsed, TargetParsed: targetParsed,
MultiPlatform: checker.MultiPlatform,
BackendType: checker.BackendType,
ExpectedArch: checker.ExpectedArch,
}, },
&rule.BootstrapRule{ &rule.BootstrapRule{
Parsed: targetParsed, WorkDir: checker.WorkDir,
NydusImagePath: checker.NydusImagePath, NydusImagePath: checker.NydusImagePath,
BackendType: checker.BackendType,
BootstrapPath: filepath.Join(checker.WorkDir, "nydus_bootstrap"), SourceParsed: sourceParsed,
DebugOutputPath: filepath.Join(checker.WorkDir, "nydus_bootstrap_debug.json"), TargetParsed: targetParsed,
SourceBackendType: checker.SourceBackendType,
SourceBackendConfig: checker.SourceBackendConfig,
TargetBackendType: checker.TargetBackendType,
TargetBackendConfig: checker.TargetBackendConfig,
}, },
&rule.FilesystemRule{ &rule.FilesystemRule{
Source: checker.Source, WorkDir: checker.WorkDir,
SourceMountPath: filepath.Join(checker.WorkDir, "fs/source_mounted"), NydusdPath: checker.NydusdPath,
SourceParsed: sourceParsed,
SourcePath: filepath.Join(checker.WorkDir, "fs/source"), SourceImage: &rule.Image{
SourceRemote: sourceRemote, Parsed: sourceParsed,
Target: checker.Target, Insecure: checker.SourceInsecure,
TargetInsecure: checker.TargetInsecure,
PlainHTTP: checker.targetParser.Remote.IsWithHTTP(),
NydusdConfig: tool.NydusdConfig{
EnablePrefetch: true,
NydusdPath: checker.NydusdPath,
BackendType: checker.BackendType,
BackendConfig: checker.BackendConfig,
BootstrapPath: filepath.Join(checker.WorkDir, "nydus_bootstrap"),
ConfigPath: filepath.Join(checker.WorkDir, "fs/nydusd_config.json"),
BlobCacheDir: filepath.Join(checker.WorkDir, "fs/nydus_blobs"),
MountPath: filepath.Join(checker.WorkDir, "fs/nydus_mounted"),
APISockPath: filepath.Join(checker.WorkDir, "fs/nydus_api.sock"),
Mode: mode,
DigestValidate: digestValidate,
}, },
TargetImage: &rule.Image{
Parsed: targetParsed,
Insecure: checker.TargetInsecure,
},
SourceBackendType: checker.SourceBackendType,
SourceBackendConfig: checker.SourceBackendConfig,
TargetBackendType: checker.TargetBackendType,
TargetBackendConfig: checker.TargetBackendConfig,
}, },
} }
for _, rule := range rules { for _, rule := range rules {
if err := rule.Validate(); err != nil { if err := rule.Validate(); err != nil {
return errors.Wrapf(err, "validate rule %s", rule.Name()) return errors.Wrapf(err, "validate %s failed", rule.Name())
} }
} }
logrus.Infof("Verified Nydus image %s", checker.targetParser.Remote.Ref) logrus.Info("verified image")
return nil return nil
} }

View File

@ -7,14 +7,20 @@ package checker
import ( import (
"context" "context"
"encoding/json" "encoding/json"
"io"
"os" "os"
"path/filepath" "path/filepath"
"github.com/containerd/containerd/archive/compression"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
) )
func prettyDump(obj interface{}, name string) error { func prettyDump(obj interface{}, name string) error {
@ -25,70 +31,107 @@ func prettyDump(obj interface{}, name string) error {
return os.WriteFile(name, bytes, 0644) return os.WriteFile(name, bytes, 0644)
} }
// Output outputs OCI and Nydus image manifest, index, config to JSON file. // Output outputs OCI and nydus image manifest, index, config to JSON file.
// Prefer to use source image to output OCI image information. // Prefer to use source image to output OCI image information.
func (checker *Checker) Output( func (checker *Checker) Output(
ctx context.Context, sourceParsed, targetParsed *parser.Parsed, outputPath string, ctx context.Context, parsed *parser.Parsed, dir string,
) error { ) error {
logrus.Infof("Dumping OCI and Nydus manifests to %s", outputPath) logrus.WithField("type", tool.CheckImageType(parsed)).WithField("image", parsed.Remote.Ref).Info("dumping manifest")
if sourceParsed.Index != nil { if err := os.MkdirAll(dir, 0755); err != nil {
return errors.Wrap(err, "create output directory")
}
if parsed.Index != nil && parsed.OCIImage != nil {
if err := prettyDump( if err := prettyDump(
sourceParsed.Index, parsed.Index,
filepath.Join(outputPath, "oci_index.json"), filepath.Join(dir, "oci_index.json"),
); err != nil { ); err != nil {
return errors.Wrap(err, "output oci index file") return errors.Wrap(err, "output oci index file")
} }
} }
if targetParsed.Index != nil { if parsed.Index != nil && parsed.NydusImage != nil {
if err := prettyDump( if err := prettyDump(
targetParsed.Index, parsed.Index,
filepath.Join(outputPath, "nydus_index.json"), filepath.Join(dir, "nydus_index.json"),
); err != nil { ); err != nil {
return errors.Wrap(err, "output nydus index file") return errors.Wrap(err, "output nydus index file")
} }
} }
if sourceParsed.OCIImage != nil { if parsed.OCIImage != nil {
if err := prettyDump( if err := prettyDump(
sourceParsed.OCIImage.Manifest, parsed.OCIImage.Manifest,
filepath.Join(outputPath, "oci_manifest.json"), filepath.Join(dir, "oci_manifest.json"),
); err != nil { ); err != nil {
return errors.Wrap(err, "output OCI manifest file") return errors.Wrap(err, "output OCI manifest file")
} }
if err := prettyDump( if err := prettyDump(
sourceParsed.OCIImage.Config, parsed.OCIImage.Config,
filepath.Join(outputPath, "oci_config.json"), filepath.Join(dir, "oci_config.json"),
); err != nil { ); err != nil {
return errors.Wrap(err, "output OCI config file") return errors.Wrap(err, "output OCI config file")
} }
} }
if targetParsed.NydusImage != nil { if parsed.NydusImage != nil {
if err := prettyDump( if err := prettyDump(
targetParsed.NydusImage.Manifest, parsed.NydusImage.Manifest,
filepath.Join(outputPath, "nydus_manifest.json"), filepath.Join(dir, "nydus_manifest.json"),
); err != nil { ); err != nil {
return errors.Wrap(err, "output Nydus manifest file") return errors.Wrap(err, "output nydus manifest file")
} }
if err := prettyDump( if err := prettyDump(
targetParsed.NydusImage.Config, parsed.NydusImage.Config,
filepath.Join(outputPath, "nydus_config.json"), filepath.Join(dir, "nydus_config.json"),
); err != nil { ); err != nil {
return errors.Wrap(err, "output Nydus config file") return errors.Wrap(err, "output nydus config file")
} }
target := filepath.Join(outputPath, "nydus_bootstrap") bootstrapDir := filepath.Join(dir, "nydus_bootstrap")
logrus.Infof("Pulling Nydus bootstrap to %s", target) logrus.WithField("type", tool.CheckImageType(parsed)).WithField("image", parsed.Remote.Ref).Info("pulling bootstrap")
bootstrapReader, err := checker.targetParser.PullNydusBootstrap(ctx, targetParsed.NydusImage) var parser *parser.Parser
if dir == "source" {
parser = checker.sourceParser
} else {
parser = checker.targetParser
}
bootstrapReader, err := parser.PullNydusBootstrap(ctx, parsed.NydusImage)
if err != nil { if err != nil {
return errors.Wrap(err, "pull Nydus bootstrap layer") return errors.Wrap(err, "pull nydus bootstrap layer")
} }
defer bootstrapReader.Close() defer bootstrapReader.Close()
if err := utils.UnpackFile(bootstrapReader, utils.BootstrapFileNameInLayer, target); err != nil { tarRc, err := compression.DecompressStream(bootstrapReader)
return errors.Wrap(err, "unpack Nydus bootstrap layer") if err != nil {
return err
}
defer tarRc.Close()
diffID := digest.SHA256.Digester()
if err := utils.UnpackFromTar(io.TeeReader(tarRc, diffID.Hash()), bootstrapDir); err != nil {
return errors.Wrap(err, "unpack nydus bootstrap layer")
}
diffIDs := parsed.NydusImage.Config.RootFS.DiffIDs
manifest := parsed.NydusImage.Manifest
if manifest.ArtifactType != modelspec.ArtifactTypeModelManifest && diffIDs[len(diffIDs)-1] != diffID.Digest() {
return errors.Errorf(
"invalid bootstrap layer diff id: %s (calculated) != %s (in image config)",
diffID.Digest().String(),
diffIDs[len(diffIDs)-1].String(),
)
}
if manifest.ArtifactType == modelspec.ArtifactTypeModelManifest {
if manifest.Subject == nil {
return errors.New("missing subject in manifest")
}
if manifest.Subject.MediaType != ocispec.MediaTypeImageManifest {
return errors.Errorf("invalid subject media type: %s", manifest.Subject.MediaType)
}
} }
} }
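The reworked Output no longer unpacks the bootstrap blindly: it tees the decompressed layer through a SHA-256 digester and compares the result with the last RootFS diff ID (skipping the check for model-artifact manifests). A minimal, self-contained sketch of that check; the helper name and packaging are ours, not part of the patch:

import (
	"fmt"
	"io"

	"github.com/opencontainers/go-digest"
)

// verifyBootstrapDiffID drains a decompressed bootstrap layer while hashing it,
// then compares the digest with the last diff ID recorded in the image config.
func verifyBootstrapDiffID(tarStream io.Reader, diffIDs []digest.Digest) error {
	digester := digest.SHA256.Digester()
	// io.TeeReader feeds every byte read from tarStream into the digester.
	if _, err := io.Copy(io.Discard, io.TeeReader(tarStream, digester.Hash())); err != nil {
		return err
	}
	if len(diffIDs) == 0 || diffIDs[len(diffIDs)-1] != digester.Digest() {
		return fmt.Errorf("invalid bootstrap layer diff id: %s (calculated)", digester.Digest())
	}
	return nil
}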

View File

@ -8,79 +8,90 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"os" "os"
"path/filepath"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/containerd/nydus-snapshotter/pkg/label"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
) )
// BootstrapRule validates bootstrap in Nydus image // BootstrapRule validates bootstrap in nydus image
type BootstrapRule struct { type BootstrapRule struct {
Parsed *parser.Parsed WorkDir string
BootstrapPath string NydusImagePath string
NydusImagePath string
DebugOutputPath string SourceParsed *parser.Parsed
BackendType string TargetParsed *parser.Parsed
SourceBackendType string
SourceBackendConfig string
TargetBackendType string
TargetBackendConfig string
} }
type bootstrapDebug struct { type output struct {
Blobs []string `json:"blobs"` Blobs []string `json:"blobs"`
} }
func (rule *BootstrapRule) Name() string { func (rule *BootstrapRule) Name() string {
return "Bootstrap" return "bootstrap"
} }
func (rule *BootstrapRule) Validate() error { func (rule *BootstrapRule) validate(parsed *parser.Parsed, dir string) error {
logrus.Infof("Checking Nydus bootstrap") if parsed == nil || parsed.NydusImage == nil {
return nil
}
logrus.WithField("type", tool.CheckImageType(parsed)).WithField("image", parsed.Remote.Ref).Info("checking bootstrap")
bootstrapDir := filepath.Join(rule.WorkDir, dir, "nydus_bootstrap")
outputPath := filepath.Join(rule.WorkDir, dir, "nydus_output.json")
// Get blob list in the blob table of bootstrap by calling // Get blob list in the blob table of bootstrap by calling
// `nydus-image check` command // `nydus-image check` command
builder := tool.NewBuilder(rule.NydusImagePath) builder := tool.NewBuilder(rule.NydusImagePath)
if err := builder.Check(tool.BuilderOption{ if err := builder.Check(tool.BuilderOption{
BootstrapPath: rule.BootstrapPath, BootstrapPath: filepath.Join(bootstrapDir, utils.BootstrapFileNameInLayer),
DebugOutputPath: rule.DebugOutputPath, DebugOutputPath: outputPath,
}); err != nil { }); err != nil {
return errors.Wrap(err, "invalid nydus bootstrap format") return errors.Wrap(err, "invalid nydus bootstrap format")
} }
// For registry garbage collection, nydus puts the blobs to // Parse blob list from blob layers in nydus manifest
// the layers in manifest, so here only need to check blob
// list consistency for registry backend.
if rule.BackendType != "registry" {
return nil
}
// Parse blob list from blob layers in Nydus manifest
blobListInLayer := map[string]bool{} blobListInLayer := map[string]bool{}
layers := rule.Parsed.NydusImage.Manifest.Layers layers := parsed.NydusImage.Manifest.Layers
for i, layer := range layers { for i, layer := range layers {
if layer.Annotations != nil && layers[i].Annotations[label.NydusRefLayer] != "" {
// Ignore OCI reference layer check
continue
}
if i != len(layers)-1 { if i != len(layers)-1 {
blobListInLayer[layer.Digest.Hex()] = true blobListInLayer[layer.Digest.Hex()] = true
} }
} }
// Parse blob list from blob table of bootstrap // Parse blob list from blob table of bootstrap
var bootstrap bootstrapDebug var out output
bootstrapBytes, err := os.ReadFile(rule.DebugOutputPath) outputBytes, err := os.ReadFile(outputPath)
if err != nil { if err != nil {
return errors.Wrap(err, "read bootstrap debug json") return errors.Wrap(err, "read bootstrap debug json")
} }
if err := json.Unmarshal(bootstrapBytes, &bootstrap); err != nil { if err := json.Unmarshal(outputBytes, &out); err != nil {
return errors.Wrap(err, "unmarshal bootstrap output JSON") return errors.Wrap(err, "unmarshal bootstrap output JSON")
} }
blobListInBootstrap := map[string]bool{} blobListInBootstrap := map[string]bool{}
lostInLayer := false lostInLayer := false
for _, blobID := range bootstrap.Blobs { for _, blobID := range out.Blobs {
blobListInBootstrap[blobID] = true blobListInBootstrap[blobID] = true
if !blobListInLayer[blobID] { if !blobListInLayer[blobID] {
lostInLayer = true lostInLayer = true
} }
} }
if !lostInLayer { if len(blobListInLayer) == 0 || !lostInLayer {
return nil return nil
} }
@ -94,3 +105,15 @@ func (rule *BootstrapRule) Validate() error {
blobListInLayer, blobListInLayer,
) )
} }
func (rule *BootstrapRule) Validate() error {
if err := rule.validate(rule.SourceParsed, "source"); err != nil {
return errors.Wrap(err, "source image: invalid nydus bootstrap")
}
if err := rule.validate(rule.TargetParsed, "target"); err != nil {
return errors.Wrap(err, "target image: invalid nydus bootstrap")
}
return nil
}
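The consistency rule above boils down to a set comparison between the blob layers listed in the nydus manifest and the blob table reported by `nydus-image check`. A compact sketch of that idea, assuming the same inputs the rule uses:

import (
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// blobsConsistent reports whether every blob referenced by the bootstrap also
// appears as a blob layer in the manifest (the last layer is the bootstrap and
// is skipped). An empty layer set is tolerated, e.g. for OCI reference layers.
func blobsConsistent(layers []ocispec.Descriptor, bootstrapBlobs []string) bool {
	inLayers := map[string]bool{}
	for i, layer := range layers {
		if i != len(layers)-1 {
			inLayers[layer.Digest.Hex()] = true
		}
	}
	if len(inLayers) == 0 {
		return true
	}
	for _, blobID := range bootstrapBlobs {
		if !inLayers[blobID] {
			return false
		}
	}
	return true
}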

View File

@ -14,11 +14,11 @@ import (
"reflect" "reflect"
"syscall" "syscall"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
"github.com/distribution/reference" "github.com/distribution/reference"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/pkg/xattr" "github.com/pkg/xattr"
@ -29,18 +29,23 @@ import (
var WorkerCount uint = 8 var WorkerCount uint = 8
// FilesystemRule compares file metadata and data in the two mountpoints: // FilesystemRule compares file metadata and data in the two mountpoints:
// Mounted by Nydusd for Nydus image, // Mounted by nydusd for nydus image,
// Mounted by Overlayfs for OCI image. // Mounted by Overlayfs for OCI image.
type FilesystemRule struct { type FilesystemRule struct {
NydusdConfig tool.NydusdConfig WorkDir string
Source string NydusdPath string
SourceMountPath string
SourceParsed *parser.Parsed SourceImage *Image
SourcePath string TargetImage *Image
SourceRemote *remote.Remote SourceBackendType string
Target string SourceBackendConfig string
TargetInsecure bool TargetBackendType string
PlainHTTP bool TargetBackendConfig string
}
type Image struct {
Parsed *parser.Parsed
Insecure bool
} }
// Node records file metadata and file data hash. // Node records file metadata and file data hash.
@ -66,14 +71,14 @@ type RegistryBackendConfig struct {
func (node *Node) String() string { func (node *Node) String() string {
return fmt.Sprintf( return fmt.Sprintf(
"Path: %s, Size: %d, Mode: %d, Rdev: %d, Symink: %s, UID: %d, GID: %d, "+ "path: %s, size: %d, mode: %d, rdev: %d, symink: %s, uid: %d, gid: %d, "+
"Xattrs: %v, Hash: %s", node.Path, node.Size, node.Mode, node.Rdev, node.Symlink, "xattrs: %v, hash: %s", node.Path, node.Size, node.Mode, node.Rdev, node.Symlink,
node.UID, node.GID, node.Xattrs, hex.EncodeToString(node.Hash), node.UID, node.GID, node.Xattrs, hex.EncodeToString(node.Hash),
) )
} }
func (rule *FilesystemRule) Name() string { func (rule *FilesystemRule) Name() string {
return "Filesystem" return "filesystem"
} }
func getXattrs(path string) (map[string][]byte, error) { func getXattrs(path string) (map[string][]byte, error) {
@ -132,13 +137,13 @@ func (rule *FilesystemRule) walk(rootfs string) (map[string]Node, error) {
xattrs, err := getXattrs(path) xattrs, err := getXattrs(path)
if err != nil { if err != nil {
logrus.Warnf("Failed to get xattr: %s", err) logrus.Warnf("failed to get xattr: %s", err)
} }
// Calculate file data hash if the `backend-type` option be specified, // Calculate file data hash if the `backend-type` option be specified,
// this will cause that nydusd read data from backend, it's network load // this will cause that nydusd read data from backend, it's network load
var hash []byte var hash []byte
if rule.NydusdConfig.BackendType != "" && info.Mode().IsRegular() { if info.Mode().IsRegular() {
hash, err = utils.HashFile(path) hash, err = utils.HashFile(path)
if err != nil { if err != nil {
return err return err
@ -166,20 +171,146 @@ func (rule *FilesystemRule) walk(rootfs string) (map[string]Node, error) {
return nodes, nil return nodes, nil
} }
func (rule *FilesystemRule) pullSourceImage() (*tool.Image, error) { func (rule *FilesystemRule) mountNydusImage(image *Image, dir string) (func() error, error) {
layers := rule.SourceParsed.OCIImage.Manifest.Layers logrus.WithField("type", tool.CheckImageType(image.Parsed)).WithField("image", image.Parsed.Remote.Ref).Info("mounting image")
digestValidate := false
if image.Parsed.NydusImage != nil {
nydusManifest := parser.FindNydusBootstrapDesc(&image.Parsed.NydusImage.Manifest)
if nydusManifest != nil {
v := utils.GetNydusFsVersionOrDefault(nydusManifest.Annotations, utils.V5)
if v == utils.V5 {
// Digest validate is not currently supported for v6,
// but v5 supports it. In order to make the check more sufficient,
// this validate needs to be turned on for v5.
digestValidate = true
}
}
}
backendType := rule.SourceBackendType
backendConfig := rule.SourceBackendConfig
if dir == "target" {
backendType = rule.TargetBackendType
backendConfig = rule.TargetBackendConfig
}
mountDir := filepath.Join(rule.WorkDir, dir, "mnt")
nydusdDir := filepath.Join(rule.WorkDir, dir, "nydusd")
if err := os.MkdirAll(nydusdDir, 0755); err != nil {
return nil, errors.Wrap(err, "create nydusd directory")
}
nydusdConfig := tool.NydusdConfig{
EnablePrefetch: true,
NydusdPath: rule.NydusdPath,
BackendType: backendType,
BackendConfig: backendConfig,
BootstrapPath: filepath.Join(rule.WorkDir, dir, "nydus_bootstrap/image/image.boot"),
ExternalBackendConfigPath: filepath.Join(rule.WorkDir, dir, "nydus_bootstrap/image/backend.json"),
ConfigPath: filepath.Join(nydusdDir, "config.json"),
BlobCacheDir: filepath.Join(nydusdDir, "cache"),
APISockPath: filepath.Join(nydusdDir, "api.sock"),
MountPath: mountDir,
Mode: "direct",
DigestValidate: digestValidate,
}
if err := os.MkdirAll(nydusdConfig.BlobCacheDir, 0755); err != nil {
return nil, errors.Wrap(err, "create blob cache directory for nydusd")
}
if err := os.MkdirAll(nydusdConfig.MountPath, 0755); err != nil {
return nil, errors.Wrap(err, "create mountpoint directory of nydus image")
}
ref, err := reference.ParseNormalizedNamed(image.Parsed.Remote.Ref)
if err != nil {
return nil, err
}
if nydusdConfig.BackendType == "" {
nydusdConfig.BackendType = "registry"
if nydusdConfig.BackendConfig == "" {
backendConfig, err := utils.NewRegistryBackendConfig(ref, image.Insecure)
if err != nil {
return nil, errors.Wrap(err, "failed to parse backend configuration")
}
if image.Insecure {
backendConfig.SkipVerify = true
}
if image.Parsed.Remote.IsWithHTTP() {
backendConfig.Scheme = "http"
}
bytes, err := json.Marshal(backendConfig)
if err != nil {
return nil, errors.Wrap(err, "parse registry backend config")
}
nydusdConfig.BackendConfig = string(bytes)
}
}
if image.Parsed.NydusImage.Manifest.ArtifactType == modelspec.ArtifactTypeModelManifest {
if err := utils.BuildRuntimeExternalBackendConfig(nydusdConfig.BackendConfig, nydusdConfig.ExternalBackendConfigPath); err != nil {
return nil, errors.Wrap(err, "failed to build external backend config file")
}
}
nydusd, err := tool.NewNydusd(nydusdConfig)
if err != nil {
return nil, errors.Wrap(err, "create nydusd daemon")
}
if err := nydusd.Mount(); err != nil {
return nil, errors.Wrap(err, "mount nydus image")
}
umount := func() error {
if err := nydusd.Umount(false); err != nil {
return errors.Wrap(err, "umount nydus image")
}
if err := os.RemoveAll(mountDir); err != nil {
logrus.WithError(err).Warnf("cleanup mount directory: %s", mountDir)
}
if err := os.RemoveAll(nydusdDir); err != nil {
logrus.WithError(err).Warnf("cleanup nydusd directory: %s", nydusdDir)
}
return nil
}
return umount, nil
}
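When no backend type is supplied, mountNydusImage falls back to a registry backend config derived from the image reference (utils.NewRegistryBackendConfig above). A rough stand-alone sketch of that fallback; the JSON field names are assumptions modeled on nydusd's registry backend, not taken verbatim from this patch:

import (
	"encoding/json"

	"github.com/distribution/reference"
)

type registryBackendConfig struct {
	Scheme     string `json:"scheme"`
	Host       string `json:"host"`
	Repo       string `json:"repo"`
	SkipVerify bool   `json:"skip_verify"`
}

// defaultRegistryBackend builds a backend config JSON for nydusd from the
// image reference, honoring the insecure and plain-HTTP switches.
func defaultRegistryBackend(ref reference.Named, insecure, plainHTTP bool) (string, error) {
	cfg := registryBackendConfig{
		Scheme:     "https",
		Host:       reference.Domain(ref),
		Repo:       reference.Path(ref),
		SkipVerify: insecure,
	}
	if plainHTTP {
		cfg.Scheme = "http"
	}
	bytes, err := json.Marshal(cfg)
	if err != nil {
		return "", err
	}
	return string(bytes), nil
}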
func (rule *FilesystemRule) mountOCIImage(image *Image, dir string) (func() error, error) {
logrus.WithField("type", tool.CheckImageType(image.Parsed)).WithField("image", image.Parsed.Remote.Ref).Infof("mounting image")
mountPath := filepath.Join(rule.WorkDir, dir, "mnt")
if err := os.MkdirAll(mountPath, 0755); err != nil {
return nil, errors.Wrap(err, "create mountpoint directory")
}
layerBasePath := filepath.Join(rule.WorkDir, dir, "layers")
if err := os.MkdirAll(layerBasePath, 0755); err != nil {
return nil, errors.Wrap(err, "create layer base directory")
}
layers := image.Parsed.OCIImage.Manifest.Layers
worker := utils.NewWorkerPool(WorkerCount, uint(len(layers))) worker := utils.NewWorkerPool(WorkerCount, uint(len(layers)))
for idx := range layers { for idx := range layers {
worker.Put(func(idx int) func() error { worker.Put(func(idx int) func() error {
return func() error { return func() error {
layer := layers[idx] layer := layers[idx]
reader, err := rule.SourceRemote.Pull(context.Background(), layer, true) reader, err := image.Parsed.Remote.Pull(context.Background(), layer, true)
if err != nil { if err != nil {
return errors.Wrap(err, "pull source image layers from the remote registry") return errors.Wrap(err, "pull source image layers from the remote registry")
} }
if err = utils.UnpackTargz(context.Background(), filepath.Join(rule.SourcePath, fmt.Sprintf("layer-%d", idx)), reader, true); err != nil { layerDir := filepath.Join(layerBasePath, fmt.Sprintf("layer-%d", idx))
if err = utils.UnpackTargz(context.Background(), layerDir, reader, true); err != nil {
return errors.Wrap(err, "unpack source image layers") return errors.Wrap(err, "unpack source image layers")
} }
@ -192,102 +323,59 @@ func (rule *FilesystemRule) pullSourceImage() (*tool.Image, error) {
return nil, errors.Wrap(err, "pull source image layers in wait") return nil, errors.Wrap(err, "pull source image layers in wait")
} }
return &tool.Image{ mounter := &tool.Image{
Layers: layers, Layers: layers,
Source: rule.Source, LayerBaseDir: layerBasePath,
SourcePath: rule.SourcePath, Rootfs: mountPath,
Rootfs: rule.SourceMountPath,
}, nil
}
func (rule *FilesystemRule) mountSourceImage() (*tool.Image, error) {
logrus.Infof("Mounting source image to %s", rule.SourceMountPath)
image, err := rule.pullSourceImage()
if err != nil {
return nil, errors.Wrap(err, "pull source image")
} }
if err := image.Umount(); err != nil { if err := mounter.Umount(); err != nil {
return nil, errors.Wrap(err, "umount previous rootfs") return nil, errors.Wrap(err, "umount previous rootfs")
} }
if err := image.Mount(); err != nil { if err := mounter.Mount(); err != nil {
return nil, errors.Wrap(err, "mount source image") return nil, errors.Wrap(err, "mount source image")
} }
return image, nil umount := func() error {
} if err := mounter.Umount(); err != nil {
logrus.WithError(err).Warnf("umount rootfs")
func (rule *FilesystemRule) mountNydusImage() (*tool.Nydusd, error) {
logrus.Infof("Mounting Nydus image to %s", rule.NydusdConfig.MountPath)
if err := os.MkdirAll(rule.NydusdConfig.BlobCacheDir, 0755); err != nil {
return nil, errors.Wrap(err, "create blob cache directory for Nydusd")
}
if err := os.MkdirAll(rule.NydusdConfig.MountPath, 0755); err != nil {
return nil, errors.Wrap(err, "create mountpoint directory of Nydus image")
}
parsed, err := reference.ParseNormalizedNamed(rule.Target)
if err != nil {
return nil, err
}
if rule.NydusdConfig.BackendType == "" {
rule.NydusdConfig.BackendType = "registry"
if rule.NydusdConfig.BackendConfig == "" {
backendConfig, err := utils.NewRegistryBackendConfig(parsed, rule.TargetInsecure)
if err != nil {
return nil, errors.Wrap(err, "failed to parse backend configuration")
}
if rule.TargetInsecure {
backendConfig.SkipVerify = true
}
if rule.PlainHTTP {
backendConfig.Scheme = "http"
}
bytes, err := json.Marshal(backendConfig)
if err != nil {
return nil, errors.Wrap(err, "parse registry backend config")
}
rule.NydusdConfig.BackendConfig = string(bytes)
} }
if err := os.RemoveAll(layerBasePath); err != nil {
logrus.WithError(err).Warnf("cleanup layers directory %s", layerBasePath)
}
return nil
} }
nydusd, err := tool.NewNydusd(rule.NydusdConfig) return umount, nil
if err != nil {
return nil, errors.Wrap(err, "create Nydusd daemon")
}
if err := nydusd.Mount(); err != nil {
return nil, errors.Wrap(err, "mount Nydus image")
}
return nydusd, nil
} }
func (rule *FilesystemRule) verify() error { func (rule *FilesystemRule) mountImage(image *Image, dir string) (func() error, error) {
logrus.Infof("Verifying filesystem for source and Nydus image") if image.Parsed.OCIImage != nil {
return rule.mountOCIImage(image, dir)
} else if image.Parsed.NydusImage != nil {
return rule.mountNydusImage(image, dir)
}
return nil, fmt.Errorf("invalid image for mounting")
}
func (rule *FilesystemRule) verify(sourceRootfs, targetRootfs string) error {
logrus.Infof("comparing filesystem")
sourceNodes := map[string]Node{} sourceNodes := map[string]Node{}
// Concurrently walk the rootfs directory of source and Nydus image // Concurrently walk the rootfs directory of source and nydus image
walkErr := make(chan error) walkErr := make(chan error)
go func() { go func() {
var err error var err error
sourceNodes, err = rule.walk(rule.SourceMountPath) sourceNodes, err = rule.walk(sourceRootfs)
walkErr <- err walkErr <- err
}() }()
nydusNodes, err := rule.walk(rule.NydusdConfig.MountPath) targetNodes, err := rule.walk(targetRootfs)
if err != nil { if err != nil {
return errors.Wrap(err, "walk rootfs of Nydus image") return errors.Wrap(err, "walk rootfs of source image")
} }
if err := <-walkErr; err != nil { if err := <-walkErr; err != nil {
@ -295,54 +383,44 @@ func (rule *FilesystemRule) verify() error {
} }
for path, sourceNode := range sourceNodes { for path, sourceNode := range sourceNodes {
nydusNode, exist := nydusNodes[path] targetNode, exist := targetNodes[path]
if !exist { if !exist {
return fmt.Errorf("File not found in Nydus image: %s", path) return fmt.Errorf("file not found in target image: %s", path)
} }
delete(nydusNodes, path) delete(targetNodes, path)
if path != "/" && !reflect.DeepEqual(sourceNode, nydusNode) { if path != "/" && !reflect.DeepEqual(sourceNode, targetNode) {
return fmt.Errorf("File not match in Nydus image: %s <=> %s", sourceNode.String(), nydusNode.String()) return fmt.Errorf("file not match in target image:\n\t[source] %s\n\t[target] %s", sourceNode.String(), targetNode.String())
} }
} }
for path := range nydusNodes { for path := range targetNodes {
return fmt.Errorf("File not found in source image: %s", path) return fmt.Errorf("file not found in source image: %s", path)
} }
return nil return nil
} }
func (rule *FilesystemRule) Validate() error { func (rule *FilesystemRule) Validate() error {
// Skip filesystem validation if no source image be specified // Skip filesystem validation if no source or target image be specified
if rule.Source == "" { if rule.SourceImage.Parsed == nil || rule.TargetImage.Parsed == nil {
return nil return nil
} }
// Cleanup temporary directories umountSource, err := rule.mountImage(rule.SourceImage, "source")
defer func() {
if err := os.RemoveAll(rule.SourcePath); err != nil {
logrus.WithError(err).Warnf("cleanup source image directory %s", rule.SourcePath)
}
if err := os.RemoveAll(rule.NydusdConfig.MountPath); err != nil {
logrus.WithError(err).Warnf("cleanup nydus image directory %s", rule.NydusdConfig.MountPath)
}
if err := os.RemoveAll(rule.NydusdConfig.BlobCacheDir); err != nil {
logrus.WithError(err).Warnf("cleanup nydus blob cache directory %s", rule.NydusdConfig.BlobCacheDir)
}
}()
image, err := rule.mountSourceImage()
if err != nil { if err != nil {
return err return err
} }
defer image.Umount() defer umountSource()
nydusd, err := rule.mountNydusImage() umountTarget, err := rule.mountImage(rule.TargetImage, "target")
if err != nil { if err != nil {
return err return err
} }
defer nydusd.Umount(false) defer umountTarget()
return rule.verify() return rule.verify(
filepath.Join(rule.WorkDir, "source/mnt"),
filepath.Join(rule.WorkDir, "target/mnt"),
)
} }
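The verify step is a symmetric tree comparison: the source rootfs is walked in a goroutine while the target is walked inline, then the two node maps must match exactly. A self-contained sketch of the pattern (the map value type stands in for Node):

import (
	"fmt"
	"reflect"
)

// compareTrees walks one tree concurrently with the other and requires both
// node maps to agree on every path except the root.
func compareTrees(walk func(root string) (map[string]any, error), sourceRoot, targetRoot string) error {
	walkErr := make(chan error, 1)
	var sourceNodes map[string]any
	go func() {
		var err error
		sourceNodes, err = walk(sourceRoot)
		walkErr <- err
	}()
	targetNodes, err := walk(targetRoot)
	if err != nil {
		return fmt.Errorf("walk target rootfs: %w", err)
	}
	if err := <-walkErr; err != nil {
		return fmt.Errorf("walk source rootfs: %w", err)
	}
	for path, src := range sourceNodes {
		dst, exist := targetNodes[path]
		if !exist {
			return fmt.Errorf("file not found in target image: %s", path)
		}
		delete(targetNodes, path)
		if path != "/" && !reflect.DeepEqual(src, dst) {
			return fmt.Errorf("file not match in target image: %s", path)
		}
	}
	for path := range targetNodes {
		return fmt.Errorf("file not found in source image: %s", path)
	}
	return nil
}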

View File

@ -6,104 +6,132 @@ package rule
import ( import (
"encoding/json" "encoding/json"
"fmt"
"reflect" "reflect"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
) )
// ManifestRule validates manifest format of Nydus image // ManifestRule validates manifest format of nydus image
type ManifestRule struct { type ManifestRule struct {
SourceParsed *parser.Parsed SourceParsed *parser.Parsed
TargetParsed *parser.Parsed TargetParsed *parser.Parsed
MultiPlatform bool
BackendType string
ExpectedArch string
} }
func (rule *ManifestRule) Name() string { func (rule *ManifestRule) Name() string {
return "Manifest" return "manifest"
} }
func (rule *ManifestRule) Validate() error { func (rule *ManifestRule) validateConfig(sourceImage, targetImage *parser.Image) error {
logrus.Infof("Checking Nydus manifest") //nolint:staticcheck
// ignore static check SA1019 here. We have to assign deprecated field.
//
// Skip ArgsEscaped's Check
//
// This field is present only for legacy compatibility with Docker and
// should not be used by new image builders. Nydusify (1.6 and above)
// ignores it, which is an expected behavior.
// Also ignore it in check.
//
// Addition: [ArgsEscaped in spec](https://github.com/opencontainers/image-spec/pull/892)
sourceImage.Config.Config.ArgsEscaped = targetImage.Config.Config.ArgsEscaped
// Ensure the target image represents a manifest list, sourceConfig, err := json.Marshal(sourceImage.Config.Config)
// and it should consist of OCI and Nydus manifest if err != nil {
if rule.MultiPlatform { return errors.New("marshal source image config")
if rule.TargetParsed.Index == nil { }
return errors.New("not found image manifest list") targetConfig, err := json.Marshal(targetImage.Config.Config)
} if err != nil {
foundNydusDesc := false return errors.New("marshal target image config")
foundOCIDesc := false }
for _, desc := range rule.TargetParsed.Index.Manifests { if !reflect.DeepEqual(sourceConfig, targetConfig) {
if desc.Platform == nil { return errors.New("source image config should be equal with target image config")
continue
}
if desc.Platform.Architecture == rule.ExpectedArch && desc.Platform.OS == "linux" {
if utils.IsNydusPlatform(desc.Platform) {
foundNydusDesc = true
} else {
foundOCIDesc = true
}
}
}
if !foundNydusDesc {
return errors.Errorf("not found nydus image of specified platform linux/%s", rule.ExpectedArch)
}
if !foundOCIDesc {
return errors.Errorf("not found OCI image of specified platform linux/%s", rule.ExpectedArch)
}
} }
// Check manifest of Nydus return nil
if rule.TargetParsed.NydusImage == nil { }
return errors.New("invalid nydus image manifest")
func (rule *ManifestRule) validateOCI(image *parser.Image) error {
// Check config diff IDs
layers := image.Manifest.Layers
artifact := image.Manifest.ArtifactType
if artifact != modelspec.ArtifactTypeModelManifest && len(image.Config.RootFS.DiffIDs) != len(layers) {
return fmt.Errorf("invalid diff ids in image config: %d (diff ids) != %d (layers)", len(image.Config.RootFS.DiffIDs), len(layers))
} }
layers := rule.TargetParsed.NydusImage.Manifest.Layers return nil
}
func (rule *ManifestRule) validateNydus(image *parser.Image) error {
// Check bootstrap and blob layers
layers := image.Manifest.Layers
manifestArtifact := image.Manifest.ArtifactType
for i, layer := range layers { for i, layer := range layers {
if i == len(layers)-1 { if i == len(layers)-1 {
if layer.Annotations[utils.LayerAnnotationNydusBootstrap] != "true" { if layer.Annotations[utils.LayerAnnotationNydusBootstrap] != "true" {
return errors.New("invalid bootstrap layer in nydus image manifest") return errors.New("invalid bootstrap layer in nydus image manifest")
} }
if manifestArtifact == modelspec.ArtifactTypeModelManifest && layer.Annotations[utils.LayerAnnotationNydusArtifactType] != manifestArtifact {
return errors.New("invalid manifest artifact type in nydus image manifest")
}
} else { } else {
if layer.MediaType != utils.MediaTypeNydusBlob || if manifestArtifact != modelspec.ArtifactTypeModelManifest &&
layer.Annotations[utils.LayerAnnotationNydusBlob] != "true" { (layer.MediaType != utils.MediaTypeNydusBlob ||
layer.Annotations[utils.LayerAnnotationNydusBlob] != "true") {
return errors.New("invalid blob layer in nydus image manifest") return errors.New("invalid blob layer in nydus image manifest")
} }
} }
} }
// Check Nydus image config with OCI image // Check config diff IDs
if rule.SourceParsed.OCIImage != nil { if manifestArtifact != modelspec.ArtifactTypeModelManifest && len(image.Config.RootFS.DiffIDs) != len(layers) {
return fmt.Errorf("invalid diff ids in image config: %d (diff ids) != %d (layers)", len(image.Config.RootFS.DiffIDs), len(layers))
}
//nolint:staticcheck return nil
// ignore static check SA1019 here. We have to assign deprecated field. }
//
// Skip ArgsEscaped's Check
//
// This field is present only for legacy compatibility with Docker and
// should not be used by new image builders. Nydusify (1.6 and above)
// ignores it, which is an expected behavior.
// Also ignore it in check.
//
// Addition: [ArgsEscaped in spec](https://github.com/opencontainers/image-spec/pull/892)
rule.TargetParsed.NydusImage.Config.Config.ArgsEscaped = rule.SourceParsed.OCIImage.Config.Config.ArgsEscaped
ociConfig, err := json.Marshal(rule.SourceParsed.OCIImage.Config.Config) func (rule *ManifestRule) validate(parsed *parser.Parsed) error {
if err != nil { if parsed == nil {
return errors.New("marshal oci image config") return nil
}
logrus.WithField("type", tool.CheckImageType(parsed)).WithField("image", parsed.Remote.Ref).Infof("checking manifest")
if parsed.OCIImage != nil {
return errors.Wrap(rule.validateOCI(parsed.OCIImage), "invalid OCI image manifest")
} else if parsed.NydusImage != nil {
return errors.Wrap(rule.validateNydus(parsed.NydusImage), "invalid nydus image manifest")
}
return errors.New("not found valid image")
}
func (rule *ManifestRule) Validate() error {
if err := rule.validate(rule.SourceParsed); err != nil {
return errors.Wrap(err, "source image: invalid manifest")
}
if err := rule.validate(rule.TargetParsed); err != nil {
return errors.Wrap(err, "target image: invalid manifest")
}
if rule.SourceParsed != nil && rule.TargetParsed != nil {
sourceImage := rule.SourceParsed.OCIImage
if sourceImage == nil {
sourceImage = rule.SourceParsed.NydusImage
} }
nydusConfig, err := json.Marshal(rule.TargetParsed.NydusImage.Config.Config) targetImage := rule.TargetParsed.OCIImage
if err != nil { if targetImage == nil {
return errors.New("marshal nydus image config") targetImage = rule.TargetParsed.NydusImage
} }
if !reflect.DeepEqual(ociConfig, nydusConfig) { if err := rule.validateConfig(sourceImage, targetImage); err != nil {
return errors.New("nydus image config should be equal with oci image config") return fmt.Errorf("validate image config: %v", err)
} }
} }
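The config comparison above first copies the deprecated ArgsEscaped flag across so it cannot cause a spurious mismatch, then compares the marshaled configs. A compact sketch of the same idea; using bytes.Equal on the marshaled output is an equivalent choice, not the patch's literal code:

import (
	"bytes"
	"encoding/json"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// configsEqual neutralizes the legacy ArgsEscaped field and then requires the
// two image configs to serialize to identical JSON.
func configsEqual(source, target ocispec.Image) (bool, error) {
	//nolint:staticcheck // deliberately touching the deprecated field, as the rule does
	source.Config.ArgsEscaped = target.Config.ArgsEscaped
	sourceJSON, err := json.Marshal(source.Config)
	if err != nil {
		return false, err
	}
	targetJSON, err := json.Marshal(target.Config)
	if err != nil {
		return false, err
	}
	return bytes.Equal(sourceJSON, targetJSON), nil
}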

View File

@ -8,19 +8,21 @@ import (
"testing" "testing"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" ocispec "github.com/opencontainers/image-spec/specs-go/v1"
) )
func TestManifestName(t *testing.T) { func TestManifestName(t *testing.T) {
rule := ManifestRule{} rule := ManifestRule{}
require.Equal(t, "Manifest", rule.Name()) require.Equal(t, "manifest", rule.Name())
} }
func TestManifestRuleValidate_IgnoreDeprecatedField(t *testing.T) { func TestManifestRuleValidate_IgnoreDeprecatedField(t *testing.T) {
source := &parser.Parsed{ source := &parser.Parsed{
Remote: &remote.Remote{},
OCIImage: &parser.Image{ OCIImage: &parser.Image{
Config: ocispec.Image{ Config: ocispec.Image{
Config: ocispec.ImageConfig{ Config: ocispec.ImageConfig{
@ -30,6 +32,7 @@ func TestManifestRuleValidate_IgnoreDeprecatedField(t *testing.T) {
}, },
} }
target := &parser.Parsed{ target := &parser.Parsed{
Remote: &remote.Remote{},
NydusImage: &parser.Image{ NydusImage: &parser.Image{
Config: ocispec.Image{ Config: ocispec.Image{
Config: ocispec.ImageConfig{ Config: ocispec.ImageConfig{
@ -47,61 +50,11 @@ func TestManifestRuleValidate_IgnoreDeprecatedField(t *testing.T) {
require.Nil(t, rule.Validate()) require.Nil(t, rule.Validate())
} }
func TestManifestRuleValidate_MultiPlatform(t *testing.T) {
source := &parser.Parsed{
OCIImage: &parser.Image{},
}
target := &parser.Parsed{
NydusImage: &parser.Image{},
}
rule := ManifestRule{
MultiPlatform: true,
ExpectedArch: "amd64",
SourceParsed: source,
TargetParsed: target,
}
require.Error(t, rule.Validate())
require.Contains(t, rule.Validate().Error(), "not found image manifest list")
rule.TargetParsed.Index = &ocispec.Index{}
require.Error(t, rule.Validate())
require.Contains(t, rule.Validate().Error(), "not found nydus image of specified platform linux")
rule.TargetParsed.Index = &ocispec.Index{
Manifests: []ocispec.Descriptor{
{
MediaType: utils.MediaTypeNydusBlob,
Platform: &ocispec.Platform{
Architecture: "amd64",
OS: "linux",
OSFeatures: []string{utils.ManifestOSFeatureNydus},
},
},
},
}
require.Error(t, rule.Validate())
require.Contains(t, rule.Validate().Error(), "not found OCI image of specified platform linux")
rule.TargetParsed.Index.Manifests = append(rule.TargetParsed.Index.Manifests, ocispec.Descriptor{
MediaType: "application/vnd.oci.image.manifest.v1+json",
Platform: &ocispec.Platform{
Architecture: "amd64",
OS: "linux",
},
})
require.NoError(t, rule.Validate())
}
func TestManifestRuleValidate_TargetLayer(t *testing.T) { func TestManifestRuleValidate_TargetLayer(t *testing.T) {
rule := ManifestRule{ rule := ManifestRule{}
SourceParsed: &parser.Parsed{},
TargetParsed: &parser.Parsed{},
}
require.Error(t, rule.Validate())
require.Contains(t, rule.Validate().Error(), "invalid nydus image manifest")
rule.TargetParsed = &parser.Parsed{ rule.TargetParsed = &parser.Parsed{
Remote: &remote.Remote{},
NydusImage: &parser.Image{ NydusImage: &parser.Image{
Manifest: ocispec.Manifest{ Manifest: ocispec.Manifest{
MediaType: "application/vnd.docker.distribution.manifest.v2+json", MediaType: "application/vnd.docker.distribution.manifest.v2+json",
@ -147,6 +100,11 @@ func TestManifestRuleValidate_TargetLayer(t *testing.T) {
require.Error(t, rule.Validate()) require.Error(t, rule.Validate())
require.Contains(t, rule.Validate().Error(), "invalid bootstrap layer in nydus image manifest") require.Contains(t, rule.Validate().Error(), "invalid bootstrap layer in nydus image manifest")
rule.TargetParsed.NydusImage.Config.RootFS.DiffIDs = []digest.Digest{
"sha256:09845cce1d983b158d4865fc37c23bbfb892d4775c786e8114d3cf868975c059",
"sha256:bec98c9e3dce739877b8f5fe1cddd339de1db2b36c20995d76f6265056dbdb08",
}
rule.TargetParsed.NydusImage.Manifest.Layers = []ocispec.Descriptor{ rule.TargetParsed.NydusImage.Manifest.Layers = []ocispec.Descriptor{
{ {
MediaType: "application/vnd.oci.image.layer.nydus.blob.v1", MediaType: "application/vnd.oci.image.layer.nydus.blob.v1",

View File

@ -29,7 +29,7 @@ func NewBuilder(binaryPath string) *Builder {
} }
} }
// Check calls `nydus-image check` to parse Nydus bootstrap // Check calls `nydus-image check` to parse nydus bootstrap
// and output debug information to specified JSON file. // and output debug information to specified JSON file.
func (builder *Builder) Check(option BuilderOption) error { func (builder *Builder) Check(option BuilderOption) error {
args := []string{ args := []string{

View File

@ -11,6 +11,7 @@ import (
"strings" "strings"
"github.com/containerd/containerd/mount" "github.com/containerd/containerd/mount"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
@ -45,11 +46,19 @@ func mkMounts(dirs []string) []mount.Mount {
} }
} }
func CheckImageType(parsed *parser.Parsed) string {
if parsed.NydusImage != nil {
return "nydus"
} else if parsed.OCIImage != nil {
return "oci"
}
return "unknown"
}
type Image struct { type Image struct {
Layers []ocispec.Descriptor Layers []ocispec.Descriptor
Source string LayerBaseDir string
SourcePath string Rootfs string
Rootfs string
} }
// Mount mounts rootfs of OCI image. // Mount mounts rootfs of OCI image.
@ -62,7 +71,7 @@ func (image *Image) Mount() error {
count := len(image.Layers) count := len(image.Layers)
for idx := range image.Layers { for idx := range image.Layers {
layerName := fmt.Sprintf("layer-%d", count-idx-1) layerName := fmt.Sprintf("layer-%d", count-idx-1)
layerDir := filepath.Join(image.SourcePath, layerName) layerDir := filepath.Join(image.LayerBaseDir, layerName)
dirs = append(dirs, strings.ReplaceAll(layerDir, ":", "\\:")) dirs = append(dirs, strings.ReplaceAll(layerDir, ":", "\\:"))
} }

View File

@ -14,24 +14,28 @@ import (
"net/http" "net/http"
"os" "os"
"os/exec" "os/exec"
"strings"
"text/template" "text/template"
"time" "time"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
) )
type NydusdConfig struct { type NydusdConfig struct {
EnablePrefetch bool EnablePrefetch bool
NydusdPath string NydusdPath string
BootstrapPath string BootstrapPath string
ConfigPath string ConfigPath string
BackendType string BackendType string
BackendConfig string BackendConfig string
BlobCacheDir string ExternalBackendConfigPath string
APISockPath string ExternalBackendProxyCacheDir string
MountPath string BlobCacheDir string
Mode string APISockPath string
DigestValidate bool MountPath string
Mode string
DigestValidate bool
} }
// Nydusd runs nydusd binary. // Nydusd runs nydusd binary.
@ -50,6 +54,9 @@ var configTpl = `
"type": "{{.BackendType}}", "type": "{{.BackendType}}",
"config": {{.BackendConfig}} "config": {{.BackendConfig}}
}, },
"external_backend": {
"config_path": "{{.ExternalBackendConfigPath}}"
},
"cache": { "cache": {
"type": "blobcache", "type": "blobcache",
"config": { "config": {
@ -178,6 +185,7 @@ func (nydusd *Nydusd) Mount() error {
} }
cmd := exec.Command(nydusd.NydusdPath, args...) cmd := exec.Command(nydusd.NydusdPath, args...)
logrus.Debugf("Command: %s %s", nydusd.NydusdPath, strings.Join(args, " "))
cmd.Stdout = os.Stdout cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr cmd.Stderr = os.Stderr

View File

@ -20,7 +20,7 @@ import (
originprovider "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider" originprovider "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/goharbor/acceleration-service/pkg/remote" "github.com/goharbor/acceleration-service/pkg/remote"
"github.com/containerd/nydus-snapshotter/pkg/converter" "github.com/BraveY/snapshotter-converter/converter"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
"github.com/dustin/go-humanize" "github.com/dustin/go-humanize"

View File

@ -19,10 +19,13 @@ import (
"sync" "sync"
"time" "time"
"github.com/containerd/containerd/labels"
"github.com/BraveY/snapshotter-converter/converter"
"github.com/containerd/containerd"
"github.com/containerd/containerd/content/local" "github.com/containerd/containerd/content/local"
"github.com/containerd/containerd/namespaces" "github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/reference/docker" "github.com/containerd/containerd/reference/docker"
"github.com/containerd/nydus-snapshotter/pkg/converter"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/committer/diff" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/committer/diff"
parserPkg "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser" parserPkg "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
@ -35,6 +38,7 @@ import (
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
// Opt defines the options for committing container changes
type Opt struct { type Opt struct {
WorkDir string WorkDir string
ContainerdAddress string ContainerdAddress string
@ -59,6 +63,7 @@ type Committer struct {
manager *Manager manager *Manager
} }
// NewCommitter creates a new Committer instance
func NewCommitter(opt Opt) (*Committer, error) { func NewCommitter(opt Opt) (*Committer, error) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil { if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return nil, errors.Wrap(err, "prepare work dir") return nil, errors.Wrap(err, "prepare work dir")
@ -73,6 +78,7 @@ func NewCommitter(opt Opt) (*Committer, error) {
if err != nil { if err != nil {
return nil, errors.Wrap(err, "new container manager") return nil, errors.Wrap(err, "new container manager")
} }
return &Committer{ return &Committer{
workDir: workDir, workDir: workDir,
builder: opt.NydusImagePath, builder: opt.NydusImagePath,
@ -81,6 +87,11 @@ func NewCommitter(opt Opt) (*Committer, error) {
} }
func (cm *Committer) Commit(ctx context.Context, opt Opt) error { func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
// Resolve container ID first
if err := cm.resolveContainerID(ctx, &opt); err != nil {
return errors.Wrap(err, "failed to resolve container ID")
}
ctx = namespaces.WithNamespace(ctx, opt.Namespace) ctx = namespaces.WithNamespace(ctx, opt.Namespace)
targetRef, err := ValidateRef(opt.TargetRef) targetRef, err := ValidateRef(opt.TargetRef)
if err != nil { if err != nil {
@ -92,9 +103,11 @@ func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
return errors.Wrap(err, "inspect container") return errors.Wrap(err, "inspect container")
} }
originalSourceRef := inspect.Image
logrus.Infof("pulling base bootstrap") logrus.Infof("pulling base bootstrap")
start := time.Now() start := time.Now()
image, committedLayers, err := cm.pullBootstrap(ctx, inspect.Image, "bootstrap-base", opt.SourceInsecure) image, committedLayers, err := cm.pullBootstrap(ctx, originalSourceRef, "bootstrap-base", opt.SourceInsecure)
if err != nil { if err != nil {
return errors.Wrap(err, "pull base bootstrap") return errors.Wrap(err, "pull base bootstrap")
} }
@ -107,6 +120,16 @@ func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
return errors.Wrap(err, "obtain bootstrap FsVersion and Compressor") return errors.Wrap(err, "obtain bootstrap FsVersion and Compressor")
} }
// Push lower blobs
for idx, layer := range image.Manifest.Layers {
if layer.MediaType == utils.MediaTypeNydusBlob {
name := fmt.Sprintf("blob-mount-%d", idx)
if _, err := cm.pushBlob(ctx, name, layer.Digest, originalSourceRef, targetRef, opt.TargetInsecure, image); err != nil {
return errors.Wrap(err, "push lower blob")
}
}
}
mountList := NewMountList() mountList := NewMountList()
var upperBlob *Blob var upperBlob *Blob
@ -123,7 +146,7 @@ func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
} }
logrus.Infof("pushing blob for upper") logrus.Infof("pushing blob for upper")
start := time.Now() start := time.Now()
upperBlobDesc, err := cm.pushBlob(ctx, "blob-upper", *upperBlobDigest, opt.TargetRef, opt.TargetInsecure) upperBlobDesc, err := cm.pushBlob(ctx, "blob-upper", *upperBlobDigest, originalSourceRef, targetRef, opt.TargetInsecure, image)
if err != nil { if err != nil {
return errors.Wrap(err, "push upper blob") return errors.Wrap(err, "push upper blob")
} }
@ -150,7 +173,7 @@ func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
} }
logrus.Infof("pushing blob for mount") logrus.Infof("pushing blob for mount")
start := time.Now() start := time.Now()
mountBlobDesc, err := cm.pushBlob(ctx, name, *mountBlobDigest, opt.TargetRef, opt.TargetInsecure) mountBlobDesc, err := cm.pushBlob(ctx, name, *mountBlobDigest, originalSourceRef, targetRef, opt.TargetInsecure, image)
if err != nil { if err != nil {
return errors.Wrap(err, "push mount blob") return errors.Wrap(err, "push mount blob")
} }
@ -172,7 +195,7 @@ func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
appendedEg := errgroup.Group{} appendedEg := errgroup.Group{}
appendedMutex := sync.Mutex{} appendedMutex := sync.Mutex{}
if len(mountList.paths) > 0 { if len(mountList.paths) > 0 {
logrus.Infof("need commit appened mount path: %s", strings.Join(mountList.paths, ", ")) logrus.Infof("need commit appended mount path: %s", strings.Join(mountList.paths, ", "))
} }
for idx := range mountList.paths { for idx := range mountList.paths {
func(idx int) { func(idx int) {
@ -188,7 +211,7 @@ func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
} }
logrus.Infof("pushing blob for appended mount") logrus.Infof("pushing blob for appended mount")
start := time.Now() start := time.Now()
mountBlobDesc, err := cm.pushBlob(ctx, name, *mountBlobDigest, opt.TargetRef, opt.TargetInsecure) mountBlobDesc, err := cm.pushBlob(ctx, name, *mountBlobDigest, originalSourceRef, targetRef, opt.TargetInsecure, image)
if err != nil { if err != nil {
return errors.Wrap(err, "push appended mount blob") return errors.Wrap(err, "push appended mount blob")
} }
@ -207,6 +230,14 @@ func (cm *Committer) Commit(ctx context.Context, opt Opt) error {
return appendedEg.Wait() return appendedEg.Wait()
} }
// Ensure filesystem changes are written to disk before committing
// This prevents issues where changes are still in memory buffers
// and not yet visible in the overlay filesystem's upper directory
logrus.Infof("syncing filesystem before commit")
if err := cm.syncFilesystem(ctx, opt.ContainerID); err != nil {
return errors.Wrap(err, "failed to sync filesystem")
}
if err := cm.pause(ctx, opt.ContainerID, commit); err != nil { if err := cm.pause(ctx, opt.ContainerID, commit); err != nil {
return errors.Wrap(err, "pause container to commit") return errors.Wrap(err, "pause container to commit")
} }
@ -261,7 +292,7 @@ func (cm *Committer) pullBootstrap(ctx context.Context, ref, bootstrapName strin
_commitBlobs := bootstrapDesc.Annotations[utils.LayerAnnotationNydusCommitBlobs] _commitBlobs := bootstrapDesc.Annotations[utils.LayerAnnotationNydusCommitBlobs]
if _commitBlobs != "" { if _commitBlobs != "" {
committedLayers = len(strings.Split(_commitBlobs, ",")) committedLayers = len(strings.Split(_commitBlobs, ","))
logrus.Infof("detected the committed layers: %d", committedLayers) logrus.Infof("detected committed layers: %d", committedLayers)
} }
target := filepath.Join(cm.workDir, bootstrapName) target := filepath.Join(cm.workDir, bootstrapName)
@ -269,12 +300,21 @@ func (cm *Committer) pullBootstrap(ctx context.Context, ref, bootstrapName strin
if err != nil { if err != nil {
return nil, 0, errors.Wrap(err, "pull bootstrap layer") return nil, 0, errors.Wrap(err, "pull bootstrap layer")
} }
defer reader.Close() var closeErr error
defer func() {
if err := reader.Close(); err != nil {
closeErr = errors.Wrap(err, "close bootstrap reader")
}
}()
if err := utils.UnpackFile(reader, utils.BootstrapFileNameInLayer, target); err != nil { if err := utils.UnpackFile(reader, utils.BootstrapFileNameInLayer, target); err != nil {
return nil, 0, errors.Wrap(err, "unpack bootstrap layer") return nil, 0, errors.Wrap(err, "unpack bootstrap layer")
} }
if closeErr != nil {
return nil, 0, closeErr
}
return parsed.NydusImage, committedLayers, nil return parsed.NydusImage, committedLayers, nil
} }
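The change above keeps the deferred Close but stops discarding its error. The same pattern as a stand-alone sketch (function names are ours):

import (
	"fmt"
	"io"
)

// unpackWithCloseCheck unpacks from a reader and surfaces a Close failure
// instead of silently dropping it, mirroring the closeErr handling above.
func unpackWithCloseCheck(open func() (io.ReadCloser, error), unpack func(io.Reader) error) (err error) {
	reader, err := open()
	if err != nil {
		return err
	}
	defer func() {
		if cerr := reader.Close(); cerr != nil && err == nil {
			err = fmt.Errorf("close bootstrap reader: %w", cerr)
		}
	}()
	return unpack(reader)
}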
@ -315,37 +355,153 @@ func (cm *Committer) commitUpperByDiff(ctx context.Context, appendMount func(pat
return &blobDigest, nil return &blobDigest, nil
} }
func (cm *Committer) pushBlob(ctx context.Context, blobName string, blobDigest digest.Digest, targetRef string, insecure bool) (*ocispec.Descriptor, error) { // getDistributionSourceLabel returns the source label key and value for the image distribution
blobRa, err := local.OpenReader(filepath.Join(cm.workDir, blobName)) func getDistributionSourceLabel(sourceRef string) (string, string) {
named, err := docker.ParseDockerRef(sourceRef)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "open reader for upper blob") return "", ""
}
host := docker.Domain(named)
labelValue := docker.Path(named)
labelKey := fmt.Sprintf("%s.%s", labels.LabelDistributionSource, host)
return labelKey, labelValue
}
// pushBlob pushes a blob to the target registry
func (cm *Committer) pushBlob(ctx context.Context, blobName string, blobDigest digest.Digest, sourceRef string, targetRef string, insecure bool, image *parserPkg.Image) (*ocispec.Descriptor, error) {
logrus.Infof("pushing blob: %s, digest: %s", blobName, blobDigest)
targetRemoter, err := provider.DefaultRemote(targetRef, insecure)
if err != nil {
return nil, errors.Wrap(err, "create target remote")
} }
blobDesc := ocispec.Descriptor{ // Check if this is a lower blob (starts with "blob-mount-" but not in workDir)
Digest: blobDigest, isLowerBlob := strings.HasPrefix(blobName, "blob-mount-")
Size: blobRa.Size(), blobPath := filepath.Join(cm.workDir, blobName)
MediaType: utils.MediaTypeNydusBlob,
Annotations: map[string]string{ var blobDesc ocispec.Descriptor
var reader io.Reader
var readerCloser io.Closer
var closeErr error
defer func() {
if readerCloser != nil {
if err := readerCloser.Close(); err != nil {
closeErr = errors.Wrap(err, "close blob reader")
}
}
}()
if isLowerBlob {
logrus.Debugf("handling lower blob: %s", blobName)
// For lower blobs, use remote access
blobDesc = ocispec.Descriptor{
Digest: blobDigest,
MediaType: utils.MediaTypeNydusBlob,
}
// Find corresponding layer in source manifest to get size
var sourceLayer *ocispec.Descriptor
for _, layer := range image.Manifest.Layers {
if layer.Digest == blobDigest {
sourceLayer = &layer
blobDesc.Size = layer.Size
break
}
}
if sourceLayer == nil {
return nil, fmt.Errorf("layer not found in source image: %s", blobDigest)
}
if blobDesc.Size <= 0 {
return nil, fmt.Errorf("invalid blob size: %d", blobDesc.Size)
}
logrus.Debugf("lower blob size: %d", blobDesc.Size)
// Use source image remoter to get blob data
sourceRemoter, err := provider.DefaultRemote(sourceRef, insecure)
if err != nil {
return nil, errors.Wrap(err, "create source remote")
}
// Get ReaderAt for remote blob
readerAt, err := sourceRemoter.ReaderAt(ctx, *sourceLayer, true)
if err != nil {
return nil, errors.Wrap(err, "create remote reader for lower blob")
}
if readerAt == nil {
return nil, fmt.Errorf("got nil reader for lower blob: %s", blobName)
}
reader = io.NewSectionReader(readerAt, 0, readerAt.Size())
if closer, ok := readerAt.(io.Closer); ok {
readerCloser = closer
}
// Add required annotations
blobDesc.Annotations = map[string]string{
utils.LayerAnnotationUncompressed: blobDigest.String(), utils.LayerAnnotationUncompressed: blobDigest.String(),
utils.LayerAnnotationNydusBlob: "true", utils.LayerAnnotationNydusBlob: "true",
}, }
} else {
logrus.Debugf("handling local blob: %s", blobName)
// Handle local blob
blobRa, err := local.OpenReader(blobPath)
if err != nil {
return nil, errors.Wrap(err, "open reader for blob")
}
if blobRa == nil {
return nil, fmt.Errorf("got nil reader for local blob: %s", blobName)
}
size := blobRa.Size()
if size <= 0 {
blobRa.Close()
return nil, fmt.Errorf("invalid local blob size: %d", size)
}
logrus.Debugf("local blob size: %d", size)
reader = io.NewSectionReader(blobRa, 0, size)
readerCloser = blobRa
blobDesc = ocispec.Descriptor{
Digest: blobDigest,
Size: size,
MediaType: utils.MediaTypeNydusBlob,
Annotations: map[string]string{
utils.LayerAnnotationUncompressed: blobDigest.String(),
utils.LayerAnnotationNydusBlob: "true",
},
}
} }
remoter, err := provider.DefaultRemote(targetRef, insecure) // Add distribution source label
if err != nil { distributionSourceLabel, distributionSourceLabelValue := getDistributionSourceLabel(sourceRef)
return nil, errors.Wrap(err, "create remote") if distributionSourceLabel != "" {
if blobDesc.Annotations == nil {
blobDesc.Annotations = make(map[string]string)
}
blobDesc.Annotations[distributionSourceLabel] = distributionSourceLabelValue
} }
if err := remoter.Push(ctx, blobDesc, true, io.NewSectionReader(blobRa, 0, blobRa.Size())); err != nil { logrus.Debugf("pushing blob: digest=%s, size=%d", blobDesc.Digest, blobDesc.Size)
if err := targetRemoter.Push(ctx, blobDesc, true, reader); err != nil {
if utils.RetryWithHTTP(err) { if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err) targetRemoter.MaybeWithHTTP(err)
if err := remoter.Push(ctx, blobDesc, true, io.NewSectionReader(blobRa, 0, blobRa.Size())); err != nil { logrus.Debugf("retrying push with HTTP")
return nil, errors.Wrap(err, "push blob") if err := targetRemoter.Push(ctx, blobDesc, true, reader); err != nil {
return nil, errors.Wrap(err, "push blob with HTTP")
} }
} else { } else {
return nil, errors.Wrap(err, "push blob") return nil, errors.Wrap(err, "push blob")
} }
} }
if closeErr != nil {
return nil, closeErr
}
return &blobDesc, nil return &blobDesc, nil
} }
@ -367,6 +523,36 @@ func (cm *Committer) pause(ctx context.Context, containerID string, handle func(
return cm.manager.UnPause(ctx, containerID) return cm.manager.UnPause(ctx, containerID)
} }
// syncFilesystem forces filesystem sync to ensure all changes are written to disk.
// This is crucial for overlay filesystems where changes may still be in memory
// buffers and not yet visible in the upper directory when committing.
func (cm *Committer) syncFilesystem(ctx context.Context, containerID string) error {
inspect, err := cm.manager.Inspect(ctx, containerID)
if err != nil {
return errors.Wrap(err, "inspect container for sync")
}
// Use nsenter to execute sync command in the container's namespace
config := &Config{
Mount: true,
PID: true,
Target: inspect.Pid,
}
stderr, err := config.ExecuteContext(ctx, io.Discard, "sync")
if err != nil {
return errors.Wrap(err, fmt.Sprintf("execute sync in container namespace: %s", strings.TrimSpace(stderr)))
}
// Also sync the host filesystem to ensure overlay changes are written
cmd := exec.CommandContext(ctx, "sync")
if err := cmd.Run(); err != nil {
return errors.Wrap(err, "execute host sync")
}
return nil
}
func (cm *Committer) pushManifest( func (cm *Committer) pushManifest(
ctx context.Context, nydusImage parserPkg.Image, bootstrapDiffID digest.Digest, targetRef, bootstrapName, fsversion string, upperBlob *Blob, mountBlobs []Blob, insecure bool, ctx context.Context, nydusImage parserPkg.Image, bootstrapDiffID digest.Digest, targetRef, bootstrapName, fsversion string, upperBlob *Blob, mountBlobs []Blob, insecure bool,
) error { ) error {
@ -703,3 +889,52 @@ func (cm *Committer) obtainBootStrapInfo(ctx context.Context, BootstrapName stri
} }
return output.FsVersion, strings.ToLower(output.Compressor), nil return output.FsVersion, strings.ToLower(output.Compressor), nil
} }
// resolveContainerID resolves the container ID to its full ID
func (cm *Committer) resolveContainerID(ctx context.Context, opt *Opt) error {
// If the ID is already a full ID (64 characters), return it directly
if len(opt.ContainerID) == 64 {
logrus.Debugf("container ID %s is already a full ID", opt.ContainerID)
return nil
}
logrus.Infof("resolving container ID prefix %s to full ID", opt.ContainerID)
var (
fullID string
matchCount int
)
// Create containerd client directly
client, err := containerd.New(cm.manager.address)
if err != nil {
return fmt.Errorf("failed to create containerd client: %w", err)
}
defer client.Close()
// Set namespace in context
ctx = namespaces.WithNamespace(ctx, opt.Namespace)
walker := NewContainerWalker(client, func(_ context.Context, found Found) error {
fullID = found.Container.ID()
matchCount = found.MatchCount
return nil
})
n, err := walker.Walk(ctx, opt.ContainerID)
if err != nil {
return fmt.Errorf("failed to walk containers: %w", err)
}
if n == 0 {
return fmt.Errorf("no container found with ID : %s", opt.ContainerID)
}
if matchCount > 1 {
return fmt.Errorf("ambiguous container ID '%s' matches multiple containers, please provide a more specific ID", opt.ContainerID)
}
opt.ContainerID = fullID
logrus.Infof("resolved container ID to full ID: %s", fullID)
return nil
}

View File

@ -0,0 +1,70 @@
// Ported from nerdctl project, copyright The nerdctl Authors.
// https://github.com/containerd/nerdctl/blob/31b4e49db76382567eea223a7e8562e0213ef05f/pkg/idutil/containerwalker/containerwalker.go#L53
package committer
import (
"context"
"fmt"
"regexp"
"strings"
"github.com/containerd/containerd"
"github.com/sirupsen/logrus"
)
type Found struct {
Container containerd.Container
Req string // The raw request string. name, short ID, or long ID.
MatchIndex int // Begins with 0, up to MatchCount - 1.
MatchCount int // 1 on exact match. > 1 on ambiguous match. Never be <= 0.
}
type OnFound func(ctx context.Context, found Found) error
type ContainerWalker struct {
Client *containerd.Client
OnFound OnFound
}
func NewContainerWalker(client *containerd.Client, onFound OnFound) *ContainerWalker {
return &ContainerWalker{
Client: client,
OnFound: onFound,
}
}
// Walk walks containers and calls w.OnFound.
// Req is name, short ID, or long ID.
// Returns the number of the found entries.
func (w *ContainerWalker) Walk(ctx context.Context, req string) (int, error) {
logrus.Debugf("walking containers with request: %s", req)
if strings.HasPrefix(req, "k8s://") {
return -1, fmt.Errorf("specifying \"k8s://...\" form is not supported (Hint: specify ID instead): %q", req)
}
filters := []string{
fmt.Sprintf("id~=^%s.*$", regexp.QuoteMeta(req)),
}
containers, err := w.Client.Containers(ctx, filters...)
if err != nil {
return -1, err
}
matchCount := len(containers)
for i, c := range containers {
logrus.Debugf("found match for container ID: %s", c.ID())
f := Found{
Container: c,
Req: req,
MatchIndex: i,
MatchCount: matchCount,
}
if e := w.OnFound(ctx, f); e != nil {
return -1, e
}
}
return matchCount, nil
}
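A hypothetical usage of the walker above, matching how resolveContainerID consumes it: resolve a short ID prefix to exactly one full container ID and fail on ambiguity.

// resolveFullID is an illustrative helper built on ContainerWalker; it is not
// part of the patch itself.
func resolveFullID(ctx context.Context, client *containerd.Client, prefix string) (string, error) {
	var fullID string
	var matchCount int
	walker := NewContainerWalker(client, func(_ context.Context, found Found) error {
		fullID = found.Container.ID()
		matchCount = found.MatchCount
		return nil
	})
	n, err := walker.Walk(ctx, prefix)
	if err != nil {
		return "", err
	}
	if n == 0 {
		return "", fmt.Errorf("no container found with ID: %s", prefix)
	}
	if matchCount > 1 {
		return "", fmt.Errorf("ambiguous container ID %q matches multiple containers", prefix)
	}
	return fullID, nil
}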

View File

@ -236,14 +236,14 @@ func Changes(ctx context.Context, appendMount func(path string), withPaths []str
}
// checkDelete checks if the specified file is a whiteout
- func checkDelete(_ string, path string, base string, f os.FileInfo) (delete, skip bool, _ error) {
+ func checkDelete(_ string, path string, base string, f os.FileInfo) (isDelete, skip bool, _ error) {
if f.Mode()&os.ModeCharDevice != 0 {
if _, ok := f.Sys().(*syscall.Stat_t); ok {
- maj, min, err := devices.DeviceInfo(f)
+ maj, minor, err := devices.DeviceInfo(f)
if err != nil {
return false, false, errors.Wrapf(err, "failed to get device info")
}
- if maj == 0 && min == 0 {
+ if maj == 0 && minor == 0 {
// This file is a whiteout (char 0/0) that indicates this is deleted from the base
if _, err := os.Lstat(filepath.Join(base, path)); err != nil {
if !os.IsNotExist(err) {

View File

@ -10,18 +10,18 @@ import (
)
var defaultCompactConfig = &CompactConfig{
- MinUsedRatio: 5,
- CompactBlobSize: 10485760,
- MaxCompactSize: 104857600,
- LayersToCompact: 32,
+ MinUsedRatio: "5",
+ CompactBlobSize: "10485760",
+ MaxCompactSize: "104857600",
+ LayersToCompact: "32",
}
type CompactConfig struct {
- MinUsedRatio int `json:"min_used_ratio"`
- CompactBlobSize int `json:"compact_blob_size"`
- MaxCompactSize int `json:"max_compact_size"`
- LayersToCompact int `json:"layers_to_compact"`
- BlobsDir string `json:"blobs_dir,omitempty"`
+ MinUsedRatio string
+ CompactBlobSize string
+ MaxCompactSize string
+ LayersToCompact string
+ BlobsDir string
}
func (cfg *CompactConfig) Dumps(filePath string) error {

@ -81,11 +81,6 @@ func (compactor *Compactor) Compact(bootstrapPath, chunkDict, backendType, backe
if err := os.Remove(targetBootstrap); err != nil && !os.IsNotExist(err) {
return "", errors.Wrap(err, "failed to delete old bootstrap file")
}
- // prepare config file
- configFilePath := filepath.Join(compactor.workdir, "compact.json")
- if err := compactor.cfg.Dumps(configFilePath); err != nil {
- return "", errors.Wrap(err, "compact err")
- }
outputJSONPath := filepath.Join(compactor.workdir, "compact-result.json")
if err := os.Remove(outputJSONPath); err != nil && !os.IsNotExist(err) {
return "", errors.Wrap(err, "failed to delete old output-json file")

@ -97,7 +92,11 @@ func (compactor *Compactor) Compact(bootstrapPath, chunkDict, backendType, backe
BackendType: backendType,
BackendConfigPath: backendConfigFile,
OutputJSONPath: outputJSONPath,
- CompactConfigPath: configFilePath,
+ MinUsedRatio: compactor.cfg.MinUsedRatio,
+ CompactBlobSize: compactor.cfg.CompactBlobSize,
+ MaxCompactSize: compactor.cfg.MaxCompactSize,
+ LayersToCompact: compactor.cfg.LayersToCompact,
+ BlobsDir: compactor.cfg.BlobsDir,
})
if err != nil {
return "", errors.Wrap(err, "failed to run compact command")

View File

@ -5,11 +5,37 @@
package converter
import (
+ "bytes"
+ "compress/gzip"
"context"
+ "fmt"
+ "io"
"os"
+ "path/filepath"
+ "strconv"
+ "strings"
+ "time"
+ modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
+ ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+ "github.com/containerd/containerd/content"
+ "github.com/containerd/containerd/content/local"
"github.com/containerd/containerd/namespaces"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
+ pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
+ snapConv "github.com/BraveY/snapshotter-converter/converter"
+ "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/external/modctl"
+ "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
+ "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
+ "encoding/json"
+ "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external"
+ "github.com/opencontainers/go-digest"
+ "github.com/opencontainers/image-spec/specs-go"
"github.com/goharbor/acceleration-service/pkg/converter"
"github.com/goharbor/acceleration-service/pkg/platformutil"
"github.com/pkg/errors"
@ -24,6 +50,9 @@ type Opt struct {
Target string
ChunkDictRef string
+ SourceBackendType string
+ SourceBackendConfig string
SourceInsecure bool
TargetInsecure bool
ChunkDictInsecure bool
@ -47,14 +76,31 @@ type Opt struct {
PrefetchPatterns string
OCIRef bool
WithReferrer bool
+ WithPlainHTTP bool
AllPlatforms bool
Platforms string
OutputJSON string
+ PushRetryCount int
+ PushRetryDelay string
}
+ type SourceBackendConfig struct {
+ Context string `json:"context"`
+ WorkDir string `json:"work_dir"`
+ }
func Convert(ctx context.Context, opt Opt) error {
+ if opt.SourceBackendType == "modelfile" {
+ return convertModelFile(ctx, opt)
+ }
+ if opt.SourceBackendType == "model-artifact" {
+ return convertModelArtifact(ctx, opt)
+ }
ctx = namespaces.WithNamespace(ctx, "nydusify")
platformMC, err := platformutil.ParsePlatforms(opt.AllPlatforms, opt.Platforms)
if err != nil {
@ -83,6 +129,15 @@ func Convert(ctx context.Context, opt Opt) error {
}
defer os.RemoveAll(tmpDir)
+ // Parse retry delay
+ retryDelay, err := time.ParseDuration(opt.PushRetryDelay)
+ if err != nil {
+ return errors.Wrap(err, "parse push retry delay")
+ }
+ // Set push retry configuration
+ pvd.SetPushRetryConfig(opt.PushRetryCount, retryDelay)
cvt, err := converter.New(
converter.WithProvider(pvd),
converter.WithDriver("nydus", getConfig(opt)),
@ -98,3 +153,413 @@ func Convert(ctx context.Context, opt Opt) error {
}
return err
}
func convertModelFile(ctx context.Context, opt Opt) error {
if _, err := os.Stat(opt.WorkDir); err != nil {
if errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return errors.Wrap(err, "prepare work directory")
}
// We should only clean up if the work directory did not exist
// before; otherwise we may delete user data by mistake.
defer os.RemoveAll(opt.WorkDir)
} else {
return errors.Wrap(err, "stat work directory")
}
}
tmpDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
defer os.RemoveAll(tmpDir)
attributesPath := filepath.Join(tmpDir, ".nydusattributes")
backendMetaPath := filepath.Join(tmpDir, ".backend.meta")
backendConfigPath := filepath.Join(tmpDir, ".backend.json")
var srcBkdCfg SourceBackendConfig
if err := json.Unmarshal([]byte(opt.SourceBackendConfig), &srcBkdCfg); err != nil {
return errors.Wrap(err, "unmarshal source backend config")
}
modctlHandler, err := newModctlHandler(opt, srcBkdCfg.WorkDir)
if err != nil {
return errors.Wrap(err, "create modctl handler")
}
if err := external.Handle(context.Background(), external.Options{
Dir: srcBkdCfg.WorkDir,
Handler: modctlHandler,
MetaOutput: backendMetaPath,
BackendOutput: backendConfigPath,
AttributesOutput: attributesPath,
}); err != nil {
return errors.Wrap(err, "handle modctl")
}
// Make nydus layer with external blob
packOption := snapConv.PackOption{
BuilderPath: opt.NydusImagePath,
Compressor: opt.Compressor,
FsVersion: opt.FsVersion,
ChunkSize: opt.ChunkSize,
FromDir: srcBkdCfg.Context,
AttributesPath: attributesPath,
}
_, externalBlobDigest, err := packWithAttributes(ctx, packOption, tmpDir)
if err != nil {
return errors.Wrap(err, "pack to blob")
}
bootStrapTarPath, err := packFinalBootstrap(tmpDir, backendConfigPath, externalBlobDigest)
if err != nil {
return errors.Wrap(err, "pack final bootstrap")
}
modelCfg, err := buildModelConfig(modctlHandler)
if err != nil {
return errors.Wrap(err, "build model config")
}
modelLayers := modctlHandler.GetLayers()
nydusImage := buildNydusImage()
return pushManifest(context.Background(), opt, *modelCfg, modelLayers, *nydusImage, bootStrapTarPath)
}
func convertModelArtifact(ctx context.Context, opt Opt) error {
if _, err := os.Stat(opt.WorkDir); err != nil {
if errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return errors.Wrap(err, "prepare work directory")
}
// We should only clean up if the work directory did not exist
// before; otherwise we may delete user data by mistake.
defer os.RemoveAll(opt.WorkDir)
} else {
return errors.Wrap(err, "stat work directory")
}
}
tmpDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
defer os.RemoveAll(tmpDir)
contextDir, err := os.MkdirTemp(tmpDir, "context-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
defer os.RemoveAll(contextDir)
attributesPath := filepath.Join(tmpDir, ".nydusattributes")
backendMetaPath := filepath.Join(tmpDir, ".backend.meta")
backendConfigPath := filepath.Join(tmpDir, ".backend.json")
handler, err := modctl.NewRemoteHandler(ctx, opt.Source, opt.WithPlainHTTP)
if err != nil {
return errors.Wrap(err, "create modctl handler")
}
if err := external.RemoteHandle(ctx, external.Options{
ContextDir: contextDir,
RemoteHandler: handler,
MetaOutput: backendMetaPath,
BackendOutput: backendConfigPath,
AttributesOutput: attributesPath,
}); err != nil {
return errors.Wrap(err, "remote handle")
}
// Make nydus layer with external blob
packOption := snapConv.PackOption{
BuilderPath: opt.NydusImagePath,
Compressor: opt.Compressor,
FsVersion: opt.FsVersion,
ChunkSize: opt.ChunkSize,
FromDir: contextDir,
AttributesPath: attributesPath,
}
_, externalBlobDigest, err := packWithAttributes(ctx, packOption, tmpDir)
if err != nil {
return errors.Wrap(err, "pack to blob")
}
bootStrapTarPath, err := packFinalBootstrap(tmpDir, backendConfigPath, externalBlobDigest)
if err != nil {
return errors.Wrap(err, "pack final bootstrap")
}
modelCfg, err := handler.GetModelConfig()
if err != nil {
return errors.Wrap(err, "build model config")
}
modelLayers := handler.GetLayers()
nydusImage := buildNydusImage()
return pushManifest(context.Background(), opt, *modelCfg, modelLayers, *nydusImage, bootStrapTarPath)
}
func newModctlHandler(opt Opt, workDir string) (*modctl.Handler, error) {
chunkSizeStr := strings.TrimPrefix(opt.ChunkSize, "0x")
chunkSize, err := strconv.ParseUint(chunkSizeStr, 16, 64)
if err != nil {
return nil, errors.Wrap(err, "parse chunk size to uint64")
}
modctlOpt, err := modctl.GetOption(opt.Source, workDir, chunkSize)
if err != nil {
return nil, errors.Wrap(err, "parse modctl option")
}
return modctl.NewHandler(*modctlOpt)
}
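newModctlHandler expects the chunk size as a hex string; strconv.ParseUint with base 16 only accepts the digits, so the "0x" prefix is trimmed first. A standalone sketch of that parsing (illustrative value only):
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	chunkSize := "0x100000" // example value, as used in the tests below
	// Strip the prefix, then parse the remaining hex digits.
	n, err := strconv.ParseUint(strings.TrimPrefix(chunkSize, "0x"), 16, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 1048576
}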
func packWithAttributes(ctx context.Context, packOption snapConv.PackOption, blobDir string) (digest.Digest, digest.Digest, error) {
blob, err := os.CreateTemp(blobDir, "blob-")
if err != nil {
return "", "", errors.Wrap(err, "create temp file for blob")
}
defer blob.Close()
externalBlob, err := os.CreateTemp(blobDir, "external-blob-")
if err != nil {
return "", "", errors.Wrap(err, "create temp file for external blob")
}
defer externalBlob.Close()
blobDigester := digest.Canonical.Digester()
blobWriter := io.MultiWriter(blob, blobDigester.Hash())
externalBlobDigester := digest.Canonical.Digester()
packOption.ExternalBlobWriter = io.MultiWriter(externalBlob, externalBlobDigester.Hash())
_, err = snapConv.Pack(ctx, blobWriter, packOption)
if err != nil {
return "", "", errors.Wrap(err, "pack to blob")
}
blobDigest := blobDigester.Digest()
err = os.Rename(blob.Name(), filepath.Join(blobDir, blobDigest.Hex()))
if err != nil {
return "", "", errors.Wrap(err, "rename blob file")
}
externalBlobDigest := externalBlobDigester.Digest()
err = os.Rename(externalBlob.Name(), filepath.Join(blobDir, externalBlobDigest.Hex()))
if err != nil {
return "", "", errors.Wrap(err, "rename external blob file")
}
return blobDigest, externalBlobDigest, nil
}
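packWithAttributes derives each blob's digest while writing it, by teeing the writer through a digester. A minimal standalone sketch of that pattern with go-digest (not part of the diff):
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/opencontainers/go-digest"
)

func main() {
	f, err := os.CreateTemp("", "blob-")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	digester := digest.Canonical.Digester()
	// Every byte written to w goes to both the file and the hash.
	w := io.MultiWriter(f, digester.Hash())
	if _, err := io.WriteString(w, "example blob content"); err != nil {
		panic(err)
	}
	fmt.Println(digester.Digest()) // sha256:...
}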
// Pack bootstrap and backend config into final bootstrap tar file.
func packFinalBootstrap(workDir, backendConfigPath string, externalBlobDigest digest.Digest) (string, error) {
bkdCfg, err := os.ReadFile(backendConfigPath)
if err != nil {
return "", errors.Wrap(err, "read backend config file")
}
bkdReader := bytes.NewReader(bkdCfg)
files := []snapConv.File{
{
Name: "backend.json",
Reader: bkdReader,
Size: int64(len(bkdCfg)),
},
}
externalBlobRa, err := local.OpenReader(filepath.Join(workDir, externalBlobDigest.Hex()))
if err != nil {
return "", errors.Wrap(err, "open reader for upper blob")
}
bootstrap, err := os.CreateTemp(workDir, "bootstrap-")
if err != nil {
return "", errors.Wrap(err, "create temp file for bootstrap")
}
defer bootstrap.Close()
if _, err := snapConv.UnpackEntry(externalBlobRa, snapConv.EntryBootstrap, bootstrap); err != nil {
return "", errors.Wrap(err, "unpack bootstrap from nydus")
}
files = append(files, snapConv.File{
Name: snapConv.EntryBootstrap,
Reader: content.NewReader(externalBlobRa),
Size: externalBlobRa.Size(),
})
bootStrapTarPath := fmt.Sprintf("%s-final.tar", bootstrap.Name())
bootstrapTar, err := os.Create(bootStrapTarPath)
if err != nil {
return "", errors.Wrap(err, "open bootstrap tar file")
}
defer bootstrapTar.Close()
rc := snapConv.PackToTar(files, false)
defer rc.Close()
println("copy bootstrap to tar file")
if _, err = io.Copy(bootstrapTar, rc); err != nil {
return "", errors.Wrap(err, "copy merged bootstrap")
}
return bootStrapTarPath, nil
}
func buildNydusImage() *parser.Image {
manifest := ocispec.Manifest{
Versioned: specs.Versioned{SchemaVersion: 2},
MediaType: ocispec.MediaTypeImageManifest,
ArtifactType: modelspec.ArtifactTypeModelManifest,
Config: ocispec.Descriptor{
MediaType: modelspec.MediaTypeModelConfig,
},
}
desc := ocispec.Descriptor{
MediaType: ocispec.MediaTypeImageManifest,
}
nydusImage := &parser.Image{
Manifest: manifest,
Desc: desc,
}
return nydusImage
}
func buildModelConfig(modctlHandler *modctl.Handler) (*modelspec.Model, error) {
cfgBytes, err := modctlHandler.GetConfig()
if err != nil {
return nil, errors.Wrap(err, "get modctl config")
}
var modelCfg modelspec.Model
if err := json.Unmarshal(cfgBytes, &modelCfg); err != nil {
return nil, errors.Wrap(err, "unmarshal modctl config")
}
return &modelCfg, nil
}
func pushManifest(
ctx context.Context, opt Opt, modelCfg modelspec.Model, modelLayers []ocispec.Descriptor, nydusImage parser.Image, bootstrapTarPath string,
) error {
// Push image config
configBytes, configDesc, err := makeDesc(modelCfg, nydusImage.Manifest.Config)
if err != nil {
return errors.Wrap(err, "make config desc")
}
remoter, err := pkgPvd.DefaultRemote(opt.Target, opt.TargetInsecure)
if err != nil {
return errors.Wrap(err, "create remote")
}
if opt.WithPlainHTTP {
remoter.WithHTTP()
}
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err)
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
return errors.Wrap(err, "push image config")
}
} else {
return errors.Wrap(err, "push image config")
}
}
// Push bootstrap layer
bootstrapTar, err := os.Open(bootstrapTarPath)
if err != nil {
return errors.Wrap(err, "open bootstrap tar file")
}
bootstrapTarGzPath := bootstrapTarPath + ".gz"
bootstrapTarGz, err := os.Create(bootstrapTarGzPath)
if err != nil {
return errors.Wrap(err, "create bootstrap tar.gz file")
}
defer bootstrapTarGz.Close()
digester := digest.SHA256.Digester()
gzWriter := gzip.NewWriter(io.MultiWriter(bootstrapTarGz, digester.Hash()))
if _, err := io.Copy(gzWriter, bootstrapTar); err != nil {
return errors.Wrap(err, "compress bootstrap tar to tar.gz")
}
if err := gzWriter.Close(); err != nil {
return errors.Wrap(err, "close gzip writer")
}
ra, err := local.OpenReader(bootstrapTarGzPath)
if err != nil {
return errors.Wrap(err, "open reader for upper blob")
}
defer ra.Close()
bootstrapDesc := ocispec.Descriptor{
Digest: digester.Digest(),
Size: ra.Size(),
MediaType: ocispec.MediaTypeImageLayerGzip,
Annotations: map[string]string{
snapConv.LayerAnnotationFSVersion: opt.FsVersion,
snapConv.LayerAnnotationNydusBootstrap: "true",
snapConv.LayerAnnotationNydusArtifactType: modelspec.ArtifactTypeModelManifest,
},
}
bootstrapRc, err := os.Open(bootstrapTarGzPath)
if err != nil {
return errors.Wrapf(err, "open bootstrap %s", bootstrapTarGzPath)
}
defer bootstrapRc.Close()
if err := remoter.Push(ctx, bootstrapDesc, true, bootstrapRc); err != nil {
return errors.Wrap(err, "push bootstrap layer")
}
// Push image manifest
layers := make([]ocispec.Descriptor, 0, len(modelLayers)+1)
layers = append(layers, modelLayers...)
layers = append(layers, bootstrapDesc)
subject, err := getSourceManifestSubject(ctx, opt.Source, opt.SourceInsecure, opt.WithPlainHTTP)
if err != nil {
return errors.Wrap(err, "get source manifest subject")
}
nydusImage.Manifest.Config = *configDesc
nydusImage.Manifest.Layers = layers
nydusImage.Manifest.Subject = subject
manifestBytes, manifestDesc, err := makeDesc(nydusImage.Manifest, nydusImage.Desc)
if err != nil {
return errors.Wrap(err, "make manifest desc")
}
if err := remoter.Push(ctx, *manifestDesc, false, bytes.NewReader(manifestBytes)); err != nil {
return errors.Wrap(err, "push image manifest")
}
return nil
}
func getSourceManifestSubject(ctx context.Context, sourceRef string, insecure, plainHTTP bool) (*ocispec.Descriptor, error) {
remoter, err := pkgPvd.DefaultRemote(sourceRef, insecure)
if err != nil {
return nil, errors.Wrap(err, "create remote")
}
if plainHTTP {
remoter.WithHTTP()
}
desc, err := remoter.Resolve(ctx)
if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err)
desc, err = remoter.Resolve(ctx)
}
if err != nil {
return nil, errors.Wrap(err, "resolve source manifest subject")
}
return desc, nil
}
func makeDesc(x interface{}, oldDesc ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
data, err := json.MarshalIndent(x, "", " ")
if err != nil {
return nil, nil, errors.Wrap(err, "json marshal")
}
dgst := digest.SHA256.FromBytes(data)
newDesc := oldDesc
newDesc.Size = int64(len(data))
newDesc.Digest = dgst
return data, &newDesc, nil
}

View File

@ -0,0 +1,604 @@
package converter
import (
"bytes"
"context"
"errors"
"io"
"os"
"path/filepath"
"testing"
snapConv "github.com/BraveY/snapshotter-converter/converter"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
"github.com/agiledragon/gomonkey/v2"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/local"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/external/modctl"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
)
func TestConvert(t *testing.T) {
t.Run("convert modelfile", func(t *testing.T) {
opt := Opt{
WorkDir: "/tmp/nydusify",
SourceBackendType: "modelfile",
ChunkSize: "4MiB",
SourceBackendConfig: "{}",
}
err := Convert(context.Background(), opt)
assert.Error(t, err)
opt.ChunkSize = "0x1000"
opt.Source = "docker.io/library/busybox:latest"
opt.Target = "docker.io/library/busybox:latest_nydus"
err = Convert(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Convert model-artifact", func(t *testing.T) {
opt := Opt{
WorkDir: "/tmp/nydusify",
SourceBackendType: "model-artifact",
}
err := Convert(context.Background(), opt)
assert.Error(t, err)
})
}
func TestConvertModelFile(t *testing.T) {
opt := Opt{
WorkDir: "/tmp/nydusify",
SourceBackendConfig: "{}",
Source: "docker.io/library/busybox:latest",
Target: "docker.io/library/busybox:latest_nydus",
ChunkSize: "0x100000",
}
t.Run("Run normal", func(t *testing.T) {
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
return &modctl.Handler{}, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
return "", nil
})
defer packFinBootPatches.Reset()
buildModelConfigPatches := gomonkey.ApplyFunc(buildModelConfig, func(*modctl.Handler) (*modelspec.Model, error) {
return &modelspec.Model{}, nil
})
defer buildModelConfigPatches.Reset()
pushManifestPatches := gomonkey.ApplyFunc(pushManifest, func(context.Context, Opt, modelspec.Model, []ocispec.Descriptor, parser.Image, string) error {
return nil
})
defer pushManifestPatches.Reset()
err := convertModelFile(context.Background(), opt)
assert.NoError(t, err)
})
t.Run("Run newModctlHandler failed", func(t *testing.T) {
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
return nil, errors.New("new handler error")
})
defer patches.Reset()
err := convertModelFile(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Run external handle failed", func(t *testing.T) {
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
return &modctl.Handler{}, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
return errors.New("external handle mock error")
})
defer extHandlePatches.Reset()
err := convertModelFile(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Run packFinalBootstrap failed", func(t *testing.T) {
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
return &modctl.Handler{}, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
return "", errors.New("pack final bootstrap mock error")
})
defer packFinBootPatches.Reset()
err := convertModelFile(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Run buildModelConfig failed", func(t *testing.T) {
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
return &modctl.Handler{}, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
return "", nil
})
defer packFinBootPatches.Reset()
buildModelConfigPatches := gomonkey.ApplyFunc(buildModelConfig, func(*modctl.Handler) (*modelspec.Model, error) {
return nil, errors.New("buildModelConfig mock error")
})
defer buildModelConfigPatches.Reset()
err := convertModelFile(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Run pushManifest failed", func(t *testing.T) {
patches := gomonkey.ApplyFunc(modctl.NewHandler, func(modctl.Option) (*modctl.Handler, error) {
return &modctl.Handler{}, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.Handle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
return "", nil
})
defer packFinBootPatches.Reset()
buildModelConfigPatches := gomonkey.ApplyFunc(buildModelConfig, func(*modctl.Handler) (*modelspec.Model, error) {
return &modelspec.Model{}, nil
})
defer buildModelConfigPatches.Reset()
pushManifestPatches := gomonkey.ApplyFunc(pushManifest, func(context.Context, Opt, modelspec.Model, []ocispec.Descriptor, parser.Image, string) error {
return errors.New("pushManifest mock error")
})
defer pushManifestPatches.Reset()
err := convertModelFile(context.Background(), opt)
assert.Error(t, err)
})
}
func TestConvertModelArtifact(t *testing.T) {
opt := Opt{
WorkDir: "/tmp/nydusify",
Source: "docker.io/library/busybox:latest",
Target: "docker.io/library/busybox:latest_nydus",
ChunkSize: "0x100000",
}
t.Run("Run normal", func(t *testing.T) {
mockRemoteHandler := &modctl.RemoteHandler{}
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
return mockRemoteHandler, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
return "", "", nil
})
defer packWithAttributesPatches.Reset()
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
return "", nil
})
defer packFinBootPatches.Reset()
getModelConfigPaches := gomonkey.ApplyMethod(mockRemoteHandler, "GetModelConfig", func() (*modelspec.Model, error) {
return &modelspec.Model{}, nil
})
defer getModelConfigPaches.Reset()
pushManifestPatches := gomonkey.ApplyFunc(pushManifest, func(context.Context, Opt, modelspec.Model, []ocispec.Descriptor, parser.Image, string) error {
return nil
})
defer pushManifestPatches.Reset()
err := convertModelArtifact(context.Background(), opt)
assert.NoError(t, err)
})
t.Run("Run RemoteHandle failed", func(t *testing.T) {
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
return nil, errors.New("remote handler mock error")
})
defer patches.Reset()
err := convertModelArtifact(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Run packWithAttributes failed", func(t *testing.T) {
mockRemoteHandler := &modctl.RemoteHandler{}
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
return mockRemoteHandler, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
return "", "", errors.New("pack with attributes failed mock error")
})
defer packWithAttributesPatches.Reset()
err := convertModelArtifact(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Run packFinalBootstrap failed", func(t *testing.T) {
mockRemoteHandler := &modctl.RemoteHandler{}
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
return mockRemoteHandler, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
return "", "", nil
})
defer packWithAttributesPatches.Reset()
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
return "", errors.New("packFinalBootstrap mock error")
})
defer packFinBootPatches.Reset()
err := convertModelArtifact(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Run GetModelConfig failed", func(t *testing.T) {
mockRemoteHandler := &modctl.RemoteHandler{}
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
return mockRemoteHandler, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
return "", "", nil
})
defer packWithAttributesPatches.Reset()
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
return "", nil
})
defer packFinBootPatches.Reset()
getModelConfigPaches := gomonkey.ApplyMethod(mockRemoteHandler, "GetModelConfig", func() (*modelspec.Model, error) {
return nil, errors.New("run getModelConfig mock error")
})
defer getModelConfigPaches.Reset()
err := convertModelArtifact(context.Background(), opt)
assert.Error(t, err)
})
t.Run("Run pushManifest failed", func(t *testing.T) {
mockRemoteHandler := &modctl.RemoteHandler{}
patches := gomonkey.ApplyFunc(modctl.NewRemoteHandler, func(context.Context, string, bool) (*modctl.RemoteHandler, error) {
return mockRemoteHandler, nil
})
defer patches.Reset()
extHandlePatches := gomonkey.ApplyFunc(external.RemoteHandle, func(context.Context, external.Options) error {
return nil
})
defer extHandlePatches.Reset()
packWithAttributesPatches := gomonkey.ApplyFunc(packWithAttributes, func(context.Context, snapConv.PackOption, string) (digest.Digest, digest.Digest, error) {
return "", "", nil
})
defer packWithAttributesPatches.Reset()
packFinBootPatches := gomonkey.ApplyFunc(packFinalBootstrap, func(string, string, digest.Digest) (string, error) {
return "", nil
})
defer packFinBootPatches.Reset()
getModelConfigPaches := gomonkey.ApplyMethod(mockRemoteHandler, "GetModelConfig", func() (*modelspec.Model, error) {
return &modelspec.Model{}, nil
})
defer getModelConfigPaches.Reset()
pushManifestPatches := gomonkey.ApplyFunc(pushManifest, func(context.Context, Opt, modelspec.Model, []ocispec.Descriptor, parser.Image, string) error {
return errors.New("push manifest mock error")
})
defer pushManifestPatches.Reset()
err := convertModelArtifact(context.Background(), opt)
assert.Error(t, err)
})
}
func TestPackWithAttributes(t *testing.T) {
packOpt := snapConv.PackOption{
BuilderPath: "/tmp/nydus-image",
}
blobDir := "/tmp/nydusify"
os.MkdirAll(blobDir, 0755)
defer os.RemoveAll(blobDir)
_, _, err := packWithAttributes(context.Background(), packOpt, blobDir)
assert.Nil(t, err)
}
type mockReaderAt struct{}
func (m *mockReaderAt) ReadAt([]byte, int64) (n int, err error) {
return 0, errors.New("mock error")
}
func (m *mockReaderAt) Close() error {
return nil
}
func (m *mockReaderAt) Size() int64 {
return 0
}
func TestPackFinalBootstrap(t *testing.T) {
workDir := "/tmp/nydusify"
os.MkdirAll(workDir, 0755)
defer os.RemoveAll(workDir)
cfgPath := filepath.Join(workDir, "backend.json")
os.Create(cfgPath)
extDigest := digest.FromString("abc1234")
mockReaderAt := &mockReaderAt{}
t.Run("Run local OpenReader failed", func(t *testing.T) {
_, err := packFinalBootstrap(workDir, cfgPath, extDigest)
assert.Error(t, err)
})
t.Run("Run unpack entry failed", func(t *testing.T) {
openReaderPatches := gomonkey.ApplyFunc(local.OpenReader, func(string) (content.ReaderAt, error) {
return mockReaderAt, nil
})
defer openReaderPatches.Reset()
_, err := packFinalBootstrap(workDir, cfgPath, extDigest)
assert.Error(t, err)
})
t.Run("Run normal", func(t *testing.T) {
openReaderPatches := gomonkey.ApplyFunc(local.OpenReader, func(string) (content.ReaderAt, error) {
return mockReaderAt, nil
})
defer openReaderPatches.Reset()
unpackEntryPatches := gomonkey.ApplyFunc(snapConv.UnpackEntry, func(content.ReaderAt, string, io.Writer) (*snapConv.TOCEntry, error) {
return &snapConv.TOCEntry{}, nil
})
defer unpackEntryPatches.Reset()
packToTarPaches := gomonkey.ApplyFunc(snapConv.PackToTar, func([]snapConv.File, bool) io.ReadCloser {
var buff bytes.Buffer
return io.NopCloser(&buff)
})
defer packToTarPaches.Reset()
ioCopyPatches := gomonkey.ApplyFunc(io.Copy, func(io.Writer, io.Reader) (int64, error) {
return 0, nil
})
defer ioCopyPatches.Reset()
_, err := packFinalBootstrap(workDir, cfgPath, extDigest)
assert.NoError(t, err)
})
}
func TestBuildNydusImage(t *testing.T) {
image := buildNydusImage()
assert.NotNil(t, image)
}
func TestMakeDesc(t *testing.T) {
input := "test"
oldDesc := ocispec.Descriptor{
MediaType: "test",
}
_, _, err := makeDesc(input, oldDesc)
assert.NoError(t, err)
}
func TestBuildModelConfig(t *testing.T) {
modctlHander := &modctl.Handler{}
_, err := buildModelConfig(modctlHander)
assert.Error(t, err)
}
func TestPushManifest(t *testing.T) {
remoter := &remote.Remote{}
t.Run("Run make desc failed", func(t *testing.T) {
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
return nil, nil, errors.New("make desc mock error")
})
defer makeDescPatches.Reset()
err := pushManifest(context.Background(), Opt{}, modelspec.Model{}, nil, parser.Image{}, "")
assert.Error(t, err)
})
t.Run("Run default remote failed", func(t *testing.T) {
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
return []byte{}, &ocispec.Descriptor{}, nil
})
defer makeDescPatches.Reset()
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return nil, errors.New("default remote failed mock error")
})
defer defaultRemotePatches.Reset()
err := pushManifest(context.Background(), Opt{}, modelspec.Model{}, nil, parser.Image{}, "")
assert.Error(t, err)
})
t.Run("Run push failed", func(t *testing.T) {
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
return []byte{}, &ocispec.Descriptor{}, nil
})
defer makeDescPatches.Reset()
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return remoter, nil
})
defer defaultRemotePatches.Reset()
pushPatches := gomonkey.ApplyMethod(remoter, "Push", func(*remote.Remote, context.Context, ocispec.Descriptor, bool, io.Reader) error {
return errors.New("push mock timeout error")
})
defer pushPatches.Reset()
err := pushManifest(context.Background(), Opt{WithPlainHTTP: true}, modelspec.Model{}, nil, parser.Image{}, "")
assert.Error(t, err)
})
t.Run("Run open failed", func(t *testing.T) {
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
return []byte{}, &ocispec.Descriptor{}, nil
})
defer makeDescPatches.Reset()
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return remoter, nil
})
defer defaultRemotePatches.Reset()
pushPatches := gomonkey.ApplyMethod(remoter, "Push", func(*remote.Remote, context.Context, ocispec.Descriptor, bool, io.Reader) error {
return nil
})
defer pushPatches.Reset()
err := pushManifest(context.Background(), Opt{WithPlainHTTP: true}, modelspec.Model{}, nil, parser.Image{}, "")
assert.Error(t, err)
})
t.Run("Run getSourceManifestSubject failed", func(t *testing.T) {
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
return []byte{}, &ocispec.Descriptor{}, nil
})
defer makeDescPatches.Reset()
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return remoter, nil
})
defer defaultRemotePatches.Reset()
pushPatches := gomonkey.ApplyMethod(remoter, "Push", func(*remote.Remote, context.Context, ocispec.Descriptor, bool, io.Reader) error {
return nil
})
defer pushPatches.Reset()
bootstrapPath := "/tmp/nydusify/bootstrap"
os.Mkdir("/tmp/nydusify/", 0755)
os.Create(bootstrapPath)
defer os.RemoveAll("/tmp/nydusify/")
defer os.Remove(bootstrapPath)
getSourceManifestSubjectPatches := gomonkey.ApplyFunc(getSourceManifestSubject, func(context.Context, string, bool, bool) (*ocispec.Descriptor, error) {
return nil, errors.New("get source manifest subject mock error")
})
defer getSourceManifestSubjectPatches.Reset()
err := pushManifest(context.Background(), Opt{WithPlainHTTP: true}, modelspec.Model{}, nil, parser.Image{}, bootstrapPath)
assert.Error(t, err)
})
t.Run("Run normal", func(t *testing.T) {
makeDescPatches := gomonkey.ApplyFunc(makeDesc, func(interface{}, ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
return []byte{}, &ocispec.Descriptor{}, nil
})
defer makeDescPatches.Reset()
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return remoter, nil
})
defer defaultRemotePatches.Reset()
pushPatches := gomonkey.ApplyMethod(remoter, "Push", func(*remote.Remote, context.Context, ocispec.Descriptor, bool, io.Reader) error {
return nil
})
defer pushPatches.Reset()
bootstrapPath := "/tmp/nydusify/bootstrap"
os.Mkdir("/tmp/nydusify/", 0755)
os.Create(bootstrapPath)
defer os.RemoveAll("/tmp/nydusify/")
defer os.Remove(bootstrapPath)
getSourceManifestSubjectPatches := gomonkey.ApplyFunc(getSourceManifestSubject, func(context.Context, string, bool, bool) (*ocispec.Descriptor, error) {
return &ocispec.Descriptor{}, nil
})
defer getSourceManifestSubjectPatches.Reset()
err := pushManifest(context.Background(), Opt{WithPlainHTTP: true}, modelspec.Model{}, nil, parser.Image{}, bootstrapPath)
assert.NoError(t, err)
})
}
func TestGetSourceManifestSubject(t *testing.T) {
remoter := &remote.Remote{}
t.Run("Run default remote failed", func(t *testing.T) {
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return nil, errors.New("default remote failed mock error")
})
defer defaultRemotePatches.Reset()
_, err := getSourceManifestSubject(context.Background(), "", false, false)
assert.Error(t, err)
})
t.Run("Run resolve failed", func(t *testing.T) {
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return remoter, nil
})
defer defaultRemotePatches.Reset()
remoterReolvePatches := gomonkey.ApplyMethod(remoter, "Resolve", func(*remote.Remote, context.Context) (*ocispec.Descriptor, error) {
return nil, errors.New("resolve failed mock error timeout")
})
defer remoterReolvePatches.Reset()
_, err := getSourceManifestSubject(context.Background(), "", false, false)
assert.Error(t, err)
})
t.Run("Run normal", func(t *testing.T) {
defaultRemotePatches := gomonkey.ApplyFunc(pkgPvd.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return remoter, nil
})
defer defaultRemotePatches.Reset()
remoterReolvePatches := gomonkey.ApplyMethod(remoter, "Resolve", func(*remote.Remote, context.Context) (*ocispec.Descriptor, error) {
return &ocispec.Descriptor{}, nil
})
defer remoterReolvePatches.Reset()
desc, err := getSourceManifestSubject(context.Background(), "", false, false)
assert.NoError(t, err)
assert.NotNil(t, desc)
})
}

View File

@ -22,26 +22,30 @@ import (
"github.com/containerd/containerd/platforms" "github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/remotes" "github.com/containerd/containerd/remotes"
"github.com/containerd/containerd/remotes/docker" "github.com/containerd/containerd/remotes/docker"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
"github.com/goharbor/acceleration-service/pkg/cache" "github.com/goharbor/acceleration-service/pkg/cache"
accelcontent "github.com/goharbor/acceleration-service/pkg/content" accelcontent "github.com/goharbor/acceleration-service/pkg/content"
"github.com/goharbor/acceleration-service/pkg/remote" "github.com/goharbor/acceleration-service/pkg/remote"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
) )
var LayerConcurrentLimit = 5 var LayerConcurrentLimit = 5
type Provider struct { type Provider struct {
mutex sync.Mutex mutex sync.Mutex
usePlainHTTP bool usePlainHTTP bool
images map[string]*ocispec.Descriptor images map[string]*ocispec.Descriptor
store content.Store store content.Store
hosts remote.HostFunc hosts remote.HostFunc
platformMC platforms.MatchComparer platformMC platforms.MatchComparer
cacheSize int cacheSize int
cacheVersion string cacheVersion string
chunkSize int64 chunkSize int64
pushRetryCount int
pushRetryDelay time.Duration
} }
func New(root string, hosts remote.HostFunc, cacheSize uint, cacheVersion string, platformMC platforms.MatchComparer, chunkSize int64) (*Provider, error) { func New(root string, hosts remote.HostFunc, cacheSize uint, cacheVersion string, platformMC platforms.MatchComparer, chunkSize int64) (*Provider, error) {
@ -55,13 +59,15 @@ func New(root string, hosts remote.HostFunc, cacheSize uint, cacheVersion string
}
return &Provider{
images: make(map[string]*ocispec.Descriptor),
store: store,
hosts: hosts,
cacheSize: int(cacheSize),
platformMC: platformMC,
cacheVersion: cacheVersion,
chunkSize: chunkSize,
+ pushRetryCount: 3,
+ pushRetryDelay: 5 * time.Second,
}, nil
}
@ -142,6 +148,14 @@ func (pvd *Provider) Pull(ctx context.Context, ref string) error {
return nil
}
+ // SetPushRetryConfig sets the retry configuration for push operations
+ func (pvd *Provider) SetPushRetryConfig(count int, delay time.Duration) {
+ pvd.mutex.Lock()
+ defer pvd.mutex.Unlock()
+ pvd.pushRetryCount = count
+ pvd.pushRetryDelay = delay
+ }
func (pvd *Provider) Push(ctx context.Context, desc ocispec.Descriptor, ref string) error {
resolver, err := pvd.Resolver(ref)
if err != nil {
@ -153,7 +167,15 @@ func (pvd *Provider) Push(ctx context.Context, desc ocispec.Descriptor, ref stri
MaxConcurrentUploadedLayers: LayerConcurrentLimit,
}
- return push(ctx, pvd.store, rc, desc, ref)
+ err = utils.WithRetry(func() error {
+ return push(ctx, pvd.store, rc, desc, ref)
+ }, pvd.pushRetryCount, pvd.pushRetryDelay)
+ if err != nil {
+ logrus.WithError(err).Error("Push failed after all attempts")
+ }
+ return err
}
func (pvd *Provider) Import(ctx context.Context, reader io.Reader) (string, error) {
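The push path now wraps push() in utils.WithRetry. The helper's body is not shown in this diff; a plausible sketch matching only the call signature used here (operation, attempt count, fixed delay), with the helper name and behavior assumed rather than taken from the source, and assuming time and logrus are imported:
// Illustrative only: the real utils.WithRetry may behave differently.
func WithRetry(op func() error, count int, delay time.Duration) error {
	if count <= 0 {
		count = 1
	}
	var err error
	for i := 0; i < count; i++ {
		if err = op(); err == nil {
			return nil
		}
		logrus.WithError(err).Warnf("operation failed, retrying in %s (%d/%d)", delay, i+1, count)
		time.Sleep(delay)
	}
	return err
}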

View File

@ -13,14 +13,15 @@ import (
"path/filepath" "path/filepath"
"strings" "strings"
"github.com/BraveY/snapshotter-converter/converter"
"github.com/containerd/containerd/archive/compression" "github.com/containerd/containerd/archive/compression"
"github.com/containerd/containerd/content" "github.com/containerd/containerd/content"
containerdErrdefs "github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images" "github.com/containerd/containerd/images"
"github.com/containerd/containerd/namespaces" "github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/platforms" "github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/reference/docker" "github.com/containerd/containerd/reference/docker"
"github.com/containerd/nydus-snapshotter/pkg/converter" "github.com/containerd/containerd/remotes"
containerdErrdefs "github.com/containerd/errdefs"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/backend" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/backend"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/checker/tool"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
@ -76,7 +77,7 @@ func hosts(opt Opt) remote.HostFunc {
}
}
- func getPushWriter(ctx context.Context, pvd *provider.Provider, desc ocispec.Descriptor, opt Opt) (content.Writer, error) {
+ func getPusherInChunked(ctx context.Context, pvd *provider.Provider, desc ocispec.Descriptor, opt Opt) (remotes.PusherInChunked, error) {
resolver, err := pvd.Resolver(opt.Target)
if err != nil {
return nil, errors.Wrap(err, "get resolver")
@ -85,18 +86,13 @@ func getPushWriter(ctx context.Context, pvd *provider.Provider, desc ocispec.Des
if !strings.Contains(ref, "@") {
ref = ref + "@" + desc.Digest.String()
}
- pusher, err := resolver.Pusher(ctx, ref)
+ pusherInChunked, err := resolver.PusherInChunked(ctx, ref)
if err != nil {
- return nil, errors.Wrap(err, "create pusher")
+ return nil, errors.Wrap(err, "create pusher in chunked")
}
- writer, err := pusher.Push(ctx, desc)
- if err != nil {
- if containerdErrdefs.IsAlreadyExists(err) {
- return nil, nil
- }
- return nil, err
- }
- return writer, nil
+ return pusherInChunked, nil
}
func pushBlobFromBackend( func pushBlobFromBackend(
@ -167,11 +163,6 @@ func pushBlobFromBackend(
blobSizeStr := humanize.Bytes(uint64(blobSize))
logrus.WithField("digest", blobDigest).WithField("size", blobSizeStr).Infof("pushing blob from backend")
- rc, err := backend.Reader(blobID)
- if err != nil {
- return errors.Wrap(err, "get blob reader")
- }
- defer rc.Close()
blobDescs[idx] = ocispec.Descriptor{
Digest: blobDigest,
Size: blobSize,
@ -180,22 +171,61 @@ func pushBlobFromBackend(
converter.LayerAnnotationNydusBlob: "true",
},
}
- writer, err := getPushWriter(ctx, pvd, blobDescs[idx], opt)
- if err != nil {
- if errdefs.NeedsRetryWithHTTP(err) {
- pvd.UsePlainHTTP()
- writer, err = getPushWriter(ctx, pvd, blobDescs[idx], opt)
- }
- if err != nil {
- return errors.Wrap(err, "get push writer")
- }
- }
- if writer != nil {
- defer writer.Close()
- return content.Copy(ctx, writer, rc, blobSize, blobDigest)
- }
- logrus.WithField("digest", blobDigest).WithField("size", blobSizeStr).Infof("pushed blob from backend")
+ if err := nydusifyUtils.RetryWithAttempts(func() error {
+ pusher, err := getPusherInChunked(ctx, pvd, blobDescs[idx], opt)
+ if err != nil {
+ if errdefs.NeedsRetryWithHTTP(err) {
+ pvd.UsePlainHTTP()
+ pusher, err = getPusherInChunked(ctx, pvd, blobDescs[idx], opt)
+ }
+ if err != nil {
+ return errors.Wrapf(err, "get push writer: %s", blobDigest)
+ }
+ }
+ push := func() error {
+ if blobSize > opt.PushChunkSize {
+ rr, err := backend.RangeReader(blobID)
+ if err != nil {
+ return errors.Wrapf(err, "get push reader: %s", blobDigest)
+ }
+ if err := pusher.PushInChunked(ctx, blobDescs[idx], rr); err != nil {
+ return errors.Wrapf(err, "push blob in chunked: %s", blobDigest)
+ }
+ } else {
+ rc, err := backend.Reader(blobID)
+ if err != nil {
+ return errors.Wrap(err, "get blob reader")
+ }
+ defer rc.Close()
+ writer, err := pusher.Push(ctx, blobDescs[idx])
+ if err != nil {
+ return errors.Wrapf(err, "get push writer: %s", blobDigest)
+ }
+ if writer != nil {
+ defer writer.Close()
+ if err := content.Copy(ctx, writer, rc, blobSize, blobDigest); err != nil {
+ return errors.Wrapf(err, "push blob: %s", blobDigest)
+ }
+ }
+ }
+ return nil
+ }
+ if err := push(); err != nil {
+ if containerdErrdefs.IsAlreadyExists(err) {
+ logrus.WithField("digest", blobDigest).WithField("size", blobSizeStr).Infof("pushed blob from backend (exists)")
+ return nil
+ }
+ return errors.Wrapf(err, "copy blob content: %s", blobDigest)
+ }
+ logrus.WithField("digest", blobDigest).WithField("size", blobSizeStr).Infof("pushed blob from backend")
+ return nil
+ }, 3); err != nil {
+ return errors.Wrapf(err, "push blob: %s", blobDigest)
+ }
return nil
})

View File

@ -0,0 +1,383 @@
// Copyright 2025 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package modctl
import (
"archive/tar"
"context"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/dustin/go-humanize"
"github.com/pkg/errors"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
const (
BlobPath = "/content.v1/docker/registry/v2/blobs/%s/%s/%s/data"
ReposPath = "/content.v1/docker/registry/v2/repositories"
ManifestPath = "/_manifests/tags/%s/current/link"
ModelWeightMediaType = "application/vnd.cnai.model.weight.v1.tar"
ModelDatasetMediaType = "application/vnd.cnai.model.dataset.v1.tar"
)
const (
DefaultFileChunkSize = "4MiB"
)
var mediaTypeChunkSizeMap = map[string]string{
ModelWeightMediaType: "64MiB",
ModelDatasetMediaType: "64MiB",
}
var _ backend.Handler = &Handler{}
type Handler struct {
root string
registryHost string
namespace string
imageName string
tag string
manifest ocispec.Manifest
blobs []backend.Blob
// key is the blob's sha256 digest, value is the blob's media type and index
blobsMap map[string]blobInfo
// config layer in modctl's manifest
blobConfig ocispec.Descriptor
objectID uint32
}
type blobInfo struct {
mediaType string
// Index in the blobs array
blobIndex uint32
blobDigest string
blobSize string
}
type chunk struct {
blobDigest string
blobSize string
objectID uint32
objectContent Object
objectOffset uint64
}
// ObjectID returns the blob index of the chunk
func (c *chunk) ObjectID() uint32 {
return c.objectID
}
func (c *chunk) ObjectContent() interface{} {
return c.objectContent
}
// ObjectOffset returns the offset of the chunk in the blob file
func (c *chunk) ObjectOffset() uint64 {
return c.objectOffset
}
func (c *chunk) FilePath() string {
return c.objectContent.Path
}
func (c *chunk) LimitChunkSize() string {
return c.objectContent.ChunkSize
}
func (c *chunk) BlobDigest() string {
return c.blobDigest
}
func (c *chunk) BlobSize() string {
return c.blobSize
}
type Object struct {
Path string
ChunkSize string
}
type fileInfo struct {
name string
mode uint32
size uint64
offset uint64
}
type Option struct {
Root string `json:"root"`
RegistryHost string `json:"registry_host"`
Namespace string `json:"namespace"`
ImageName string `json:"image_name"`
Tag string `json:"tag"`
WeightChunkSize uint64 `json:"weightChunkSize"`
}
func setWeightChunkSize(chunkSize uint64) {
if chunkSize == 0 {
chunkSize = 64 * 1024 * 1024
}
chunkSizeStr := humanize.IBytes(chunkSize)
// remove the space in chunkSizeStr: `16 MiB` -> `16MiB`
chunkSizeStr = strings.ReplaceAll(chunkSizeStr, " ", "")
mediaTypeChunkSizeMap[ModelWeightMediaType] = chunkSizeStr
mediaTypeChunkSizeMap[ModelDatasetMediaType] = chunkSizeStr
}
func getChunkSizeByMediaType(mediaType string) string {
if chunkSize, ok := mediaTypeChunkSizeMap[mediaType]; ok {
return chunkSize
}
return DefaultFileChunkSize
}
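setWeightChunkSize normalizes the configured size through humanize.IBytes and strips the space (64 << 20 becomes "64MiB"), and getChunkSizeByMediaType falls back to DefaultFileChunkSize ("4MiB") for unrecognized media types. A hypothetical snippet (same package, fmt already imported) illustrating that mapping:
// exampleChunkSizes is illustrative only and not part of the diff.
func exampleChunkSizes() {
	setWeightChunkSize(16 * 1024 * 1024)
	fmt.Println(getChunkSizeByMediaType(ModelWeightMediaType))        // "16MiB"
	fmt.Println(getChunkSizeByMediaType("application/vnd.oci.other")) // "4MiB" (default)
}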
func NewHandler(opt Option) (*Handler, error) {
handler := &Handler{
root: opt.Root,
registryHost: opt.RegistryHost,
namespace: opt.Namespace,
imageName: opt.ImageName,
tag: opt.Tag,
objectID: 0,
blobsMap: make(map[string]blobInfo),
}
if opt.WeightChunkSize != 0 {
setWeightChunkSize(opt.WeightChunkSize)
}
if err := initHandler(handler); err != nil {
return nil, errors.Wrap(err, "init handler")
}
return handler, nil
}
func initHandler(handler *Handler) error {
m, err := handler.extractManifest()
if err != nil {
return errors.Wrap(err, "extract manifest failed")
}
handler.manifest = *m
handler.blobs = convertToBlobs(&handler.manifest)
handler.setBlobConfig(m)
handler.setBlobsMap()
return nil
}
func GetOption(srcRef, modCtlRoot string, weightChunkSize uint64) (*Option, error) {
parts := strings.Split(srcRef, "/")
if len(parts) != 3 {
return nil, errors.Errorf("invalid source ref: %s", srcRef)
}
nameTagParts := strings.Split(parts[2], ":")
if len(nameTagParts) != 2 {
return nil, errors.New("invalid source ref for name and tag")
}
opt := Option{
Root: modCtlRoot,
RegistryHost: parts[0],
Namespace: parts[1],
ImageName: nameTagParts[0],
Tag: nameTagParts[1],
WeightChunkSize: weightChunkSize,
}
return &opt, nil
}
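GetOption only accepts a source reference of exactly three slash-separated parts, host/namespace/name:tag. A hypothetical example (made-up reference and root path, fmt already imported in this package):
// exampleGetOption is illustrative only.
func exampleGetOption() {
	opt, err := GetOption("registry.example.com/ai-models/llama:v1", "/tmp/modctl-root", 64*1024*1024)
	if err != nil {
		panic(err)
	}
	fmt.Println(opt.RegistryHost, opt.Namespace, opt.ImageName, opt.Tag)
	// registry.example.com ai-models llama v1
}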
func (handler *Handler) Handle(_ context.Context, file backend.File) ([]backend.Chunk, error) {
chunks := []backend.Chunk{}
needIgnore, blobInfo := handler.needIgnore(file.RelativePath)
if needIgnore {
return nil, nil
}
chunkSize := getChunkSizeByMediaType(blobInfo.mediaType)
// read the tar file and collect the metadata of its entries
f, err := os.Open(filepath.Join(handler.root, file.RelativePath))
if err != nil {
return nil, errors.Wrap(err, "open tar file failed")
}
defer f.Close()
files, err := readTarBlob(f)
if err != nil {
return nil, errors.Wrap(err, "read blob failed")
}
chunkSizeInInt, err := humanize.ParseBytes(chunkSize)
if err != nil {
return nil, errors.Wrap(err, "parse chunk size failed")
}
for _, f := range files {
objectOffsets := backend.SplitObjectOffsets(int64(f.size), int64(chunkSizeInInt))
for _, objectOffset := range objectOffsets {
chunks = append(chunks, &chunk{
blobDigest: blobInfo.blobDigest,
blobSize: blobInfo.blobSize,
objectID: blobInfo.blobIndex,
objectContent: Object{
Path: f.name,
ChunkSize: chunkSize,
},
objectOffset: f.offset + objectOffset,
})
}
}
handler.objectID++
return chunks, nil
}
func (handler *Handler) Backend(context.Context) (*backend.Backend, error) {
bkd := backend.Backend{
Version: "v1",
}
bkd.Backends = []backend.Config{
{
Type: "registry",
},
}
bkd.Blobs = handler.blobs
return &bkd, nil
}
func (handler *Handler) GetConfig() ([]byte, error) {
return handler.extractBlobs(handler.blobConfig.Digest.String())
}
func (handler *Handler) GetLayers() []ocispec.Descriptor {
return handler.manifest.Layers
}
func (handler *Handler) setBlobConfig(m *ocispec.Manifest) {
handler.blobConfig = m.Config
}
func (handler *Handler) setBlobsMap() {
for i, blob := range handler.blobs {
handler.blobsMap[blob.Config.Digest] = blobInfo{
mediaType: blob.Config.MediaType,
blobIndex: uint32(i),
blobDigest: blob.Config.Digest,
blobSize: blob.Config.Size,
}
}
}
func (handler *Handler) extractManifest() (*ocispec.Manifest, error) {
tagPath := fmt.Sprintf(ManifestPath, handler.tag)
manifestPath := filepath.Join(handler.root, ReposPath, handler.registryHost, handler.namespace, handler.imageName, tagPath)
line, err := os.ReadFile(manifestPath)
if err != nil {
return nil, errors.Wrap(err, "read manifest digest file failed")
}
content, err := handler.extractBlobs(string(line))
if err != nil {
return nil, errors.Wrap(err, "extract blobs failed")
}
var m ocispec.Manifest
if err := json.Unmarshal(content, &m); err != nil {
return nil, errors.Wrap(err, "unmarshal manifest blobs file failed")
}
return &m, nil
}
func (handler *Handler) extractBlobs(digest string) ([]byte, error) {
line := strings.TrimSpace(digest)
digestSplit := strings.Split(line, ":")
if len(digestSplit) != 2 {
return nil, errors.New("invalid digest string")
}
blobPath := fmt.Sprintf(BlobPath, digestSplit[0], digestSplit[1][:2], digestSplit[1])
blobPath = filepath.Join(handler.root, blobPath)
content, err := os.ReadFile(blobPath)
if err != nil {
return nil, errors.Wrap(err, "read blobs file failed")
}
return content, nil
}
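extractBlobs maps a digest string onto the registry blob layout above, using the algorithm, the first two hex characters, and the full hex digest as the path components. A hypothetical illustration (same package; example digest only):
// exampleBlobPath is illustrative only.
func exampleBlobPath() {
	d := "sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
	parts := strings.Split(d, ":")
	fmt.Println(fmt.Sprintf(BlobPath, parts[0], parts[1][:2], parts[1]))
	// /content.v1/docker/registry/v2/blobs/sha256/9f/9f86d081.../data
}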
func convertToBlobs(m *ocispec.Manifest) []backend.Blob {
createBlob := func(layer ocispec.Descriptor) backend.Blob {
digestStr := layer.Digest.String()
digestParts := strings.Split(digestStr, ":")
if len(digestParts) == 2 {
digestStr = digestParts[1]
}
chunkSize := getChunkSizeByMediaType(layer.MediaType)
return backend.Blob{
Backend: 0,
Config: backend.BlobConfig{
MediaType: layer.MediaType,
Digest: digestStr,
Size: fmt.Sprintf("%d", layer.Size),
ChunkSize: chunkSize,
},
}
}
blobs := make([]backend.Blob, len(m.Layers))
for i, layer := range m.Layers {
blobs[i] = createBlob(layer)
}
return blobs
}
func (handler *Handler) needIgnore(relPath string) (bool, *blobInfo) {
// ignore manifest link file
if strings.HasSuffix(relPath, "link") {
return true, nil
}
// ignore blob files belonging to other images
parts := strings.Split(relPath, "/")
if len(parts) < 3 {
return true, nil
}
digest := parts[len(parts)-2]
blobInfo, ok := handler.blobsMap[digest]
if !ok {
return true, nil
}
return false, &blobInfo
}
func readTarBlob(r io.ReadSeeker) ([]fileInfo, error) {
var files []fileInfo
tarReader := tar.NewReader(r)
for {
header, err := tarReader.Next()
if err != nil {
if err == io.EOF {
break
}
return nil, errors.Wrap(err, "read tar file failed")
}
currentOffset, err := r.Seek(0, io.SeekCurrent)
if err != nil {
return nil, errors.Wrap(err, "seek tar file failed")
}
files = append(files, fileInfo{
name: header.Name,
mode: uint32(header.Mode),
size: uint64(header.Size),
offset: uint64(currentOffset),
})
}
return files, nil
}

View File

@ -0,0 +1,379 @@
package modctl
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/agiledragon/gomonkey/v2"
"github.com/dustin/go-humanize"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// TestReadImageRefBlob exercises readTarBlob against a real model image reference
func TestReadImageRefBlob(t *testing.T) {
ctx := context.Background()
targetRef := os.Getenv("NYDUS_MODEL_IMAGE_REF")
if targetRef == "" {
t.Skip("NYDUS_MODEL_IMAGE_REF is not set, skip test")
}
remoter, err := pkgPvd.DefaultRemote(targetRef, true)
require.Nil(t, err)
// Pull manifest
maniDesc, err := remoter.Resolve(ctx)
require.Nil(t, err)
t.Logf("manifest desc: %v", maniDesc)
rc, err := remoter.Pull(ctx, *maniDesc, true)
require.Nil(t, err)
defer rc.Close()
var buf bytes.Buffer
io.Copy(&buf, rc)
var manifest ocispec.Manifest
err = json.Unmarshal(buf.Bytes(), &manifest)
require.Nil(t, err)
t.Logf("manifest: %v", manifest)
for _, layer := range manifest.Layers {
startTime := time.Now()
rsc, err := remoter.ReadSeekCloser(context.Background(), layer, true)
require.Nil(t, err)
defer rsc.Close()
rs, ok := rsc.(io.ReadSeeker)
require.True(t, ok)
files, err := readTarBlob(rs)
require.Nil(t, err)
require.NotEqual(t, 0, len(files))
t.Logf("files: %v, elapesed: %v", files, time.Since(startTime))
}
}
// MockReadSeeker is a mock implementation of io.ReadSeeker
type MockReadSeeker struct {
mock.Mock
}
func (m *MockReadSeeker) Read(p []byte) (n int, err error) {
args := m.Called(p)
return args.Int(0), args.Error(1)
}
func (m *MockReadSeeker) Seek(offset int64, whence int) (int64, error) {
args := m.Called(offset, whence)
return args.Get(0).(int64), args.Error(1)
}
func TestReadTarBlob(t *testing.T) {
t.Run("Normal case: valid tar file", func(t *testing.T) {
// Create a valid tar file in memory
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
files := []struct {
name string
size int64
}{
{"file1.txt", 10},
{"file2.txt", 20},
{"file3.txt", 30},
}
for _, file := range files {
header := &tar.Header{
Name: file.name,
Size: file.size,
}
if err := tw.WriteHeader(header); err != nil {
t.Fatalf("Failed to write tar header: %v", err)
}
if _, err := tw.Write(make([]byte, file.size)); err != nil {
t.Fatalf("Failed to write tar content: %v", err)
}
}
tw.Close()
reader := bytes.NewReader(buf.Bytes()) // Convert *bytes.Buffer to io.ReadSeeker
result, err := readTarBlob(reader)
assert.NoError(t, err)
assert.Len(t, result, len(files))
for i, file := range files {
assert.Equal(t, file.name, result[i].name)
// Since the file size is less than 512 bytes, it will be padded to 512 bytes in the tar body.
assert.Equal(t, uint64((2*i+1)*512), result[i].offset)
assert.Equal(t, uint64(file.size), result[i].size)
}
})
t.Run("Empty tar file", func(t *testing.T) {
// Create an empty tar file in memory
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
tw.Close()
// Call the function
reader := bytes.NewReader(buf.Bytes()) // Convert *bytes.Buffer to io.ReadSeeker
result, err := readTarBlob(reader)
// Validate the result
assert.NoError(t, err)
assert.Empty(t, result)
})
t.Run("I/O error during read", func(t *testing.T) {
// Create a mock ReadSeeker that returns an error on Read
mockReader := new(MockReadSeeker)
mockReader.On("Read", mock.Anything).Return(0, errors.New("mock read error"))
mockReader.On("Seek", mock.Anything, mock.Anything).Return(int64(0), nil)
// Call the function
_, err := readTarBlob(mockReader)
// Validate the error
assert.Error(t, err)
assert.Contains(t, err.Error(), "read tar file failed")
})
}
func TestGetOption(t *testing.T) {
t.Run("Valid srcRef", func(t *testing.T) {
srcRef := "host/namespace/image:tag"
modCtlRoot := "/mock/root"
weightChunkSize := uint64(64 * 1024 * 1024)
opt, err := GetOption(srcRef, modCtlRoot, weightChunkSize)
assert.NoError(t, err)
assert.Equal(t, "host", opt.RegistryHost)
assert.Equal(t, "namespace", opt.Namespace)
assert.Equal(t, "image", opt.ImageName)
assert.Equal(t, "tag", opt.Tag)
assert.Equal(t, weightChunkSize, opt.WeightChunkSize)
})
t.Run("Invalid srcRef format", func(t *testing.T) {
srcRef := "invalid-ref"
modCtlRoot := "/mock/root"
weightChunkSize := uint64(64 * 1024 * 1024)
_, err := GetOption(srcRef, modCtlRoot, weightChunkSize)
assert.Error(t, err)
})
}
func TestHandle(t *testing.T) {
handler := &Handler{
root: "/tmp",
}
t.Run("File ignored", func(t *testing.T) {
file := backend.File{RelativePath: "ignored-file/link"}
chunks, err := handler.Handle(context.Background(), file)
assert.NoError(t, err)
assert.Nil(t, chunks)
})
handler.blobsMap = make(map[string]blobInfo)
handler.blobsMap["test_digest"] = blobInfo{
mediaType: ModelWeightMediaType,
}
t.Run("Open file failure", func(t *testing.T) {
file := backend.File{RelativePath: "test/test_digest/nonexistent-file"}
_, err := handler.Handle(context.Background(), file)
assert.Error(t, err)
assert.Contains(t, err.Error(), "open tar file failed")
})
t.Run("Normal", func(t *testing.T) {
os.MkdirAll("/tmp/test/test_digest/", 0755)
testFile, err := os.CreateTemp("/tmp/test/test_digest/", "test_tar")
assert.NoError(t, err)
defer testFile.Close()
defer os.RemoveAll(testFile.Name())
tw := tar.NewWriter(testFile)
header := &tar.Header{
Name: "test.txt",
Mode: 0644,
Size: 4,
}
assert.NoError(t, tw.WriteHeader(header))
_, err = tw.Write([]byte("test"))
assert.NoError(t, err)
tw.Close()
testFilePath := strings.TrimPrefix(testFile.Name(), "/tmp/")
file := backend.File{RelativePath: testFilePath}
chunks, err := handler.Handle(context.Background(), file)
assert.NoError(t, err)
assert.Equal(t, 1, len(chunks))
})
}
func TestModctlBackend(t *testing.T) {
handler := &Handler{
blobs: []backend.Blob{
{
Config: backend.BlobConfig{
MediaType: "application/vnd.cnai.model.weight.v1.tar",
Digest: "sha256:mockdigest",
Size: "1024",
ChunkSize: "64MiB",
},
},
},
}
bkd, err := handler.Backend(context.Background())
assert.NoError(t, err)
assert.Equal(t, "v1", bkd.Version)
assert.Equal(t, "registry", bkd.Backends[0].Type)
assert.Len(t, bkd.Blobs, 1)
}
func TestConvertToBlobs(t *testing.T) {
manifestWithColon := &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
Digest: digest.Digest("sha256:abc123"),
MediaType: ModelWeightMediaType,
Size: 100,
},
},
}
actualBlobs1 := convertToBlobs(manifestWithColon)
assert.Equal(t, 1, len(actualBlobs1))
assert.Equal(t, ModelWeightMediaType, actualBlobs1[0].Config.MediaType)
assert.Equal(t, "abc123", actualBlobs1[0].Config.Digest)
manifestWithoutColon := &ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
Digest: digest.Digest("abc123"),
MediaType: ModelDatasetMediaType,
Size: 100,
},
},
}
actualBlobs2 := convertToBlobs(manifestWithoutColon)
assert.Equal(t, 1, len(actualBlobs2))
assert.Equal(t, ModelDatasetMediaType, actualBlobs2[0].Config.MediaType)
assert.Equal(t, "abc123", actualBlobs2[0].Config.Digest)
}
func TestExtractManifest(t *testing.T) {
handler := &Handler{
root: "/tmp/test",
}
tagPath := fmt.Sprintf(ManifestPath, handler.tag)
manifestPath := filepath.Join(handler.root, ReposPath, handler.registryHost, handler.namespace, handler.imageName, tagPath)
dir := filepath.Dir(manifestPath)
os.MkdirAll(dir, 0755)
maniFile, err := os.Create(manifestPath)
assert.NoError(t, err)
_, err = maniFile.WriteString("sha256:abc1234")
assert.NoError(t, err)
maniFile.Close()
defer os.RemoveAll(manifestPath)
t.Logf("manifest path: %s", manifestPath)
os.MkdirAll(filepath.Dir(manifestPath), 0755)
// The referenced blob file does not exist yet, so extractManifest should fail
_, err = handler.extractManifest()
assert.Error(t, err)
var m = ocispec.Manifest{
Config: ocispec.Descriptor{
MediaType: ModelWeightMediaType,
Digest: "sha256:abc1234",
Size: 10,
},
}
data, err := json.Marshal(m)
assert.NoError(t, err)
blobDir := "/tmp/test/content.v1/docker/registry/v2/blobs/sha256/ab/abc1234/"
os.MkdirAll(blobDir, 0755)
blobPath := blobDir + "data"
blobFile, err := os.Create(blobPath)
assert.NoError(t, err)
defer os.RemoveAll(blobPath)
io.Copy(blobFile, bytes.NewReader(data))
blobFile.Close()
mani, err := handler.extractManifest()
assert.NoError(t, err)
assert.Equal(t, mani.Config.Digest.String(), "sha256:abc1234")
}
func TestSetBlobsMap(t *testing.T) {
handler := &Handler{
root: "/tmp",
blobs: make([]backend.Blob, 0),
blobsMap: map[string]blobInfo{},
}
handler.blobs = append(handler.blobs, backend.Blob{
Config: backend.BlobConfig{
Digest: "sha256:abc1234",
},
})
handler.setBlobsMap()
assert.Equal(t, handler.blobsMap["sha256:abc1234"].blobDigest, "sha256:abc1234")
}
func TestSetWeightChunkSize(t *testing.T) {
setWeightChunkSize(0)
expectedDefault := "64MiB"
assert.Equal(t, expectedDefault, mediaTypeChunkSizeMap[ModelWeightMediaType], "Weight media type should be set to default value")
assert.Equal(t, expectedDefault, mediaTypeChunkSizeMap[ModelDatasetMediaType], "Dataset media type should be set to default value")
chunkSize := uint64(16 * 1024 * 1024)
setWeightChunkSize(chunkSize)
expectedNonDefault := humanize.IBytes(chunkSize)
expectedNonDefault = strings.ReplaceAll(expectedNonDefault, " ", "")
assert.Equal(t, expectedNonDefault, mediaTypeChunkSizeMap[ModelWeightMediaType], "Weight media type should match the specified chunk size")
assert.Equal(t, expectedNonDefault, mediaTypeChunkSizeMap[ModelDatasetMediaType], "Dataset media type should match the specified chunk size")
}
func TestNewHandler(t *testing.T) {
// handler := &Handler{}
t.Run("Run extract manifest failed", func(t *testing.T) {
_, err := NewHandler(Option{})
assert.Error(t, err)
})
t.Run("Run Normal", func(t *testing.T) {
initHandlerPatches := gomonkey.ApplyFunc(initHandler, func(*Handler) error {
return nil
})
defer initHandlerPatches.Reset()
handler, err := NewHandler(Option{})
assert.NoError(t, err)
assert.NotNil(t, handler)
})
}
func TestInitHandler(t *testing.T) {
t.Run("Run initHandler failed", func(t *testing.T) {
handler := &Handler{}
extractManifestPatches := gomonkey.ApplyPrivateMethod(handler, "extractManifest", func() (*ocispec.Manifest, error) {
return &ocispec.Manifest{}, nil
})
defer extractManifestPatches.Reset()
err := initHandler(handler)
assert.NoError(t, err)
})
}

View File

@ -0,0 +1,249 @@
// Copyright 2025 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package modctl
import (
"bytes"
"context"
"encoding/json"
"io"
"os"
"strconv"
"sync"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
pkgPvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
type RemoteInterface interface {
Resolve(ctx context.Context) (*ocispec.Descriptor, error)
Pull(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadCloser, error)
ReadSeekCloser(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadSeekCloser, error)
WithHTTP()
MaybeWithHTTP(err error)
}
type RemoteHandler struct {
ctx context.Context
imageRef string
remoter RemoteInterface
manifest ocispec.Manifest
// converted from manifest.Layers, kept in the same order as manifest.Layers
blobs []backend.Blob
}
type FileCrcList struct {
Files []FileCrcInfo `json:"files"`
}
type FileCrcInfo struct {
FilePath string `json:"file_path"`
ChunkCrcs string `json:"chunk_crcs"`
}
const (
filePathKey = "org.cnai.model.filepath"
crcsKey = "org.cnai.nydus.crcs"
)
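// NewRemoteHandler creates a RemoteHandler for the given image reference and eagerly resolves and parses its manifest.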
func NewRemoteHandler(ctx context.Context, imageRef string, plainHTTP bool) (*RemoteHandler, error) {
remoter, err := pkgPvd.DefaultRemote(imageRef, true)
if err != nil {
return nil, errors.Wrap(err, "new remote failed")
}
if plainHTTP {
remoter.WithHTTP()
}
handler := &RemoteHandler{
ctx: ctx,
imageRef: imageRef,
remoter: remoter,
}
if err := initRemoteHandler(handler); err != nil {
return nil, errors.Wrap(err, "init remote handler failed")
}
return handler, nil
}
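// initRemoteHandler fetches the image manifest and converts its layers into backend blobs.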
func initRemoteHandler(handler *RemoteHandler) error {
if err := handler.setManifest(); err != nil {
return errors.Wrap(err, "set manifest failed")
}
handler.blobs = convertToBlobs(&handler.manifest)
return nil
}
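// Handle processes all manifest layers concurrently (up to 10 at a time, with up to 5 attempts per layer) and returns the backend description together with the per-file attributes.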
func (handler *RemoteHandler) Handle(ctx context.Context) (*backend.Backend, []backend.FileAttribute, error) {
var (
fileAttrs []backend.FileAttribute
mu sync.Mutex
eg *errgroup.Group
)
eg, ctx = errgroup.WithContext(ctx)
eg.SetLimit(10)
for idx, layer := range handler.manifest.Layers {
eg.Go(func() error {
var fa []backend.FileAttribute
err := utils.RetryWithAttempts(func() error {
_fa, err := handler.handle(ctx, layer, int32(idx))
fa = _fa
return err
}, 5)
if err != nil {
return err
}
mu.Lock()
fileAttrs = append(fileAttrs, fa...)
mu.Unlock()
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, nil, errors.Wrap(err, "wait for handle failed")
}
bkd, err := handler.backend()
if err != nil {
return nil, nil, errors.Wrap(err, "get backend failed")
}
return bkd, fileAttrs, nil
}
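// GetModelConfig pulls and decodes the model config blob referenced by the manifest.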
func (handler *RemoteHandler) GetModelConfig() (*modelspec.Model, error) {
var modelCfg modelspec.Model
rc, err := handler.remoter.Pull(handler.ctx, handler.manifest.Config, true)
if err != nil {
return nil, errors.Wrap(err, "pull model config failed")
}
defer rc.Close()
var buf bytes.Buffer
if _, err = io.Copy(&buf, rc); err != nil {
return nil, errors.Wrap(err, "copy model config failed")
}
if err = json.Unmarshal(buf.Bytes(), &modelCfg); err != nil {
return nil, errors.Wrap(err, "unmarshal model config failed")
}
return &modelCfg, nil
}
func (handler *RemoteHandler) GetLayers() []ocispec.Descriptor {
return handler.manifest.Layers
}
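// setManifest resolves and pulls the image manifest, falling back to plain HTTP when needed.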
func (handler *RemoteHandler) setManifest() error {
maniDesc, err := handler.remoter.Resolve(handler.ctx)
if utils.RetryWithHTTP(err) {
handler.remoter.MaybeWithHTTP(err)
maniDesc, err = handler.remoter.Resolve(handler.ctx)
}
if err != nil {
return errors.Wrap(err, "resolve image manifest failed")
}
rc, err := handler.remoter.Pull(handler.ctx, *maniDesc, true)
if err != nil {
return errors.Wrap(err, "pull manifest failed")
}
defer rc.Close()
var buf bytes.Buffer
io.Copy(&buf, rc)
var manifest ocispec.Manifest
if err = json.Unmarshal(buf.Bytes(), &manifest); err != nil {
return errors.Wrap(err, "unmarshal manifest failed")
}
handler.manifest = manifest
return nil
}
func (handler *RemoteHandler) backend() (*backend.Backend, error) {
bkd := backend.Backend{
Version: "v1",
}
bkd.Backends = []backend.Config{
{
Type: "registry",
},
}
bkd.Blobs = handler.blobs
return &bkd, nil
}
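// handle reads one layer as a tar stream and builds a FileAttribute for every file in it, attaching per-file chunk CRCs from the layer annotations when present.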
func (handler *RemoteHandler) handle(ctx context.Context, layer ocispec.Descriptor, index int32) ([]backend.FileAttribute, error) {
logrus.Debugf("handle layer: %s", layer.Digest.String())
chunkSize := getChunkSizeByMediaType(layer.MediaType)
rsc, err := handler.remoter.ReadSeekCloser(ctx, layer, true)
if err != nil {
return nil, errors.Wrap(err, "read seek closer failed")
}
defer rsc.Close()
files, err := readTarBlob(rsc)
if err != nil {
return nil, errors.Wrap(err, "read tar blob failed")
}
var fileCrcList = FileCrcList{}
var fileCrcMap = make(map[string]string)
if layer.Annotations != nil {
if c, ok := layer.Annotations[crcsKey]; ok {
if err := json.Unmarshal([]byte(c), &fileCrcList); err != nil {
return nil, errors.Wrap(err, "unmarshal crcs failed")
}
for _, f := range fileCrcList.Files {
fileCrcMap[f.FilePath] = f.ChunkCrcs
}
}
}
blobInfo := handler.blobs[index].Config
fileAttrs := make([]backend.FileAttribute, len(files))
hackFile := os.Getenv("HACK_FILE")
for idx, f := range files {
if hackFile != "" && f.name == hackFile {
hackFileWrapper(&f)
}
fileAttrs[idx] = backend.FileAttribute{
BlobID: blobInfo.Digest,
BlobIndex: uint32(index),
BlobSize: blobInfo.Size,
FileSize: f.size,
Chunk0CompressedOffset: f.offset,
ChunkSize: chunkSize,
RelativePath: f.name,
Type: "external",
Mode: f.mode,
}
if crcs, ok := fileCrcMap[f.name]; ok {
fileAttrs[idx].Crcs = crcs
}
}
return fileAttrs, nil
}
func hackFileWrapper(f *fileInfo) {
// HACK: override the mode of the matched file, defaulting to 0640.
hackMode := uint32(0640)
// HACK_MODE accepts an octal string, e.g. "640".
hackModeStr := os.Getenv("HACK_MODE")
if hackModeStr != "" {
modeValue, err := strconv.ParseUint(hackModeStr, 8, 32)
if err != nil {
logrus.Errorf("Invalid HACK_MODE value: %s, using default 0640", hackModeStr)
} else {
hackMode = uint32(modeValue)
}
}
f.mode = hackMode
logrus.Infof("hack file: %s mode: %o", f.name, f.mode)
}

View File

@ -0,0 +1,256 @@
package modctl
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"io"
"os"
"testing"
modelspec "github.com/CloudNativeAI/model-spec/specs-go/v1"
"github.com/agiledragon/gomonkey/v2"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type MockRemote struct {
ResolveFunc func(ctx context.Context) (*ocispec.Descriptor, error)
PullFunc func(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadCloser, error)
ReadSeekCloserFunc func(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadSeekCloser, error)
WithHTTPFunc func()
MaybeWithHTTPFunc func(err error)
}
func (m *MockRemote) Resolve(ctx context.Context) (*ocispec.Descriptor, error) {
return m.ResolveFunc(ctx)
}
func (m *MockRemote) Pull(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadCloser, error) {
return m.PullFunc(ctx, desc, plainHTTP)
}
func (m *MockRemote) ReadSeekCloser(ctx context.Context, desc ocispec.Descriptor, plainHTTP bool) (io.ReadSeekCloser, error) {
return m.ReadSeekCloserFunc(ctx, desc, plainHTTP)
}
func (m *MockRemote) WithHTTP() {
m.WithHTTPFunc()
}
func (m *MockRemote) MaybeWithHTTP(err error) {
m.MaybeWithHTTPFunc(err)
}
type readSeekCloser struct {
*bytes.Reader
}
func (r *readSeekCloser) Close() error {
return nil
}
func TestRemoteHandler_Handle(t *testing.T) {
mockRemote := &MockRemote{
ResolveFunc: func(context.Context) (*ocispec.Descriptor, error) {
return &ocispec.Descriptor{}, nil
},
PullFunc: func(context.Context, ocispec.Descriptor, bool) (io.ReadCloser, error) {
return io.NopCloser(bytes.NewReader([]byte("{}"))), nil
},
ReadSeekCloserFunc: func(context.Context, ocispec.Descriptor, bool) (io.ReadSeekCloser, error) {
// prepare tar
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
files := []struct {
name string
size int64
}{
{"file1.txt", 10},
{"file2.txt", 20},
{"file3.txt", 30},
}
for _, file := range files {
header := &tar.Header{
Name: file.name,
Size: file.size,
}
if err := tw.WriteHeader(header); err != nil {
t.Fatalf("Failed to write tar header: %v", err)
}
if _, err := tw.Write(make([]byte, file.size)); err != nil {
t.Fatalf("Failed to write tar content: %v", err)
}
}
tw.Close()
reader := bytes.NewReader(buf.Bytes())
return &readSeekCloser{reader}, nil
},
WithHTTPFunc: func() {},
MaybeWithHTTPFunc: func(error) {},
}
fileCrcInfo := &FileCrcInfo{
ChunkCrcs: "0x1234,0x5678",
FilePath: "file1.txt",
}
fileCrcList := &FileCrcList{
Files: []FileCrcInfo{
*fileCrcInfo,
},
}
crcs, err := json.Marshal(fileCrcList)
require.NoError(t, err)
annotations := map[string]string{
filePathKey: "file1.txt",
crcsKey: string(crcs),
}
handler := &RemoteHandler{
ctx: context.Background(),
imageRef: "test-image",
remoter: mockRemote,
manifest: ocispec.Manifest{
Layers: []ocispec.Descriptor{
{
MediaType: "test-media-type",
Digest: "test-digest",
Annotations: annotations,
},
},
},
blobs: []backend.Blob{
{
Config: backend.BlobConfig{
Digest: "test-digest",
Size: "100",
},
},
},
}
backend, fileAttrs, err := handler.Handle(context.Background())
assert.NoError(t, err)
assert.NotNil(t, backend)
assert.NotEmpty(t, fileAttrs)
assert.Equal(t, 3, len(fileAttrs))
assert.Equal(t, fileCrcInfo.ChunkCrcs, fileAttrs[0].Crcs)
assert.Equal(t, "", fileAttrs[1].Crcs)
handler.manifest.Layers[0].Annotations = map[string]string{
filePathKey: "file1.txt",
crcsKey: "0x1234,0x5678",
}
_, _, err = handler.Handle(context.Background())
assert.Error(t, err)
}
func TestGetModelConfig(t *testing.T) {
mockRemote := &MockRemote{
ResolveFunc: func(context.Context) (*ocispec.Descriptor, error) {
return &ocispec.Descriptor{}, nil
},
PullFunc: func(_ context.Context, desc ocispec.Descriptor, _ bool) (io.ReadCloser, error) {
desc = ocispec.Descriptor{
MediaType: modelspec.MediaTypeModelConfig,
Size: desc.Size,
}
data, err := json.Marshal(desc)
assert.Nil(t, err)
return io.NopCloser(bytes.NewReader(data)), nil
},
}
handler := &RemoteHandler{
ctx: context.Background(),
imageRef: "test-image",
remoter: mockRemote,
}
modelConfig, err := handler.GetModelConfig()
assert.NoError(t, err)
assert.NotNil(t, modelConfig)
}
func TestSetManifest(t *testing.T) {
mockRemote := &MockRemote{
ResolveFunc: func(context.Context) (*ocispec.Descriptor, error) {
return &ocispec.Descriptor{}, nil
},
PullFunc: func(context.Context, ocispec.Descriptor, bool) (io.ReadCloser, error) {
mani := ocispec.Manifest{
MediaType: ocispec.MediaTypeImageManifest,
}
data, err := json.Marshal(mani)
assert.Nil(t, err)
return io.NopCloser(bytes.NewReader(data)), nil
},
}
handler := &RemoteHandler{
ctx: context.Background(),
imageRef: "test-image",
remoter: mockRemote,
}
err := handler.setManifest()
assert.Nil(t, err)
}
func TestBackend(t *testing.T) {
handler := &RemoteHandler{
manifest: ocispec.Manifest{},
blobs: []backend.Blob{
{
Config: backend.BlobConfig{
Digest: "test-digest",
Size: "100",
},
},
},
}
backend, err := handler.backend()
assert.NoError(t, err)
assert.NotNil(t, backend)
assert.Equal(t, "v1", backend.Version)
assert.Equal(t, "registry", backend.Backends[0].Type)
}
func TestNewRemoteHandler(t *testing.T) {
var remoter = remote.Remote{}
defaultRemotePatches := gomonkey.ApplyFunc(provider.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return &remoter, nil
})
defer defaultRemotePatches.Reset()
initRemoteHandlerPatches := gomonkey.ApplyFunc(initRemoteHandler, func(*RemoteHandler) error {
return nil
})
defer initRemoteHandlerPatches.Reset()
remoteHandler, err := NewRemoteHandler(context.Background(), "test", false)
assert.Nil(t, err)
assert.NotNil(t, remoteHandler)
}
func TestInitRemoteHandlerError(t *testing.T) {
handler := &RemoteHandler{}
setManifestPatches := gomonkey.ApplyPrivateMethod(handler, "setManifest", func(*RemoteHandler) error {
return nil
})
defer setManifestPatches.Reset()
err := initRemoteHandler(handler)
assert.NoError(t, err)
}
func TestHackFileWrapper(t *testing.T) {
f := &fileInfo{}
os.Setenv("HACK_MODE", "0640")
hackFileWrapper(f)
assert.Equal(t, uint32(0640), f.mode)
}

View File

@ -0,0 +1,87 @@
package optimizer
import (
"context"
"encoding/json"
"os"
"os/exec"
"strings"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var logger = logrus.WithField("module", "optimizer")
func isSignalKilled(err error) bool {
return strings.Contains(err.Error(), "signal: killed")
}
type BuildOption struct {
BuilderPath string
PrefetchFilesPath string
BootstrapPath string
BlobDir string
OutputBootstrapPath string
OutputJSONPath string
Timeout *time.Duration
}
type outputJSON struct {
Blobs []string `json:"blobs"`
}
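// Build runs the builder's optimize subcommand and returns the ID of the generated prefetch blob parsed from the output JSON.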
func Build(option BuildOption) (string, error) {
outputJSONPath := option.OutputJSONPath
args := []string{
"optimize",
"--log-level",
"warn",
"--prefetch-files",
option.PrefetchFilesPath,
"--bootstrap",
option.BootstrapPath,
"--blob-dir",
option.BlobDir,
"--output-bootstrap",
option.OutputBootstrapPath,
"--output-json",
outputJSONPath,
}
ctx := context.Background()
var cancel context.CancelFunc
if option.Timeout != nil {
ctx, cancel = context.WithTimeout(ctx, *option.Timeout)
defer cancel()
}
logrus.Debugf("\tCommand: %s %s", option.BuilderPath, strings.Join(args, " "))
cmd := exec.CommandContext(ctx, option.BuilderPath, args...)
cmd.Stdout = logger.Writer()
cmd.Stderr = logger.Writer()
if err := cmd.Run(); err != nil {
if isSignalKilled(err) && option.Timeout != nil {
logrus.WithError(err).Errorf("fail to run %v %+v, possibly due to timeout %v", option.BuilderPath, args, *option.Timeout)
} else {
logrus.WithError(err).Errorf("fail to run %v %+v", option.BuilderPath, args)
}
return "", errors.Wrap(err, "run merge command")
}
outputBytes, err := os.ReadFile(outputJSONPath)
if err != nil {
return "", errors.Wrapf(err, "read file %s", outputJSONPath)
}
var output outputJSON
err = json.Unmarshal(outputBytes, &output)
if err != nil {
return "", errors.Wrapf(err, "unmarshal output json file %s", outputJSONPath)
}
if len(output.Blobs) == 0 {
return "", errors.Errorf("no blobs in output json file %s", outputJSONPath)
}
blobID := output.Blobs[len(output.Blobs)-1]
logrus.Infof("build success for prefetch blob : %s", blobID)
return blobID, nil
}

View File

@ -0,0 +1,537 @@
// Copyright 2024 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package optimizer
import (
"archive/tar"
"bytes"
"compress/gzip"
"context"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"runtime"
"time"
"github.com/goharbor/acceleration-service/pkg/platformutil"
"github.com/containerd/containerd/content/local"
"github.com/containerd/containerd/reference/docker"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/namespaces"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/committer"
converterpvd "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/converter/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
accerr "github.com/goharbor/acceleration-service/pkg/errdefs"
accremote "github.com/goharbor/acceleration-service/pkg/remote"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
EntryBootstrap = "image.boot"
EntryPrefetchFiles = "prefetch.files"
)
type Opt struct {
WorkDir string
NydusImagePath string
Source string
Target string
SourceInsecure bool
TargetInsecure bool
OptimizePolicy string
PrefetchFilesPath string
AllPlatforms bool
Platforms string
PushChunkSize int64
}
// BuildInfo holds the intermediate information generated during the build.
type BuildInfo struct {
SourceImage parser.Image
BuildDir string
BlobDir string
PrefetchBlobID string
NewBootstrapPath string
}
type File struct {
Name string
Reader io.Reader
Size int64
}
type bootstrapInfo struct {
bootstrapDesc ocispec.Descriptor
bootstrapDiffID digest.Digest
}
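// hosts returns a HostFunc that provides docker config credentials and the per-reference insecure flag for the source and target images.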
func hosts(opt Opt) accremote.HostFunc {
maps := map[string]bool{
opt.Source: opt.SourceInsecure,
opt.Target: opt.TargetInsecure,
}
return func(ref string) (accremote.CredentialFunc, bool, error) {
return accremote.NewDockerConfigCredFunc(), maps[ref], nil
}
}
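// remoter validates the target reference and builds a remote client for it.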
func remoter(opt Opt) (*remote.Remote, error) {
targetRef, err := committer.ValidateRef(opt.Target)
if err != nil {
return nil, errors.Wrap(err, "validate target reference")
}
remoter, err := provider.DefaultRemote(targetRef, opt.TargetInsecure)
if err != nil {
return nil, errors.Wrap(err, "create remote")
}
return remoter, nil
}
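// makeDesc marshals x and returns the bytes together with a descriptor derived from oldDesc with the digest and size updated.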
func makeDesc(x interface{}, oldDesc ocispec.Descriptor) ([]byte, *ocispec.Descriptor, error) {
data, err := json.MarshalIndent(x, "", " ")
if err != nil {
return nil, nil, errors.Wrap(err, "json marshal")
}
dgst := digest.SHA256.FromBytes(data)
newDesc := oldDesc
newDesc.Size = int64(len(data))
newDesc.Digest = dgst
return data, &newDesc, nil
}
// packToTar packs files into a .tar(.gz) stream and returns a reader for it.
//
// ported from https://github.com/containerd/nydus-snapshotter/blob/5f948e4498151b51c742d2ee0b3f7b96f86a26f7/pkg/converter/utils.go#L92
func packToTar(files []File, compress bool) io.ReadCloser {
dirHdr := &tar.Header{
Name: "image",
Mode: 0755,
Typeflag: tar.TypeDir,
}
pr, pw := io.Pipe()
go func() {
// Prepare targz writer
var tw *tar.Writer
var gw *gzip.Writer
var err error
if compress {
gw = gzip.NewWriter(pw)
tw = tar.NewWriter(gw)
} else {
tw = tar.NewWriter(pw)
}
defer func() {
err1 := tw.Close()
var err2 error
if gw != nil {
err2 = gw.Close()
}
var finalErr error
// Return the first error encountered to the other end and ignore others.
switch {
case err != nil:
finalErr = err
case err1 != nil:
finalErr = err1
case err2 != nil:
finalErr = err2
}
pw.CloseWithError(finalErr)
}()
// Write targz stream
if err = tw.WriteHeader(dirHdr); err != nil {
return
}
for _, file := range files {
hdr := tar.Header{
Name: filepath.Join("image", file.Name),
Mode: 0444,
Size: file.Size,
}
if err = tw.WriteHeader(&hdr); err != nil {
return
}
if _, err = io.Copy(tw, file.Reader); err != nil {
return
}
}
}()
return pr
}
func getOriginalBlobLayers(nydusImage parser.Image) []ocispec.Descriptor {
originalBlobLayers := []ocispec.Descriptor{}
for idx := range nydusImage.Manifest.Layers {
layer := nydusImage.Manifest.Layers[idx]
if layer.MediaType == utils.MediaTypeNydusBlob {
originalBlobLayers = append(originalBlobLayers, layer)
}
}
return originalBlobLayers
}
func fetchBlobs(ctx context.Context, opt Opt, buildDir string) error {
logrus.Infof("pulling source image")
start := time.Now()
platformMC, err := platformutil.ParsePlatforms(opt.AllPlatforms, opt.Platforms)
if err != nil {
return err
}
pvd, err := converterpvd.New(buildDir, hosts(opt), 200, "v1", platformMC, opt.PushChunkSize)
if err != nil {
return err
}
sourceNamed, err := docker.ParseDockerRef(opt.Source)
if err != nil {
return errors.Wrap(err, "parse source reference")
}
source := sourceNamed.String()
if err := pvd.Pull(ctx, source); err != nil {
if accerr.NeedsRetryWithHTTP(err) {
pvd.UsePlainHTTP()
if err := pvd.Pull(ctx, source); err != nil {
return errors.Wrap(err, "try to pull image")
}
} else {
return errors.Wrap(err, "pull source image")
}
}
logrus.Infof("pulled source image, elapsed: %s", time.Since(start))
return nil
}
// Optimize converts and pushes a new optimized nydus image.
func Optimize(ctx context.Context, opt Opt) error {
ctx = namespaces.WithNamespace(ctx, "nydusify")
sourceRemote, err := provider.DefaultRemote(opt.Source, opt.SourceInsecure)
if err != nil {
return errors.Wrap(err, "Init source image parser")
}
sourceParser, err := parser.New(sourceRemote, runtime.GOARCH)
if err != nil {
return errors.Wrap(err, "failed to create parser")
}
sourceParsed, err := sourceParser.Parse(ctx)
if err != nil {
return errors.Wrap(err, "parse source image")
}
sourceNydusImage := sourceParsed.NydusImage
if _, err := os.Stat(opt.WorkDir); err != nil {
if errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(opt.WorkDir, 0755); err != nil {
return errors.Wrap(err, "prepare work directory")
}
// Only clean up when the work directory did not exist before,
// otherwise user data may be deleted by mistake.
defer os.RemoveAll(opt.WorkDir)
} else {
return errors.Wrap(err, "stat work directory")
}
}
buildDir, err := os.MkdirTemp(opt.WorkDir, "nydusify-")
if err != nil {
return errors.Wrap(err, "create temp directory")
}
defer os.RemoveAll(buildDir)
if err := fetchBlobs(ctx, opt, buildDir); err != nil {
return errors.Wrap(err, "prepare nydus blobs")
}
originalBootstrap := filepath.Join(buildDir, "nydus_bootstrap")
bootstrapDesc := parser.FindNydusBootstrapDesc(&sourceNydusImage.Manifest)
if bootstrapDesc == nil {
return fmt.Errorf("not found Nydus bootstrap layer in manifest")
}
bootstrapReader, err := sourceParser.Remote.Pull(ctx, *bootstrapDesc, true)
if err != nil {
return errors.Wrap(err, "pull Nydus originalBootstrap layer")
}
defer bootstrapReader.Close()
if err := utils.UnpackFile(bootstrapReader, utils.BootstrapFileNameInLayer, originalBootstrap); err != nil {
return errors.Wrap(err, "unpack Nydus originalBootstrap layer")
}
compressAlgo := bootstrapDesc.Digest.Algorithm().String()
blobDir := filepath.Join(buildDir, "content", "blobs", compressAlgo)
outPutJSONPath := filepath.Join(buildDir, "output.json")
newBootstrapPath := filepath.Join(buildDir, "optimized_bootstrap")
builderOpt := BuildOption{
BuilderPath: opt.NydusImagePath,
PrefetchFilesPath: opt.PrefetchFilesPath,
BootstrapPath: originalBootstrap,
BlobDir: blobDir,
OutputBootstrapPath: newBootstrapPath,
OutputJSONPath: outPutJSONPath,
}
logrus.Infof("begin to build new prefetch blob and bootstrap")
start := time.Now()
prefetchBlobID, err := Build(builderOpt)
if err != nil {
return errors.Wrap(err, "optimize nydus image")
}
logrus.Infof("builded new prefetch blob and bootstrap, elapsed: %s", time.Since(start))
buildInfo := BuildInfo{
SourceImage: *sourceParsed.NydusImage,
BuildDir: buildDir,
BlobDir: blobDir,
PrefetchBlobID: prefetchBlobID,
NewBootstrapPath: newBootstrapPath,
}
if err := pushNewImage(ctx, opt, buildInfo); err != nil {
return errors.Wrap(err, "push new image")
}
return nil
}
// pushBlob pushes the generated prefetch blob to the target registry, retrying with plain HTTP when necessary.
func pushBlob(ctx context.Context, opt Opt, buildInfo BuildInfo) (*ocispec.Descriptor, error) {
blobDir := buildInfo.BlobDir
blobID := buildInfo.PrefetchBlobID
remoter, err := remoter(opt)
if err != nil {
return nil, errors.Wrap(err, "create remote")
}
blobRa, err := local.OpenReader(filepath.Join(blobDir, blobID))
if err != nil {
return nil, errors.Wrap(err, "open reader for upper blob")
}
blobDigest := digest.NewDigestFromEncoded(digest.SHA256, blobID)
blobDesc := ocispec.Descriptor{
Digest: blobDigest,
Size: blobRa.Size(),
MediaType: utils.MediaTypeNydusBlob,
Annotations: map[string]string{
utils.LayerAnnotationNydusBlob: "true",
},
}
if err := remoter.Push(ctx, blobDesc, true, io.NewSectionReader(blobRa, 0, blobRa.Size())); err != nil {
if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err)
if err := remoter.Push(ctx, blobDesc, true, io.NewSectionReader(blobRa, 0, blobRa.Size())); err != nil {
return nil, errors.Wrap(err, "push blob")
}
} else {
return nil, errors.Wrap(err, "push blob")
}
}
return &blobDesc, nil
}
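// pushNewBootstrap packs the optimized bootstrap and the prefetch file list into a tar.gz layer and pushes it, returning the layer descriptor and the diffID of the uncompressed tar.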
func pushNewBootstrap(ctx context.Context, opt Opt, buildInfo BuildInfo) (*bootstrapInfo, error) {
remoter, err := remoter(opt)
if err != nil {
return nil, errors.Wrap(err, "create remote")
}
bootstrapRa, err := local.OpenReader(buildInfo.NewBootstrapPath)
if err != nil {
return nil, errors.Wrap(err, "open reader for bootstrap")
}
prefetchfilesRa, err := local.OpenReader(opt.PrefetchFilesPath)
if err != nil {
return nil, errors.Wrap(err, "open reader for prefetch files")
}
files := []File{
{
Name: EntryBootstrap,
Reader: content.NewReader(bootstrapRa),
Size: bootstrapRa.Size(),
}, {
Name: EntryPrefetchFiles,
Reader: content.NewReader(prefetchfilesRa),
Size: prefetchfilesRa.Size(),
},
}
rc := packToTar(files, false)
defer rc.Close()
bootstrapTarPath := filepath.Join(buildInfo.BuildDir, "bootstrap.tar")
bootstrapTar, err := os.Create(bootstrapTarPath)
if err != nil {
return nil, errors.Wrap(err, "create bootstrap tar file")
}
defer bootstrapTar.Close()
tarDigester := digest.SHA256.Digester()
if _, err := io.Copy(io.MultiWriter(bootstrapTar, tarDigester.Hash()), rc); err != nil {
return nil, errors.Wrap(err, "get tar digest")
}
bootstrapDiffID := tarDigester.Digest()
bootstrapTarRa, err := os.Open(bootstrapTarPath)
if err != nil {
return nil, errors.Wrap(err, "open bootstrap tar file")
}
defer bootstrapTarRa.Close()
bootstrapTarGzPath := filepath.Join(buildInfo.BuildDir, "bootstrap.tar.gz")
bootstrapTarGz, err := os.Create(bootstrapTarGzPath)
if err != nil {
return nil, errors.Wrap(err, "create bootstrap tar.gz file")
}
defer bootstrapTarGz.Close()
gzDigester := digest.SHA256.Digester()
gzWriter := gzip.NewWriter(io.MultiWriter(bootstrapTarGz, gzDigester.Hash()))
if _, err := io.Copy(gzWriter, bootstrapTarRa); err != nil {
return nil, errors.Wrap(err, "compress bootstrap & prefetchfiles to tar.gz")
}
if err := gzWriter.Close(); err != nil {
return nil, errors.Wrap(err, "close gzip writer")
}
bootstrapTarGzRa, err := local.OpenReader(bootstrapTarGzPath)
if err != nil {
return nil, errors.Wrap(err, "open reader for upper blob")
}
defer bootstrapTarGzRa.Close()
oldBootstrapDesc := parser.FindNydusBootstrapDesc(&buildInfo.SourceImage.Manifest)
if oldBootstrapDesc == nil {
return nil, fmt.Errorf("not found originial Nydus bootstrap layer in manifest")
}
annotations := oldBootstrapDesc.Annotations
annotations[utils.LayerAnnotationNyudsPrefetchBlob] = buildInfo.PrefetchBlobID
// push bootstrap
bootstrapDesc := ocispec.Descriptor{
Digest: gzDigester.Digest(),
Size: bootstrapTarGzRa.Size(),
MediaType: ocispec.MediaTypeImageLayerGzip,
Annotations: annotations,
}
bootstrapRc, err := os.Open(bootstrapTarGzPath)
if err != nil {
return nil, errors.Wrapf(err, "open bootstrap %s", bootstrapTarGzPath)
}
defer bootstrapRc.Close()
if err := remoter.Push(ctx, bootstrapDesc, true, bootstrapRc); err != nil {
return nil, errors.Wrap(err, "push bootstrap layer")
}
return &bootstrapInfo{
bootstrapDesc: bootstrapDesc,
bootstrapDiffID: bootstrapDiffID,
}, nil
}
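// pushConfig rebuilds the image config diffIDs (original nydus blobs, prefetch blob, bootstrap tar) and pushes the new config blob.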
func pushConfig(ctx context.Context, opt Opt, buildInfo BuildInfo, bootstrapDiffID digest.Digest) (*ocispec.Descriptor, error) {
nydusImage := buildInfo.SourceImage
remoter, err := remoter(opt)
if err != nil {
return nil, errors.Wrap(err, "create remote")
}
config := nydusImage.Config
originalBlobLayers := getOriginalBlobLayers(nydusImage)
config.RootFS.DiffIDs = []digest.Digest{}
for idx := range originalBlobLayers {
config.RootFS.DiffIDs = append(config.RootFS.DiffIDs, originalBlobLayers[idx].Digest)
}
prefetchBlobDigest := digest.NewDigestFromEncoded(digest.SHA256, buildInfo.PrefetchBlobID)
config.RootFS.DiffIDs = append(config.RootFS.DiffIDs, prefetchBlobDigest)
// Note: the bootstrap diffID is the digest of the uncompressed tar.
config.RootFS.DiffIDs = append(config.RootFS.DiffIDs, bootstrapDiffID)
configBytes, configDesc, err := makeDesc(config, nydusImage.Manifest.Config)
if err != nil {
return nil, errors.Wrap(err, "make config desc")
}
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
if utils.RetryWithHTTP(err) {
remoter.MaybeWithHTTP(err)
if err := remoter.Push(ctx, *configDesc, true, bytes.NewReader(configBytes)); err != nil {
return nil, errors.Wrap(err, "push image config")
}
} else {
return nil, errors.Wrap(err, "push image config")
}
}
return configDesc, nil
}
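// pushNewImage assembles and pushes the new manifest made of the original blob layers, the prefetch blob and the new bootstrap layer.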
func pushNewImage(ctx context.Context, opt Opt, buildInfo BuildInfo) error {
logrus.Infof("pushing new image")
start := time.Now()
remoter, err := remoter(opt)
if err != nil {
return errors.Wrap(err, "create remote")
}
nydusImage := buildInfo.SourceImage
prefetchBlob, err := pushBlob(ctx, opt, buildInfo)
if err != nil {
return errors.Wrap(err, "create and push hot blob desc")
}
bootstrapInfo, err := pushNewBootstrap(ctx, opt, buildInfo)
if err != nil {
return errors.Wrap(err, "create and push bootstrap desc")
}
configDesc, err := pushConfig(ctx, opt, buildInfo, bootstrapInfo.bootstrapDiffID)
if err != nil {
return errors.Wrap(err, "create and push bootstrap desc")
}
// push image manifest
layers := getOriginalBlobLayers(nydusImage)
layers = append(layers, *prefetchBlob)
layers = append(layers, bootstrapInfo.bootstrapDesc)
nydusImage.Manifest.Config = *configDesc
nydusImage.Manifest.Layers = layers
manifestBytes, manifestDesc, err := makeDesc(nydusImage.Manifest, nydusImage.Desc)
if err != nil {
return errors.Wrap(err, "make config desc")
}
if err := remoter.Push(ctx, *manifestDesc, false, bytes.NewReader(manifestBytes)); err != nil {
return errors.Wrap(err, "push image manifest")
}
logrus.Infof("pushed new image, elapsed: %s", time.Since(start))
return nil
}

View File

@ -11,6 +11,7 @@ import (
"path/filepath" "path/filepath"
"testing" "testing"
"github.com/containerd/containerd/remotes"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/backend" "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/backend"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
@ -44,6 +45,10 @@ func (m *mockBackend) Reader(_ string) (io.ReadCloser, error) {
panic("not implemented") panic("not implemented")
} }
func (m *mockBackend) RangeReader(_ string) (remotes.RangeReadCloser, error) {
panic("not implemented")
}
func (m *mockBackend) Size(_ string) (int64, error) { func (m *mockBackend) Size(_ string) (int64, error) {
panic("not implemented") panic("not implemented")
} }

View File

@ -42,7 +42,8 @@ type Image struct {
// Parsed presents OCI and Nydus image manifest.
// Nydus image conversion only works on top of an existed oci image whose platform is linux/amd64
type Parsed struct {
+ Remote *remote.Remote
Index *ocispec.Index
// The base image from which to generate nydus image.
OCIImage *Image
NydusImage *Image
@ -183,9 +184,9 @@ func (parser *Parser) matchImagePlatform(desc *ocispec.Descriptor) bool {
// Parse parses Nydus image reference into Parsed object.
func (parser *Parser) Parse(ctx context.Context) (*Parsed, error) {
- logrus.Infof("Parsing image %s", parser.Remote.Ref)
- parsed := Parsed{}
+ parsed := Parsed{
+ Remote: parser.Remote,
+ }
imageDesc, err := parser.Remote.Resolve(ctx)
if err != nil {

View File

@ -18,6 +18,7 @@ import (
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
"time"
"github.com/containerd/containerd/mount" "github.com/containerd/containerd/mount"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
@ -113,7 +114,7 @@ func (sl *defaultSourceLayer) Mount(ctx context.Context) ([]mount.Mount, func()
} }
return nil return nil
}); err != nil { }, 3, 5*time.Second); err != nil {
return nil, nil, err return nil, nil, err
} }

View File

@ -0,0 +1,125 @@
// Copyright 2025 Nydus Developers. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0
package remote
import (
"context"
"io"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/remotes"
"github.com/distribution/reference"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// FromFetcher creates a content.Provider based on remotes.Fetcher
func FromFetcher(f remotes.Fetcher) content.Provider {
return &fetchedProvider{
f: f,
}
}
type fetchedProvider struct {
f remotes.Fetcher
}
func (p *fetchedProvider) ReaderAt(ctx context.Context, desc ocispec.Descriptor) (content.ReaderAt, error) {
rc, err := p.f.Fetch(ctx, desc)
if err != nil {
return nil, err
}
return &readerAt{Reader: rc, Closer: rc, size: desc.Size}, nil
}
// readerAt implements content.ReaderAt interface for reading content with random access
type readerAt struct {
io.Reader
io.Closer
size int64
offset int64
}
// ReadAt implements io.ReaderAt interface for random access reading
// It handles seeking to the correct offset and reading the requested data
func (ra *readerAt) ReadAt(p []byte, off int64) (int, error) {
if ra.offset != off {
if seeker, ok := ra.Reader.(io.Seeker); ok {
if _, err := seeker.Seek(off, io.SeekStart); err != nil {
return 0, err
}
ra.offset = off
} else {
return 0, errors.New("reader does not support seeking")
}
}
var totalN int
for len(p) > 0 {
n, err := ra.Reader.Read(p)
if err == io.EOF && n == len(p) {
err = nil
}
ra.offset += int64(n)
totalN += n
p = p[n:]
if err != nil {
return totalN, err
}
}
return totalN, nil
}
// Size returns the total size of the content being read
func (ra *readerAt) Size() int64 {
return ra.size
}
// ReaderAt returns a content.ReaderAt for reading remote blobs
// It creates a new resolver instance for each request to ensure thread safety
func (remote *Remote) ReaderAt(ctx context.Context, desc ocispec.Descriptor, byDigest bool) (content.ReaderAt, error) {
var ref string
if byDigest {
ref = remote.parsed.Name()
} else {
ref = reference.TagNameOnly(remote.parsed).String()
}
// Create a new resolver instance for the request
fetcher, err := remote.resolverFunc(remote.withHTTP).Fetcher(ctx, ref)
if err != nil {
return nil, err
}
// Create Provider using FromFetcher
provider := FromFetcher(fetcher)
return provider.ReaderAt(ctx, desc)
}
func (remote *Remote) ReadSeekCloser(ctx context.Context, desc ocispec.Descriptor, byDigest bool) (io.ReadSeekCloser, error) {
var ref string
if byDigest {
ref = remote.parsed.Name()
} else {
ref = reference.TagNameOnly(remote.parsed).String()
}
// Create a new resolver instance for the request
fetcher, err := remote.resolverFunc(remote.withHTTP).Fetcher(ctx, ref)
if err != nil {
return nil, err
}
rc, err := fetcher.Fetch(ctx, desc)
if err != nil {
return nil, err
}
rsc, ok := rc.(io.ReadSeekCloser)
if !ok {
return nil, errors.New("fetcher does not support ReadSeekCloser")
}
return rsc, nil
}

View File

@ -0,0 +1,127 @@
package remote
import (
"bytes"
"context"
"io"
"testing"
"github.com/containerd/containerd/remotes"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
)
type MockNamed struct {
mockName string
mockString string
}
func (m *MockNamed) Name() string {
return m.mockName
}
func (m *MockNamed) String() string {
return m.mockString
}
// MockResolver implements the Resolver interface for testing purposes.
type MockResolver struct {
ResolveFunc func(ctx context.Context, ref string) (string, ocispec.Descriptor, error)
FetcherFunc func(ctx context.Context, ref string) (remotes.Fetcher, error)
PusherFunc func(ctx context.Context, ref string) (remotes.Pusher, error)
PusherInChunkedFunc func(ctx context.Context, ref string) (remotes.PusherInChunked, error)
}
// Resolve implements the Resolver.Resolve method.
func (m *MockResolver) Resolve(ctx context.Context, ref string) (string, ocispec.Descriptor, error) {
if m.ResolveFunc != nil {
return m.ResolveFunc(ctx, ref)
}
return "", ocispec.Descriptor{}, errors.New("ResolveFunc not implemented")
}
// Fetcher implements the Resolver.Fetcher method.
func (m *MockResolver) Fetcher(ctx context.Context, ref string) (remotes.Fetcher, error) {
if m.FetcherFunc != nil {
return m.FetcherFunc(ctx, ref)
}
return nil, errors.New("FetcherFunc not implemented")
}
// Pusher implements the Resolver.Pusher method.
func (m *MockResolver) Pusher(ctx context.Context, ref string) (remotes.Pusher, error) {
if m.PusherFunc != nil {
return m.PusherFunc(ctx, ref)
}
return nil, errors.New("PusherFunc not implemented")
}
// PusherInChunked implements the Resolver.PusherInChunked method.
func (m *MockResolver) PusherInChunked(ctx context.Context, ref string) (remotes.PusherInChunked, error) {
if m.PusherInChunkedFunc != nil {
return m.PusherInChunkedFunc(ctx, ref)
}
return nil, errors.New("PusherInChunkedFunc not implemented")
}
type mockReadSeekCloser struct {
buf bytes.Buffer
}
func (m *mockReadSeekCloser) Read(p []byte) (n int, err error) {
return m.buf.Read(p)
}
func (m *mockReadSeekCloser) Seek(int64, int) (int64, error) {
return 0, nil
}
func (m *mockReadSeekCloser) Close() error {
return nil
}
func TestReadSeekCloser(t *testing.T) {
remote := &Remote{
parsed: &MockNamed{
mockName: "docker.io/library/busybox:latest",
mockString: "docker.io/library/busybox:latest",
},
}
t.Run("Run not ReadSeekCloser", func(t *testing.T) {
remote.resolverFunc = func(bool) remotes.Resolver {
return &MockResolver{
FetcherFunc: func(context.Context, string) (remotes.Fetcher, error) {
var buf bytes.Buffer
return remotes.FetcherFunc(func(context.Context, ocispec.Descriptor) (io.ReadCloser, error) {
// return a ReadCloser that does not implement io.Seeker
return &readerAt{
Reader: &buf,
Closer: io.NopCloser(&buf),
}, nil
}), nil
},
}
}
_, err := remote.ReadSeekCloser(context.Background(), ocispec.Descriptor{}, false)
assert.Error(t, err)
})
t.Run("Run Normal", func(t *testing.T) {
// mock io.ReadSeekCloser
remote.resolverFunc = func(bool) remotes.Resolver {
return &MockResolver{
FetcherFunc: func(context.Context, string) (remotes.Fetcher, error) {
var buf bytes.Buffer
return remotes.FetcherFunc(func(context.Context, ocispec.Descriptor) (io.ReadCloser, error) {
return &mockReadSeekCloeser{
buf: buf,
}, nil
}), nil
},
}
}
rsc, err := remote.ReadSeekCloser(context.Background(), ocispec.Descriptor{}, false)
assert.NoError(t, err)
assert.NotNil(t, rsc)
})
}

View File

@ -31,7 +31,7 @@ type Remote struct {
resolverFunc func(insecure bool) remotes.Resolver
pushed sync.Map
- retryWithHTTP bool
+ withHTTP bool
}
// New creates remote instance from docker remote resolver
@ -55,13 +55,16 @@ func (remote *Remote) MaybeWithHTTP(err error) {
// If the error message includes the current registry host string, it
// implies that we can retry the request with plain HTTP.
if strings.Contains(err.Error(), fmt.Sprintf("/%s/", host)) {
- remote.retryWithHTTP = true
+ remote.withHTTP = true
}
}
}
+ func (remote *Remote) WithHTTP() {
+ remote.withHTTP = true
+ }
func (remote *Remote) IsWithHTTP() bool {
- return remote.retryWithHTTP
+ return remote.withHTTP
}
// Push pushes blob to registry
@ -83,7 +86,7 @@ func (remote *Remote) Push(ctx context.Context, desc ocispec.Descriptor, byDiges
}
// Create a new resolver instance for the request
- pusher, err := remote.resolverFunc(remote.retryWithHTTP).Pusher(ctx, ref)
+ pusher, err := remote.resolverFunc(remote.withHTTP).Pusher(ctx, ref)
if err != nil {
return err
}
@ -110,7 +113,7 @@ func (remote *Remote) Pull(ctx context.Context, desc ocispec.Descriptor, byDiges
}
// Create a new resolver instance for the request
- puller, err := remote.resolverFunc(remote.retryWithHTTP).Fetcher(ctx, ref)
+ puller, err := remote.resolverFunc(remote.withHTTP).Fetcher(ctx, ref)
if err != nil {
return nil, err
}
@ -128,7 +131,7 @@ func (remote *Remote) Resolve(ctx context.Context) (*ocispec.Descriptor, error)
ref := reference.TagNameOnly(remote.parsed).String()
// Create a new resolver instance for the request
- _, desc, err := remote.resolverFunc(remote.retryWithHTTP).Resolve(ctx, ref)
+ _, desc, err := remote.resolverFunc(remote.withHTTP).Resolve(ctx, ref)
if err != nil {
return nil, err
}

View File

@ -0,0 +1,96 @@
package backend
import (
"context"
)
type Backend struct {
Version string `json:"version"`
Backends []Config `json:"backends"`
Blobs []Blob `json:"blobs"`
}
type Config struct {
Type string `json:"type"`
Config map[string]interface{} `json:"config,omitempty"`
}
type Blob struct {
Backend int `json:"backend"`
Config BlobConfig `json:"config"`
}
type BlobConfig struct {
MediaType string `json:"media_type"`
Digest string `json:"digest"`
Size string `json:"size"`
ChunkSize string `json:"chunk_size"`
}
type Result struct {
Chunks []Chunk
Files []FileAttribute
Backend Backend
}
type FileAttribute struct {
RelativePath string
BlobIndex uint32
BlobID string
BlobSize string
FileSize uint64
ChunkSize string
Chunk0CompressedOffset uint64
Type string
Mode uint32
Crcs string
}
type File struct {
RelativePath string
Size int64
}
// Handler is the interface for backend handler.
type Handler interface {
// Backend returns the backend information.
Backend(ctx context.Context) (*Backend, error)
// Handle handles the file and returns the object information.
Handle(ctx context.Context, file File) ([]Chunk, error)
}
type RemoteHanlder interface {
// Handle handles the file and returns the object information.
Handle(ctx context.Context) (*Backend, []FileAttribute, error)
}
type Chunk interface {
ObjectID() uint32
ObjectContent() interface{}
ObjectOffset() uint64
FilePath() string
LimitChunkSize() string
BlobDigest() string
BlobSize() string
}
// SplitObjectOffsets splits the total size into object offsets
// with the specified chunk size.
func SplitObjectOffsets(totalSize, chunkSize int64) []uint64 {
objectOffsets := []uint64{}
if chunkSize <= 0 {
return objectOffsets
}
chunkN := totalSize / chunkSize
for i := int64(0); i < chunkN; i++ {
objectOffsets = append(objectOffsets, uint64(i*chunkSize))
}
if totalSize%chunkSize > 0 {
objectOffsets = append(objectOffsets, uint64(chunkN*chunkSize))
}
return objectOffsets
}

View File

@ -0,0 +1,59 @@
package backend
import (
"fmt"
"reflect"
"testing"
"unsafe"
"github.com/stretchr/testify/require"
)
func TestLayout(t *testing.T) {
require.Equal(t, fmt.Sprintf("%d", 4096), fmt.Sprintf("%d", unsafe.Sizeof(Header{})))
require.Equal(t, fmt.Sprintf("%d", 256), fmt.Sprintf("%d", unsafe.Sizeof(ChunkMeta{})))
require.Equal(t, fmt.Sprintf("%d", 256), fmt.Sprintf("%d", unsafe.Sizeof(ObjectMeta{})))
}
func TestSplitObjectOffsets(t *testing.T) {
tests := []struct {
name string
totalSize int64
chunkSize int64
expected []uint64
}{
{
name: "Chunk size is less than or equal to zero",
totalSize: 100,
chunkSize: 0,
expected: []uint64{},
},
{
name: "Total size is zero",
totalSize: 0,
chunkSize: 10,
expected: []uint64{},
},
{
name: "Total size is divisible by chunk size",
totalSize: 100,
chunkSize: 10,
expected: []uint64{0, 10, 20, 30, 40, 50, 60, 70, 80, 90},
},
{
name: "Total size is not divisible by chunk size",
totalSize: 105,
chunkSize: 10,
expected: []uint64{0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := SplitObjectOffsets(tt.totalSize, tt.chunkSize)
if !reflect.DeepEqual(result, tt.expected) {
t.Errorf("SplitObjectOffsets(%d, %d) = %v; want %v", tt.totalSize, tt.chunkSize, result, tt.expected)
}
})
}
}

View File

@ -0,0 +1,55 @@
package backend
const MetaMagic uint32 = 0x0AF5_E1E2
const MetaVersion uint32 = 0x0000_0001
// Layout
//
// header: magic | version | chunk_meta_offset | object_meta_offset
// chunks: chunk_meta | chunk | chunk | ...
// objects: object_meta | [object_offsets] | object | object | ...
// 4096 bytes
type Header struct {
Magic uint32
Version uint32
ChunkMetaOffset uint32
ObjectMetaOffset uint32
Reserved2 [4080]byte
}
// 256 bytes
type ChunkMeta struct {
EntryCount uint32
EntrySize uint32
Reserved [248]byte
}
// 256 bytes
type ObjectMeta struct {
EntryCount uint32
// = 0 means indeterminate entry size, and len(object_offsets) > 0.
// > 0 means fixed entry size, and len(object_offsets) == 0.
EntrySize uint32
Reserved [248]byte
}
// 8 bytes
type ChunkOndisk struct {
ObjectIndex uint32
Reserved [4]byte
ObjectOffset uint64
}
// 4 bytes
type ObjectOffset uint32
// Size depends on different external backend implementations
type ObjectOndisk struct {
EntrySize uint32
EncodedData []byte
}

View File

@ -0,0 +1,127 @@
package backend
import (
"context"
"io/fs"
"os"
"path/filepath"
"github.com/pkg/errors"
)
type Walker struct {
}
func NewWalker() *Walker {
return &Walker{}
}
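// bfsWalk visits regular files breadth-first: all files directly under a directory are handled before descending into its subdirectories.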
func bfsWalk(path string, fn func(string, fs.FileInfo) error) error {
info, err := os.Lstat(path)
if err != nil {
return err
}
if info.IsDir() {
files, err := os.ReadDir(path)
if err != nil {
return err
}
dirs := []string{}
for _, file := range files {
filePath := filepath.Join(path, file.Name())
if file.Type().IsRegular() {
info, err := file.Info()
if err != nil {
return err
}
if err := fn(filePath, info); err != nil {
return err
}
}
if file.IsDir() {
dirs = append(dirs, filePath)
}
}
for _, dir := range dirs {
if err := bfsWalk(dir, fn); err != nil {
return err
}
}
}
return nil
}
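// Walk walks root breadth-first, lets the handler chunk every regular file, and aggregates the chunks, file attributes and backend description into a Result.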
func (walker *Walker) Walk(ctx context.Context, root string, handler Handler) (*Result, error) {
chunks := []Chunk{}
files := []FileAttribute{}
addFile := func(size int64, relativeTarget string) error {
_chunks, err := handler.Handle(ctx, File{
RelativePath: relativeTarget,
Size: size,
})
if err != nil {
return err
}
if len(_chunks) == 0 {
return nil
}
chunks = append(chunks, _chunks...)
lastFile := ""
for _, c := range _chunks {
cf := c.FilePath()
if cf != lastFile {
fa := FileAttribute{
BlobID: c.BlobDigest(),
BlobSize: c.BlobSize(),
BlobIndex: c.ObjectID(),
Chunk0CompressedOffset: c.ObjectOffset(),
ChunkSize: c.LimitChunkSize(),
RelativePath: cf,
Type: "external",
}
files = append(files, fa)
lastFile = cf
}
}
return nil
}
walkFiles := []func() error{}
if err := bfsWalk(root, func(path string, info fs.FileInfo) error {
target, err := filepath.Rel(root, path)
if err != nil {
return err
}
walkFiles = append(walkFiles, func() error {
return addFile(info.Size(), target)
})
return nil
}); err != nil {
return nil, errors.Wrap(err, "walk directory")
}
for i := 0; i < len(walkFiles); i++ {
if err := walkFiles[i](); err != nil {
return nil, errors.Wrap(err, "handle files")
}
}
// backend.json
bkd, err := handler.Backend(ctx)
if err != nil {
return nil, err
}
return &Result{
Chunks: chunks,
Files: files,
Backend: *bkd,
}, nil
}

View File

@ -0,0 +1,170 @@
package backend
import (
"context"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
// Helper function to create a temporary directory and files for testing
func setupTestDir(t *testing.T) (string, func()) {
// Create a temporary directory
tmpDir, err := os.MkdirTemp("", "bfsWalkTest")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
// Create test files and directories
err = os.Mkdir(filepath.Join(tmpDir, "dir"), 0755)
if err != nil {
t.Fatalf("failed to create dir: %v", err)
}
err = os.WriteFile(filepath.Join(tmpDir, "dir", "file1"), []byte("test content"), 0644)
if err != nil {
t.Fatalf("failed to create file1: %v", err)
}
err = os.Mkdir(filepath.Join(tmpDir, "dir", "subdir"), 0755)
if err != nil {
t.Fatalf("failed to create subdir: %v", err)
}
err = os.WriteFile(filepath.Join(tmpDir, "dir", "subdir", "file2"), []byte("test content"), 0644)
if err != nil {
t.Fatalf("failed to create file: %v", err)
}
// Cleanup function to remove the temporary directory
cleanup := func() {
err := os.RemoveAll(tmpDir)
if err != nil {
t.Fatalf("failed to cleanup temp dir: %v", err)
}
}
return tmpDir, cleanup
}
// TestBfsWalk tests the bfsWalk function with various cases.
func TestBfsWalk(t *testing.T) {
// Setup test directory
tmpDir, cleanup := setupTestDir(t)
defer cleanup()
t.Run("Invalid path", func(t *testing.T) {
err := bfsWalk(filepath.Join(tmpDir, "invalid_path"), func(string, os.FileInfo) error { return nil })
assert.Error(t, err)
})
t.Run("Single file", func(t *testing.T) {
called := false
err := bfsWalk(filepath.Join(tmpDir, "dir", "subdir"), func(path string, _ os.FileInfo) error {
called = true
assert.Equal(t, filepath.Join(tmpDir, "dir", "subdir", "file2"), path)
return nil
})
assert.NoError(t, err)
assert.True(t, called)
})
t.Run("Empty directory", func(t *testing.T) {
emptyDir := filepath.Join(tmpDir, "empty_dir")
err := os.Mkdir(emptyDir, 0755)
if err != nil {
t.Fatalf("failed to create empty_dir: %v", err)
}
called := false
err = bfsWalk(emptyDir, func(string, os.FileInfo) error {
called = true
return nil
})
assert.NoError(t, err)
assert.False(t, called)
})
t.Run("Directory with files and subdirectories", func(t *testing.T) {
var paths []string
err := bfsWalk(filepath.Join(tmpDir, "dir"), func(path string, _ os.FileInfo) error {
paths = append(paths, path)
return nil
})
assert.NoError(t, err)
expectedPaths := []string{
filepath.Join(tmpDir, "dir", "file1"),
filepath.Join(tmpDir, "dir", "subdir", "file2"),
}
assert.Equal(t, expectedPaths, paths)
})
}
type MockChunk struct {
ID uint32
Content interface{}
Offset uint64
Path string
ChunkSize string
Digest string
Size string
}
func (m *MockChunk) ObjectID() uint32 {
return m.ID
}
func (m *MockChunk) ObjectContent() interface{} {
return m.Content
}
func (m *MockChunk) ObjectOffset() uint64 {
return m.Offset
}
func (m *MockChunk) FilePath() string {
return m.Path
}
func (m *MockChunk) LimitChunkSize() string {
return m.ChunkSize
}
func (m *MockChunk) BlobDigest() string {
return m.Digest
}
func (m *MockChunk) BlobSize() string {
return m.Size
}
type MockHandler struct {
BackendFunc func(context.Context) (*Backend, error)
HandleFunc func(context.Context, File) ([]Chunk, error)
}
func (m MockHandler) Backend(ctx context.Context) (*Backend, error) {
if m.BackendFunc == nil {
return &Backend{}, nil
}
return m.BackendFunc(ctx)
}
func (m MockHandler) Handle(ctx context.Context, file File) ([]Chunk, error) {
if m.HandleFunc == nil {
return []Chunk{
&MockChunk{
Path: "test1",
},
&MockChunk{
Path: "test2",
},
}, nil
}
return m.HandleFunc(ctx, file)
}
func TestWalk(t *testing.T) {
walker := &Walker{}
handler := MockHandler{}
root := "/tmp/nydusify"
os.MkdirAll(root, 0755)
defer os.RemoveAll(root)
os.CreateTemp(root, "test")
_, err := walker.Walk(context.Background(), root, handler)
assert.NoError(t, err)
}

View File

@ -0,0 +1,138 @@
package external
import (
"context"
"encoding/json"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type Options struct {
Dir string
ContextDir string
Handler backend.Handler
RemoteHandler backend.RemoteHanlder
MetaOutput string
BackendOutput string
AttributesOutput string
}
type Attribute struct {
Pattern string
}
// Handle handles the directory and generates the backend meta and attributes.
func Handle(ctx context.Context, opts Options) error {
walker := backend.NewWalker()
backendRet, err := walker.Walk(ctx, opts.Dir, opts.Handler)
if err != nil {
return err
}
generators, err := NewGenerators(*backendRet)
if err != nil {
return err
}
ret, err := generators.Generate()
if err != nil {
return err
}
bkd := ret.Backend
attributes := buildAttr(ret)
if err := os.WriteFile(opts.MetaOutput, ret.Meta, 0644); err != nil {
return errors.Wrapf(err, "write meta to %s", opts.MetaOutput)
}
backendBytes, err := json.MarshalIndent(bkd, "", " ")
if err != nil {
return err
}
if err := os.WriteFile(opts.BackendOutput, backendBytes, 0644); err != nil {
return errors.Wrapf(err, "write backend json to %s", opts.BackendOutput)
}
logrus.Debugf("backend json: %s", backendBytes)
attributeContent := []string{}
for _, attribute := range attributes {
attributeContent = append(attributeContent, attribute.Pattern)
}
if err := os.WriteFile(opts.AttributesOutput, []byte(strings.Join(attributeContent, "\n")), 0644); err != nil {
return errors.Wrapf(err, "write attributes to %s", opts.AttributesOutput)
}
logrus.Debugf("attributes: %v", strings.Join(attributeContent, "\n"))
return nil
}
func buildAttr(ret *Result) []Attribute {
attributes := []Attribute{}
for _, file := range ret.Files {
p := fmt.Sprintf("/%s type=%s blob_index=%d blob_id=%s chunk_size=%s chunk_0_compressed_offset=%d compressed_size=%s",
file.RelativePath, file.Type, file.BlobIndex, file.BlobID, file.ChunkSize, file.Chunk0CompressedOffset, file.BlobSize)
attributes = append(attributes, Attribute{
Pattern: p,
})
}
return attributes
}
func RemoteHandle(ctx context.Context, opts Options) error {
bkd, fileAttrs, err := opts.RemoteHandler.Handle(ctx)
if err != nil {
return errors.Wrap(err, "handle modctl")
}
attributes := []Attribute{}
for _, file := range fileAttrs {
p := fmt.Sprintf("/%s type=%s file_size=%d blob_index=%d blob_id=%s chunk_size=%s chunk_0_compressed_offset=%d compressed_size=%s crcs=%s",
file.RelativePath, file.Type, file.FileSize, file.BlobIndex, file.BlobID, file.ChunkSize, file.Chunk0CompressedOffset, file.BlobSize, file.Crcs)
attributes = append(attributes, Attribute{
Pattern: p,
})
logrus.Debugf("file attr: %s, file_mode: %o", p, file.Mode)
}
backendBytes, err := json.MarshalIndent(bkd, "", " ")
if err != nil {
return err
}
if err := os.WriteFile(opts.BackendOutput, backendBytes, 0644); err != nil {
return errors.Wrapf(err, "write backend json to %s", opts.BackendOutput)
}
logrus.Debugf("backend json: %s", backendBytes)
attributeContent := []string{}
for _, attribute := range attributes {
attributeContent = append(attributeContent, attribute.Pattern)
}
if err := os.WriteFile(opts.AttributesOutput, []byte(strings.Join(attributeContent, "\n")), 0644); err != nil {
return errors.Wrapf(err, "write attributes to %s", opts.AttributesOutput)
}
logrus.Debugf("attributes: %v", strings.Join(attributeContent, "\n"))
// Build dummy files with empty content.
if err := buildEmptyFiles(fileAttrs, opts.ContextDir); err != nil {
return errors.Wrap(err, "build empty files")
}
return nil
}
func buildEmptyFiles(fileAttrs []backend.FileAttribute, contextDir string) error {
for _, fileAttr := range fileAttrs {
filePath := fmt.Sprintf("%s/%s", contextDir, fileAttr.RelativePath)
if err := os.MkdirAll(filepath.Dir(filePath), 0755); err != nil {
return errors.Wrapf(err, "create dir %s", filepath.Dir(filePath))
}
if err := os.WriteFile(filePath, []byte{}, os.FileMode(fileAttr.Mode)); err != nil {
return errors.Wrapf(err, "write file %s", filePath)
}
}
return nil
}
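Each line written to AttributesOutput follows the Sprintf pattern used in buildAttr above; a standalone sketch with made-up values illustrates the format of a single external-file entry:

package main

import "fmt"

func main() {
	// Hypothetical values, only to show the attribute line layout produced by buildAttr.
	line := fmt.Sprintf("/%s type=%s blob_index=%d blob_id=%s chunk_size=%s chunk_0_compressed_offset=%d compressed_size=%s",
		"models/weights.bin", "external", 0, "sha256:1234", "1MB", 0, "10MB")
	fmt.Println(line)
	// Prints:
	// /models/weights.bin type=external blob_index=0 blob_id=sha256:1234 chunk_size=1MB chunk_0_compressed_offset=0 compressed_size=10MB
}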

View File

@ -0,0 +1,153 @@
package external
import (
"context"
"os"
"path/filepath"
"testing"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/stretchr/testify/assert"
)
// Mock implementation for backend.Handler
type mockHandler struct {
backendFunc func(ctx context.Context) (*backend.Backend, error)
handleFunc func(ctx context.Context, file backend.File) ([]backend.Chunk, error)
}
func (m *mockHandler) Backend(ctx context.Context) (*backend.Backend, error) {
return m.backendFunc(ctx)
}
func (m *mockHandler) Handle(ctx context.Context, file backend.File) ([]backend.Chunk, error) {
return m.handleFunc(ctx, file)
}
// Mock implementation for backend.RemoteHandler
type mockRemoteHandler struct {
handleFunc func(ctx context.Context) (*backend.Backend, []backend.FileAttribute, error)
}
func (m *mockRemoteHandler) Handle(ctx context.Context) (*backend.Backend, []backend.FileAttribute, error) {
return m.handleFunc(ctx)
}
// TestHandle tests the Handle function.
func TestHandle(t *testing.T) {
tmpDir := t.TempDir()
metaOutput := filepath.Join(tmpDir, "meta.json")
backendOutput := filepath.Join(tmpDir, "backend.json")
attributesOutput := filepath.Join(tmpDir, "attributes.txt")
mockHandler := &mockHandler{
backendFunc: func(context.Context) (*backend.Backend, error) {
return &backend.Backend{Version: "mock"}, nil
},
handleFunc: func(context.Context, backend.File) ([]backend.Chunk, error) {
return []backend.Chunk{}, nil
},
}
opts := Options{
Dir: tmpDir,
MetaOutput: metaOutput,
BackendOutput: backendOutput,
AttributesOutput: attributesOutput,
Handler: mockHandler,
}
err := Handle(context.Background(), opts)
assert.NoError(t, err)
// Verify outputs
assert.FileExists(t, metaOutput)
assert.FileExists(t, backendOutput)
assert.FileExists(t, attributesOutput)
}
// TestRemoteHandle tests the RemoteHandle function.
func TestRemoteHandle(t *testing.T) {
tmpDir := t.TempDir()
contextDir := filepath.Join(tmpDir, "context")
backendOutput := filepath.Join(tmpDir, "backend.json")
attributesOutput := filepath.Join(tmpDir, "attributes.txt")
mockRemoteHandler := &mockRemoteHandler{
handleFunc: func(context.Context) (*backend.Backend, []backend.FileAttribute, error) {
return &backend.Backend{Version: "mock"},
[]backend.FileAttribute{
{
RelativePath: "testfile",
Type: "regular",
FileSize: 1024,
BlobIndex: 0,
BlobID: "blob1",
ChunkSize: "1MB",
Chunk0CompressedOffset: 0,
BlobSize: "10MB",
Mode: 0644,
},
}, nil
},
}
opts := Options{
ContextDir: contextDir,
BackendOutput: backendOutput,
AttributesOutput: attributesOutput,
RemoteHandler: mockRemoteHandler,
}
err := RemoteHandle(context.Background(), opts)
assert.NoError(t, err)
// Verify outputs
assert.FileExists(t, backendOutput)
assert.FileExists(t, attributesOutput)
assert.FileExists(t, filepath.Join(contextDir, "testfile"))
}
// TestBuildEmptyFiles tests the buildEmptyFiles function.
func TestBuildEmptyFiles(t *testing.T) {
tmpDir := t.TempDir()
fileAttrs := []backend.FileAttribute{
{
RelativePath: "dir1/file1",
Mode: 0644,
},
{
RelativePath: "dir2/file2",
Mode: 0755,
},
}
err := buildEmptyFiles(fileAttrs, tmpDir)
assert.NoError(t, err)
// Verify files are created
assert.FileExists(t, filepath.Join(tmpDir, "dir1", "file1"))
assert.FileExists(t, filepath.Join(tmpDir, "dir2", "file2"))
// Verify file modes
info, err := os.Stat(filepath.Join(tmpDir, "dir1", "file1"))
assert.NoError(t, err)
assert.Equal(t, os.FileMode(0644), info.Mode())
info, err = os.Stat(filepath.Join(tmpDir, "dir2", "file2"))
assert.NoError(t, err)
assert.Equal(t, os.FileMode(0755), info.Mode())
}
func TestBuildAttr(t *testing.T) {
ret := Result{
Files: []backend.FileAttribute{
{
RelativePath: "dir1/file1",
},
},
}
attrs := buildAttr(&ret)
assert.Equal(t, len(attrs), 1)
}

View File

@ -0,0 +1,150 @@
package external
import (
"bytes"
"encoding/binary"
"unsafe"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/pkg/errors"
"github.com/vmihailenco/msgpack/v5"
)
type Result struct {
Meta []byte
Backend backend.Backend
Files []backend.FileAttribute
}
type MetaGenerator struct {
backend.Header
backend.ChunkMeta
Chunks []backend.ChunkOndisk
backend.ObjectMeta
ObjectOffsets []backend.ObjectOffset
Objects []backend.ObjectOndisk
}
type Generator interface {
Generate() error
}
type Generators struct {
MetaGenerator
Backend backend.Backend
Files []backend.FileAttribute
}
func NewGenerators(ret backend.Result) (*Generators, error) {
objects := []backend.ObjectOndisk{}
chunks := []backend.ChunkOndisk{}
objectMap := make(map[uint32]uint32) // object id -> object index
for _, chunk := range ret.Chunks {
objectID := chunk.ObjectID()
objectIndex, ok := objectMap[objectID]
if !ok {
objectIndex = uint32(len(objects))
objectMap[objectID] = objectIndex
encoded, err := msgpack.Marshal(chunk.ObjectContent())
if err != nil {
return nil, errors.Wrap(err, "encode to msgpack format")
}
objects = append(objects, backend.ObjectOndisk{
EntrySize: uint32(len(encoded)),
EncodedData: encoded[:],
})
}
chunks = append(chunks, backend.ChunkOndisk{
ObjectIndex: objectIndex,
ObjectOffset: chunk.ObjectOffset(),
})
}
return &Generators{
MetaGenerator: MetaGenerator{
Chunks: chunks,
Objects: objects,
},
Backend: ret.Backend,
Files: ret.Files,
}, nil
}
func (generators *Generators) Generate() (*Result, error) {
meta, err := generators.MetaGenerator.Generate()
if err != nil {
return nil, errors.Wrap(err, "generate backend meta")
}
return &Result{
Meta: meta,
Backend: generators.Backend,
Files: generators.Files,
}, nil
}
func (generator *MetaGenerator) Generate() ([]byte, error) {
// prepare data
chunkMetaOffset := uint32(unsafe.Sizeof(generator.Header))
generator.ChunkMeta.EntryCount = uint32(len(generator.Chunks))
generator.ChunkMeta.EntrySize = uint32(unsafe.Sizeof(backend.ChunkOndisk{}))
objectMetaOffset := chunkMetaOffset + uint32(unsafe.Sizeof(generator.ChunkMeta)) + generator.ChunkMeta.EntryCount*generator.ChunkMeta.EntrySize
generator.Header = backend.Header{
Magic: backend.MetaMagic,
Version: backend.MetaVersion,
ChunkMetaOffset: chunkMetaOffset,
ObjectMetaOffset: objectMetaOffset,
}
generator.ObjectMeta.EntryCount = uint32(len(generator.Objects))
objectOffsets := []backend.ObjectOffset{}
objectOffset := backend.ObjectOffset(objectMetaOffset + uint32(unsafe.Sizeof(generator.ObjectMeta)) + 4*generator.ObjectMeta.EntryCount)
var lastEntrySize uint32
fixedEntrySize := true
for _, object := range generator.Objects {
if lastEntrySize > 0 && lastEntrySize != object.EntrySize {
fixedEntrySize = false
}
lastEntrySize = object.EntrySize
objectOffsets = append(objectOffsets, objectOffset)
objectOffset += backend.ObjectOffset(uint32(unsafe.Sizeof(object.EntrySize)) + uint32(len(object.EncodedData)))
}
if fixedEntrySize && len(generator.Objects) > 0 {
generator.ObjectMeta.EntrySize = generator.Objects[0].EntrySize
}
generator.ObjectOffsets = objectOffsets
// dump bytes
var buf bytes.Buffer
if err := binary.Write(&buf, binary.LittleEndian, generator.Header); err != nil {
return nil, errors.Wrap(err, "dump")
}
if err := binary.Write(&buf, binary.LittleEndian, generator.ChunkMeta); err != nil {
return nil, errors.Wrap(err, "dump")
}
for _, chunk := range generator.Chunks {
if err := binary.Write(&buf, binary.LittleEndian, chunk); err != nil {
return nil, errors.Wrap(err, "dump")
}
}
if err := binary.Write(&buf, binary.LittleEndian, generator.ObjectMeta); err != nil {
return nil, errors.Wrap(err, "dump")
}
for _, objectOffset := range generator.ObjectOffsets {
if err := binary.Write(&buf, binary.LittleEndian, objectOffset); err != nil {
return nil, errors.Wrap(err, "dump")
}
}
for _, object := range generator.Objects {
if err := binary.Write(&buf, binary.LittleEndian, object.EntrySize); err != nil {
return nil, errors.Wrap(err, "dump")
}
if err := binary.Write(&buf, binary.LittleEndian, object.EncodedData); err != nil {
return nil, errors.Wrap(err, "dump")
}
}
return buf.Bytes(), nil
}
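To make the framing concrete, a hypothetical reader can decode the header and chunk table back out of a blob produced by MetaGenerator.Generate (the struct definitions are repeated locally so the sketch is self-contained; the input file name is made up):

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"os"
)

type Header struct {
	Magic            uint32
	Version          uint32
	ChunkMetaOffset  uint32
	ObjectMetaOffset uint32
	Reserved2        [4080]byte
}

type ChunkMeta struct {
	EntryCount uint32
	EntrySize  uint32
	Reserved   [248]byte
}

type ChunkOndisk struct {
	ObjectIndex  uint32
	Reserved     [4]byte
	ObjectOffset uint64
}

func main() {
	// "backend.meta" is a placeholder path for a blob emitted by Generate().
	data, err := os.ReadFile("backend.meta")
	if err != nil {
		panic(err)
	}
	r := bytes.NewReader(data)

	var hdr Header
	if err := binary.Read(r, binary.LittleEndian, &hdr); err != nil {
		panic(err)
	}
	fmt.Printf("magic=%#x version=%d chunks@%d objects@%d\n",
		hdr.Magic, hdr.Version, hdr.ChunkMetaOffset, hdr.ObjectMetaOffset)

	// The chunk meta block and the chunk entries follow the header directly.
	var cm ChunkMeta
	if err := binary.Read(r, binary.LittleEndian, &cm); err != nil {
		panic(err)
	}
	chunks := make([]ChunkOndisk, cm.EntryCount)
	if err := binary.Read(r, binary.LittleEndian, chunks); err != nil {
		panic(err)
	}
	fmt.Printf("%d chunk entries, %d bytes each\n", cm.EntryCount, cm.EntrySize)
}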

View File

@ -0,0 +1,170 @@
package external
import (
"context"
"testing"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
)
type MockChunk struct {
mock.Mock
}
func (m *MockChunk) ObjectID() uint32 {
args := m.Called()
return args.Get(0).(uint32)
}
func (m *MockChunk) ObjectContent() interface{} {
args := m.Called()
return args.Get(0)
}
func (m *MockChunk) ObjectOffset() uint64 {
args := m.Called()
return args.Get(0).(uint64)
}
func (m *MockChunk) FilePath() string {
args := m.Called()
return args.String(0)
}
func (m *MockChunk) LimitChunkSize() string {
args := m.Called()
return args.String(0)
}
func (m *MockChunk) BlobDigest() string {
args := m.Called()
return args.String(0)
}
func (m *MockChunk) BlobSize() string {
args := m.Called()
return args.String(0)
}
type MockBackend struct {
mock.Mock
}
func (m *MockBackend) Backend(ctx context.Context) (*backend.Backend, error) {
args := m.Called(ctx)
return args.Get(0).(*backend.Backend), args.Error(1)
}
func TestNewGenerators(t *testing.T) {
t.Run("normal case", func(t *testing.T) {
chunk := &MockChunk{}
chunk.On("ObjectID").Return(uint32(1))
chunk.On("ObjectContent").Return("content")
chunk.On("ObjectOffset").Return(uint64(100))
chunk.On("BlobDigest").Return("digest")
chunk.On("BlobSize").Return("1024")
ret := backend.Result{
Chunks: []backend.Chunk{chunk},
Backend: backend.Backend{
Version: "1.0",
},
Files: []backend.FileAttribute{
{
RelativePath: "file1",
FileSize: 1024,
},
},
}
generators, err := NewGenerators(ret)
assert.NoError(t, err)
assert.Equal(t, 1, len(generators.MetaGenerator.Objects))
assert.Equal(t, 1, len(generators.MetaGenerator.Chunks))
assert.Equal(t, "1.0", generators.Backend.Version)
})
t.Run("empty input", func(t *testing.T) {
ret := backend.Result{
Chunks: []backend.Chunk{},
Backend: backend.Backend{},
Files: []backend.FileAttribute{},
}
generators, err := NewGenerators(ret)
assert.NoError(t, err)
assert.Equal(t, 0, len(generators.MetaGenerator.Objects))
assert.Equal(t, 0, len(generators.MetaGenerator.Chunks))
})
}
func TestGenerate(t *testing.T) {
t.Run("normal case", func(t *testing.T) {
generators := &Generators{
MetaGenerator: MetaGenerator{
Chunks: []backend.ChunkOndisk{
{
ObjectIndex: 0,
ObjectOffset: 100,
},
},
Objects: []backend.ObjectOndisk{
{
EntrySize: 10,
EncodedData: []byte("encoded"),
},
},
},
Backend: backend.Backend{
Version: "1.0",
},
Files: []backend.FileAttribute{
{
RelativePath: "file1",
FileSize: 1024,
},
},
}
result, err := generators.Generate()
assert.NoError(t, err)
assert.NotNil(t, result)
assert.Equal(t, "1.0", result.Backend.Version)
assert.Equal(t, 1, len(result.Files))
})
}
func TestMetaGeneratorGenerate(t *testing.T) {
t.Run("normal case", func(t *testing.T) {
generator := &MetaGenerator{
Chunks: []backend.ChunkOndisk{
{
ObjectIndex: 0,
ObjectOffset: 100,
},
},
Objects: []backend.ObjectOndisk{
{
EntrySize: 10,
EncodedData: []byte("encoded"),
},
},
}
data, err := generator.Generate()
assert.NoError(t, err)
assert.NotNil(t, data)
assert.Greater(t, len(data), 0)
})
t.Run("empty input", func(t *testing.T) {
generator := &MetaGenerator{}
data, err := generator.Generate()
assert.NoError(t, err)
assert.NotNil(t, data)
assert.Greater(t, len(data), 0)
})
}

View File

@ -2,11 +2,13 @@ package utils
import (
"encoding/base64"
+ "encoding/json"
"fmt"
"os"
"github.com/distribution/reference"
dockerconfig "github.com/docker/cli/cli/config"
+ "github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/pkg/errors"
)
@ -20,9 +22,12 @@ type RegistryBackendConfig struct {
}
type BackendProxyConfig struct {
- URL string `json:"url"`
- Fallback bool `json:"fallback"`
- PingURL string `json:"ping_url"`
+ CacheDir string `json:"cache_dir"`
+ URL string `json:"url"`
+ Fallback bool `json:"fallback"`
+ PingURL string `json:"ping_url"`
+ Timeout int `json:"timeout"`
+ ConnectTimeout int `json:"connect_timeout"`
}
func NewRegistryBackendConfig(parsed reference.Named, insecure bool) (RegistryBackendConfig, error) {
@ -55,3 +60,54 @@ func NewRegistryBackendConfig(parsed reference.Named, insecure bool) (RegistryBa
return backendConfig, nil
}
// The external backend configuration extracted from the manifest is missing the runtime configuration.
// Therefore, it is necessary to construct the runtime configuration using the available backend configuration.
func BuildRuntimeExternalBackendConfig(backendConfig, externalBackendConfigPath string) error {
extBkdCfg := backend.Backend{}
extBkdCfgBytes, err := os.ReadFile(externalBackendConfigPath)
if err != nil {
return errors.Wrap(err, "failed to read external backend config file")
}
if err := json.Unmarshal(extBkdCfgBytes, &extBkdCfg); err != nil {
return errors.Wrap(err, "failed to unmarshal external backend config file")
}
bkdCfg := RegistryBackendConfig{}
if err := json.Unmarshal([]byte(backendConfig), &bkdCfg); err != nil {
return errors.Wrap(err, "failed to unmarshal registry backend config file")
}
proxyURL := os.Getenv("NYDUS_EXTERNAL_PROXY_URL")
if proxyURL == "" {
proxyURL = bkdCfg.Proxy.URL
}
cacheDir := os.Getenv("NYDUS_EXTERNAL_PROXY_CACHE_DIR")
if cacheDir == "" {
cacheDir = bkdCfg.Proxy.CacheDir
}
extBkdCfg.Backends[0].Config = map[string]interface{}{
"scheme": bkdCfg.Scheme,
"host": bkdCfg.Host,
"repo": bkdCfg.Repo,
"auth": bkdCfg.Auth,
"timeout": 30,
"connect_timeout": 5,
"proxy": BackendProxyConfig{
CacheDir: cacheDir,
URL: proxyURL,
Fallback: true,
},
}
extBkdCfgBytes, err = json.MarshalIndent(extBkdCfg, "", " ")
if err != nil {
return errors.Wrap(err, "failed to marshal external backend config file")
}
if err = os.WriteFile(externalBackendConfigPath, extBkdCfgBytes, 0644); err != nil {
return errors.Wrap(err, "failed to write external backend config file")
}
return nil
}
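As a usage sketch (the paths, registry values, and JSON keys are assumptions based on the fields accessed above), the runtime external backend config could be rebuilt before launching nydusd like this; the two NYDUS_EXTERNAL_PROXY_* environment variables take precedence over the proxy settings carried in the registry backend config:

package main

import (
	"log"
	"os"

	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
)

func main() {
	// Hypothetical registry backend config; the target file must already contain
	// the external backend JSON extracted from the image (path is made up).
	registryCfg := `{"scheme":"https","host":"registry.example.com","repo":"library/app"}`
	os.Setenv("NYDUS_EXTERNAL_PROXY_URL", "http://127.0.0.1:65001")
	os.Setenv("NYDUS_EXTERNAL_PROXY_CACHE_DIR", "/var/cache/nydus-proxy")
	if err := utils.BuildRuntimeExternalBackendConfig(registryCfg, "/tmp/nydus_external_backend"); err != nil {
		log.Fatal(err)
	}
}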

View File

@ -0,0 +1,52 @@
package utils
import (
"encoding/json"
"os"
"testing"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestBuildExternalBackend(t *testing.T) {
bkdCfg := RegistryBackendConfig{
Host: "test.host",
}
bkdCfgBytes, err := json.Marshal(bkdCfg)
require.NoError(t, err)
oldExtCfg := backend.Backend{
Version: "test.ver",
Backends: []backend.Config{
{Type: "registry"},
},
}
t.Run("not exist", func(t *testing.T) {
err = BuildRuntimeExternalBackendConfig(string(bkdCfgBytes), "not-exist")
assert.Error(t, err)
})
t.Run("normal", func(t *testing.T) {
extFile, err := os.CreateTemp("/tmp", "external-backend-config")
require.NoError(t, err)
defer os.Remove(extFile.Name())
oldExtCfgBytes, err := json.Marshal(oldExtCfg)
require.NoError(t, err)
err = os.WriteFile(extFile.Name(), oldExtCfgBytes, 0644)
require.NoError(t, err)
err = BuildRuntimeExternalBackendConfig(string(bkdCfgBytes), extFile.Name())
require.NoError(t, err)
newExtCfg := backend.Backend{}
newExtCfgBytes, err := os.ReadFile(extFile.Name())
require.NoError(t, err)
require.NoError(t, json.Unmarshal(newExtCfgBytes, &newExtCfg))
assert.Equal(t, bkdCfg.Host, newExtCfg.Backends[0].Config["host"])
})
}

View File

@ -8,6 +8,7 @@ const (
ManifestOSFeatureNydus = "nydus.remoteimage.v1"
MediaTypeNydusBlob = "application/vnd.oci.image.layer.nydus.blob.v1"
BootstrapFileNameInLayer = "image/image.boot"
+ BackendFileNameInLayer = "image/backend.json"
ManifestNydusCache = "containerd.io/snapshot/nydus-cache"
@ -17,10 +18,12 @@ const (
LayerAnnotationNydusBootstrap = "containerd.io/snapshot/nydus-bootstrap"
LayerAnnotationNydusFsVersion = "containerd.io/snapshot/nydus-fs-version"
LayerAnnotationNydusSourceChainID = "containerd.io/snapshot/nydus-source-chainid"
+ LayerAnnotationNydusArtifactType = "containerd.io/snapshot/nydus-artifact-type"
LayerAnnotationNydusReferenceBlobIDs = "containerd.io/snapshot/nydus-reference-blob-ids"
LayerAnnotationUncompressed = "containerd.io/uncompressed"
LayerAnnotationNydusCommitBlobs = "containerd.io/snapshot/nydus-commit-blobs"
+ LayerAnnotationNyudsPrefetchBlob = "containerd.io/snapshot/nydus-separated-blob-with-prefetch-files"
)

View File

@ -6,17 +6,20 @@ package utils
import (
"archive/tar"
+ "context"
"encoding/json"
"fmt"
"io"
"net/http"
"os"
+ "path/filepath"
"runtime"
"strings"
"syscall"
"time"
"github.com/containerd/containerd/archive/compression"
+ "github.com/goharbor/acceleration-service/pkg/errdefs"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@ -27,9 +30,6 @@ import (
const SupportedOS = "linux"
const SupportedArch = runtime.GOARCH
- const defaultRetryAttempts = 3
- const defaultRetryInterval = time.Second * 2
const (
PlatformArchAMD64 string = "amd64"
PlatformArchARM64 string = "arm64"
@ -58,27 +58,85 @@ func GetNydusFsVersionOrDefault(annotations map[string]string, defaultVersion Fs
return defaultVersion
}
- func WithRetry(op func() error) error {
- var err error
- attempts := defaultRetryAttempts
- for attempts > 0 {
- attempts--
- if err != nil {
- if RetryWithHTTP(err) {
- return err
- }
- logrus.Warnf("Retry due to error: %s", err)
- time.Sleep(defaultRetryInterval)
- }
- if err = op(); err == nil {
- break
- }
- }
- return err
- }
+ // WithRetry retries the given function with the specified retry count and delay.
+ // If retryCount is 0, it will use the default value of 3.
+ // If retryDelay is 0, it will use the default value of 5 seconds.
+ func WithRetry(f func() error, retryCount int, retryDelay time.Duration) error {
+ const (
+ defaultRetryCount = 3
+ defaultRetryDelay = 5 * time.Second
+ )
+ if retryCount <= 0 {
+ retryCount = defaultRetryCount
+ }
+ if retryDelay <= 0 {
+ retryDelay = defaultRetryDelay
+ }
+ var lastErr error
+ for i := 0; i < retryCount; i++ {
+ if lastErr != nil {
+ if !RetryWithHTTP(lastErr) {
+ return lastErr
+ }
+ logrus.WithError(lastErr).
+ WithField("attempt", i+1).
+ WithField("total_attempts", retryCount).
+ WithField("retry_delay", retryDelay.String()).
+ Warn("Operation failed, will retry")
+ time.Sleep(retryDelay)
+ }
+ if err := f(); err != nil {
+ lastErr = err
+ continue
+ }
+ return nil
+ }
+ if lastErr != nil {
+ logrus.WithError(lastErr).
+ WithField("total_attempts", retryCount).
+ Error("Operation failed after all attempts")
+ }
+ return lastErr
+ }
+ func RetryWithAttempts(handle func() error, attempts int) error {
+ for {
+ attempts--
+ err := handle()
+ if err == nil {
+ return nil
+ }
+ if attempts > 0 && !errors.Is(err, context.Canceled) {
+ logrus.WithError(err).Warnf("retry (remain %d times)", attempts)
+ continue
+ }
+ return err
+ }
+ }
func RetryWithHTTP(err error) bool {
- return err != nil && (errors.Is(err, http.ErrSchemeMismatch) || errors.Is(err, syscall.ECONNREFUSED))
+ if err == nil {
+ return false
+ }
+ // Check for HTTP status code errors
+ if strings.Contains(err.Error(), "503 Service Unavailable") ||
+ strings.Contains(err.Error(), "502 Bad Gateway") ||
+ strings.Contains(err.Error(), "504 Gateway Timeout") ||
+ strings.Contains(err.Error(), "401 Unauthorized") {
+ return true
+ }
+ // Check for connection errors
+ return errors.Is(err, http.ErrSchemeMismatch) ||
+ errors.Is(err, syscall.ECONNREFUSED) ||
+ errdefs.NeedsRetryWithHTTP(err)
}
func MarshalToDesc(data interface{}, mediaType string) (*ocispec.Descriptor, []byte, error) {
@ -167,6 +225,45 @@ func UnpackFile(reader io.Reader, source, target string) error {
return nil
}
func UnpackFromTar(reader io.Reader, targetDir string) error {
if err := os.MkdirAll(targetDir, 0755); err != nil {
return err
}
tr := tar.NewReader(reader)
for {
header, err := tr.Next()
if err != nil {
if err == io.EOF {
break
}
return err
}
filePath := filepath.Join(targetDir, header.Name)
switch header.Typeflag {
case tar.TypeDir:
if err := os.MkdirAll(filePath, header.FileInfo().Mode()); err != nil {
return err
}
case tar.TypeReg:
f, err := os.OpenFile(filePath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, header.FileInfo().Mode())
if err != nil {
return err
}
defer f.Close()
if _, err := io.Copy(f, tr); err != nil {
return err
}
default:
}
}
return nil
}
func IsEmptyString(str string) bool {
return strings.TrimSpace(str) == ""
}
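A short usage sketch of the reworked retry helper (the endpoint and limits are made up); passing zero or negative values falls back to the defaults of 3 attempts and a 5-second delay, and errors that RetryWithHTTP does not classify as retryable are returned immediately:

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
)

func main() {
	// Probe a local registry, retrying up to 5 times with 2 seconds between attempts.
	err := utils.WithRetry(func() error {
		_, err := http.Get("http://127.0.0.1:5000/v2/")
		return err
	}, 5, 2*time.Second)
	fmt.Println("final error:", err)
}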

View File

@ -8,12 +8,15 @@ package utils
import (
"archive/tar"
"compress/gzip"
+ "context"
+ "fmt"
"io"
"net/http"
"os"
"strings"
"syscall"
"testing"
+ "time"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
@ -241,12 +244,14 @@ func TestWithRetry(t *testing.T) {
err := WithRetry(func() error {
_, err := http.Get("http://localhost:5000")
return err
- })
+ }, 3, 5*time.Second)
require.ErrorIs(t, err, syscall.ECONNREFUSED)
}
func TestRetryWithHTTP(t *testing.T) {
require.True(t, RetryWithHTTP(errors.Wrap(http.ErrSchemeMismatch, "parse Nydus image")))
+ require.True(t, RetryWithHTTP(fmt.Errorf("dial tcp 192.168.0.1:443: i/o timeout")))
+ require.True(t, RetryWithHTTP(fmt.Errorf("dial tcp 192.168.0.1:443: connect: connection refused")))
require.False(t, RetryWithHTTP(nil))
}
@ -270,3 +275,30 @@ func TestGetNydusFsVersionOrDefault(t *testing.T) {
fsVersion = GetNydusFsVersionOrDefault(testAnnotations, V5)
require.Equal(t, fsVersion, V5)
}
func TestRetryWithAttempts_SuccessOnFirstAttempt(t *testing.T) {
err := RetryWithAttempts(func() error {
return nil
}, 3)
require.NoError(t, err)
attempts := 0
err = RetryWithAttempts(func() error {
attempts++
if attempts == 1 {
return errors.New("first attempt failed")
}
return nil
}, 3)
require.NoError(t, err)
err = RetryWithAttempts(func() error {
return errors.New("always fails")
}, 3)
require.Error(t, err)
err = RetryWithAttempts(func() error {
return context.Canceled
}, 3)
require.Equal(t, context.Canceled, err)
}

View File

@ -6,6 +6,7 @@ import (
"os"
"os/signal"
"path/filepath"
+ "strings"
"syscall"
"github.com/pkg/errors"
@ -64,16 +65,17 @@ func New(opt Opt) (*FsViewer, error) {
mode := "cached"
nydusdConfig := tool.NydusdConfig{
EnablePrefetch: opt.Prefetch,
NydusdPath: opt.NydusdPath,
BackendType: opt.BackendType,
BackendConfig: opt.BackendConfig,
BootstrapPath: filepath.Join(opt.WorkDir, "nydus_bootstrap"),
+ ExternalBackendConfigPath: filepath.Join(opt.WorkDir, "nydus_external_backend"),
ConfigPath: filepath.Join(opt.WorkDir, "fs/nydusd_config.json"),
BlobCacheDir: filepath.Join(opt.WorkDir, "fs/nydus_blobs"),
MountPath: opt.MountPath,
APISockPath: filepath.Join(opt.WorkDir, "fs/nydus_api.sock"),
Mode: mode,
}
fsViewer := &FsViewer{ fsViewer := &FsViewer{
@ -109,22 +111,37 @@ func (fsViewer *FsViewer) PullBootstrap(ctx context.Context, targetParsed *parse
return errors.Wrap(err, "output Nydus config file")
}
- target := filepath.Join(fsViewer.WorkDir, "nydus_bootstrap")
+ target := fsViewer.NydusdConfig.BootstrapPath
logrus.Infof("Pulling Nydus bootstrap to %s", target)
- bootstrapReader, err := fsViewer.Parser.PullNydusBootstrap(ctx, targetParsed.NydusImage)
- if err != nil {
- return errors.Wrap(err, "failed to pull Nydus bootstrap layer")
- }
- defer bootstrapReader.Close()
- if err := utils.UnpackFile(bootstrapReader, utils.BootstrapFileNameInLayer, target); err != nil {
+ if err := fsViewer.getBootstrapFile(ctx, targetParsed.NydusImage, utils.BootstrapFileNameInLayer, target); err != nil {
return errors.Wrap(err, "failed to unpack Nydus bootstrap layer")
}
+ logrus.Infof("Pulling Nydus external backend to %s", target)
+ target = fsViewer.NydusdConfig.ExternalBackendConfigPath
+ if err := fsViewer.getBootstrapFile(ctx, targetParsed.NydusImage, utils.BackendFileNameInLayer, target); err != nil {
+ if !strings.Contains(err.Error(), "Not found") {
+ return errors.Wrap(err, "failed to unpack Nydus external backend layer")
+ }
+ }
}
return nil
}
func (fsViewer *FsViewer) getBootstrapFile(ctx context.Context, image *parser.Image, source, target string) error {
bootstrapReader, err := fsViewer.Parser.PullNydusBootstrap(ctx, image)
if err != nil {
return errors.Wrap(err, "failed to pull Nydus bootstrap layer")
}
defer bootstrapReader.Close()
if err := utils.UnpackFile(bootstrapReader, source, target); err != nil {
return errors.Wrap(err, "failed to unpack Nydus bootstrap layer")
}
return nil
}
// Mount nydus image.
func (fsViewer *FsViewer) MountImage() error {
logrus.Infof("Mounting Nydus image to %s", fsViewer.NydusdConfig.MountPath)
@ -175,6 +192,10 @@ func (fsViewer *FsViewer) view(ctx context.Context) error {
return errors.Wrap(err, "failed to pull Nydus image bootstrap")
}
+ if err = fsViewer.handleExternalBackendConfig(); err != nil {
+ return errors.Wrap(err, "failed to handle external backend config")
+ }
// Adjust nydusd parameters(DigestValidate) according to rafs format
nydusManifest := parser.FindNydusBootstrapDesc(&targetParsed.NydusImage.Manifest)
if nydusManifest != nil {
@ -211,3 +232,11 @@ func (fsViewer *FsViewer) view(ctx context.Context) error {
return nil
}
func (fsViewer *FsViewer) handleExternalBackendConfig() error {
extBkdCfgPath := fsViewer.NydusdConfig.ExternalBackendConfigPath
if _, err := os.Stat(extBkdCfgPath); os.IsNotExist(err) {
return nil
}
return utils.BuildRuntimeExternalBackendConfig(fsViewer.BackendConfig, extBkdCfgPath)
}

View File

@ -0,0 +1,151 @@
package viewer
import (
"bytes"
"context"
"encoding/json"
"errors"
"io"
"os"
"testing"
"github.com/agiledragon/gomonkey/v2"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/parser"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/provider"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/remote"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/snapshotter/external/backend"
"github.com/dragonflyoss/nydus/contrib/nydusify/pkg/utils"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewFsViewer(t *testing.T) {
var remoter = remote.Remote{}
defaultRemotePatches := gomonkey.ApplyFunc(provider.DefaultRemote, func(string, bool) (*remote.Remote, error) {
return &remoter, nil
})
defer defaultRemotePatches.Reset()
var targetParser = parser.Parser{}
parserNewPatches := gomonkey.ApplyFunc(parser.New, func(*remote.Remote, string) (*parser.Parser, error) {
return &targetParser, nil
})
defer parserNewPatches.Reset()
opt := Opt{
Target: "test",
}
fsViewer, err := New(opt)
assert.NoError(t, err)
assert.NotNil(t, fsViewer)
}
func TestPullBootstrap(t *testing.T) {
opt := Opt{
WorkDir: "/tmp/nydusify/fsviwer",
}
fsViwer := FsViewer{
Opt: opt,
}
os.MkdirAll(fsViwer.WorkDir, 0755)
defer os.RemoveAll(fsViwer.WorkDir)
targetParsed := &parser.Parsed{
NydusImage: &parser.Image{},
}
err := fsViwer.PullBootstrap(context.Background(), targetParsed)
assert.Error(t, err)
callCount := 0
getBootstrapPatches := gomonkey.ApplyPrivateMethod(&fsViwer, "getBootstrapFile", func(context.Context, *parser.Image, string, string) error {
if callCount == 0 {
callCount++
return nil
}
return errors.New("failed to pull Nydus bootstrap layer mock error")
})
defer getBootstrapPatches.Reset()
err = fsViwer.PullBootstrap(context.Background(), targetParsed)
assert.Error(t, err)
}
func TestGetBootstrapFile(t *testing.T) {
opt := Opt{
WorkDir: "/tmp/nydusify/fsviwer",
}
fsViwer := FsViewer{
Opt: opt,
Parser: &parser.Parser{},
}
t.Run("Run pull bootstrap failed", func(t *testing.T) {
pullNydusBootstrapPatches := gomonkey.ApplyMethod(fsViwer.Parser, "PullNydusBootstrap", func(*parser.Parser, context.Context, *parser.Image) (io.ReadCloser, error) {
return nil, errors.New("failed to pull Nydus bootstrap layer mock error")
})
defer pullNydusBootstrapPatches.Reset()
image := &parser.Image{}
err := fsViwer.getBootstrapFile(context.Background(), image, "", "")
assert.Error(t, err)
})
t.Run("Run unpack failed", func(t *testing.T) {
var buf bytes.Buffer
pullNydusBootstrapPatches := gomonkey.ApplyMethod(fsViwer.Parser, "PullNydusBootstrap", func(*parser.Parser, context.Context, *parser.Image) (io.ReadCloser, error) {
return io.NopCloser(&buf), nil
})
defer pullNydusBootstrapPatches.Reset()
image := &parser.Image{}
err := fsViwer.getBootstrapFile(context.Background(), image, "", "")
assert.Error(t, err)
})
t.Run("Run normal", func(t *testing.T) {
var buf bytes.Buffer
pullNydusBootstrapPatches := gomonkey.ApplyMethod(fsViwer.Parser, "PullNydusBootstrap", func(*parser.Parser, context.Context, *parser.Image) (io.ReadCloser, error) {
return io.NopCloser(&buf), nil
})
defer pullNydusBootstrapPatches.Reset()
unpackPatches := gomonkey.ApplyFunc(utils.UnpackFile, func(io.Reader, string, string) error {
return nil
})
defer unpackPatches.Reset()
image := &parser.Image{}
err := fsViwer.getBootstrapFile(context.Background(), image, "", "")
assert.NoError(t, err)
})
}
func TestHandleExternalBackendConfig(t *testing.T) {
backend := &backend.Backend{
Backends: []backend.Config{
{
Type: "registry",
},
},
}
bkdConfig, err := json.Marshal(backend)
require.NoError(t, err)
opt := Opt{
WorkDir: "/tmp/nydusify/fsviwer",
BackendConfig: string(bkdConfig),
}
fsViwer := FsViewer{
Opt: opt,
Parser: &parser.Parser{},
}
t.Run("Run not exist", func(t *testing.T) {
err := fsViwer.handleExternalBackendConfig()
assert.NoError(t, err)
})
t.Run("Run normal", func(t *testing.T) {
osStatPatches := gomonkey.ApplyFunc(os.Stat, func(string) (os.FileInfo, error) {
return nil, nil
})
defer osStatPatches.Reset()
buildExternalConfigPatches := gomonkey.ApplyFunc(utils.BuildRuntimeExternalBackendConfig, func(string, string) error {
return nil
})
defer buildExternalConfigPatches.Reset()
err := fsViwer.handleExternalBackendConfig()
assert.NoError(t, err)
})
}

View File

@ -17,6 +17,7 @@
# this list would mean the nix crate, as well as any of its exclusive
# dependencies not shared by any other crates, would be ignored, as the target
# list here is effectively saying which targets you are building for.
+ [graph]
targets = [
# The triple can be any string, but only the target triples built in to
# rustc (as of 1.40) can be checked against actual config expressions
@ -35,20 +36,12 @@ targets = [
db-path = "~/.cargo/advisory-db"
# The url(s) of the advisory databases to use
db-urls = ["https://github.com/rustsec/advisory-db"]
- # The lint level for security vulnerabilities
- vulnerability = "deny"
- # The lint level for unmaintained crates
- unmaintained = "warn"
# The lint level for crates that have been yanked from their source registry
yanked = "warn"
- # The lint level for crates with security notices. Note that as of
- # 2019-12-17 there are no security notice advisories in
- # https://github.com/rustsec/advisory-db
- notice = "warn"
# A list of advisory IDs to ignore. Note that ignored advisories will still
# output a note when they are encountered.
ignore = [
- { id = "RUSTSEC-2024-0357", reason = "openssl 0.10.55 can't build in riscv64 and ppc64le" },
+ { id = "RUSTSEC-2024-0436", reason = "No safe upgrade is available!" },
]
# Threshold for security vulnerabilities, any vulnerability with a CVSS score
# lower than the range specified will be ignored. Note that ignored advisories
@ -64,8 +57,6 @@ ignore = [
# More documentation for the licenses section can be found here:
# https://embarkstudios.github.io/cargo-deny/checks/licenses/cfg.html
[licenses]
- # The lint level for crates which do not have a detectable license
- unlicensed = "deny"
# List of explictly allowed licenses
# See https://spdx.org/licenses/ for list of possible licenses
# [possible values: any SPDX 3.11 short identifier (+ optional exception)].
@ -75,28 +66,8 @@ allow = [
"BSD-3-Clause",
"BSD-2-Clause",
"CC0-1.0",
- "Unicode-DFS-2016",
+ "Unicode-3.0",
]
- # List of explictly disallowed licenses
- # See https://spdx.org/licenses/ for list of possible licenses
- # [possible values: any SPDX 3.11 short identifier (+ optional exception)].
- deny = [
- #"Nokia",
- ]
- # Lint level for licenses considered copyleft
- copyleft = "deny"
- # Blanket approval or denial for OSI-approved or FSF Free/Libre licenses
- # * both - The license will be approved if it is both OSI-approved *AND* FSF
- # * either - The license will be approved if it is either OSI-approved *OR* FSF
- # * osi-only - The license will be approved if is OSI-approved *AND NOT* FSF
- # * fsf-only - The license will be approved if is FSF *AND NOT* OSI-approved
- # * neither - This predicate is ignored and the default lint level is used
- allow-osi-fsf-free = "neither"
- # Lint level used when no other predicates are matched
- # 1. License isn't in the allow or deny lists
- # 2. License isn't copyleft
- # 3. License isn't OSI/FSF, or allow-osi-fsf-free = "neither"
- default = "deny"
# The confidence threshold for detecting a license from license text.
# The higher the value, the more closely the license text must be to the
# canonical license text of a valid SPDX license file.

Some files were not shown because too many files have changed in this diff.